diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/about-the-api/about-the-api.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/about-the-api/about-the-api.md
new file mode 100644
index 00000000000..f1756eb6e5d
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/about-the-api/about-the-api.md
@@ -0,0 +1,93 @@
+---
+title: API
+---
+
+
+
+
+
+## 如何使用 API
+
+API 有自己的用户界面,你可以从 Web 浏览器访问它。通过该界面,你可以方便地查看资源、执行操作,并查看等效的 cURL 或 HTTP 请求和响应。要访问它:
+
+
+
+
+1. 单击右上角的用户头像。
+1. 单击**账号 & API 密钥**。
+1. 在 **API 密钥**下,找到 **API 端点**字段并单击链接。该链接类似于 `https://<RANCHER_FQDN>/v3`,其中 `<RANCHER_FQDN>` 是 Rancher 部署的完全限定域名。
+
+
+
+
+转到位于 `https://<RANCHER_FQDN>/v3` 的 URL 端点,其中 `<RANCHER_FQDN>` 是你的 Rancher 部署的完全限定域名。
+
+
+
+
+## 认证
+
+API 请求必须包含认证信息。认证是通过 [API 密钥](../user-settings/api-keys.md)使用 HTTP 基本认证完成的。API 密钥可以创建新集群,并通过 `/v3/clusters/` 访问多个集群。[集群和项目角色](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md)会应用于这些密钥,用于限制账号可以查看的集群和项目以及可以执行的操作。
+
+默认情况下,某些集群级别的 API 令牌是使用无限期 TTL(`ttl=0`)生成的。换言之,除非你让令牌失效,否则 `ttl=0` 的 API 令牌永远不会过期。有关如何使 API 令牌失效的详细信息,请参阅 [API 令牌](api-tokens.md)。
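上面描述的 HTTP 基本认证可以用一小段脚本演示:用户名为 API 密钥的 Access Key,密码为 Secret Key,两者以 `accesskey:secretkey` 形式经 Base64 编码后放入 `Authorization` 标头。以下是一个最小草图(其中的密钥值纯属虚构,并非真实凭证):

```python
import base64

def basic_auth_header(access_key: str, secret_key: str) -> dict:
    """按 HTTP 基本认证的规则构造 Authorization 标头:
    对 "accesskey:secretkey" 进行 Base64 编码。"""
    token = base64.b64encode(f"{access_key}:{secret_key}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# 示例密钥仅用于演示
headers = basic_auth_header("token-abcde", "someSecretValue")
print(headers["Authorization"])
```

实际请求时,将该标头附加到发往 `/v3` 端点的 HTTP 请求即可。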
+
+## 发出请求
+
+该 API 大体上是 RESTful 的,同时还提供了多项功能,使客户端能够发现 API 中所有内容的定义。因此,你可以编写通用客户端,而不必为每种资源编写特定代码。有关通用 API 规范的详细信息,请参阅[此处](https://github.com/rancher/api-spec/blob/master/specification.md)。
+
+- 每种类型都有一个 Schema,这个 Schema 描述了以下内容:
+ - 用于获取此类资源集合的 URL
+ - 资源可以具有的每个字段及其类型、基本验证规则、是必填还是可选字段等
+ - 在此类资源上可以执行的每个操作,以及它们的输入和输出(也作为 schema)
+ - 允许过滤的每个字段
+ - 集合本身或集合中的单个资源可以使用的 HTTP 操作方法
+
+
+- 因此,你只需加载 schema 列表即可了解 API 的所有信息。实际上,API 的 UI 就是这样工作的,它不包含任何特定于 Rancher 的代码。获取 Schema 的 URL 会在每个 HTTP 响应的 `X-Api-Schemas` 标头中返回。你可以按照每个 schema 上的 `collection` 链接了解在哪里列出资源,并通过返回资源中的其他 `links` 获取更多信息。
+
+- 在实践中,你可能只想构造 URL 字符串。我们强烈建议仅对顶层集合(`/v3/<type>`)或特定资源(`/v3/<type>/<id>`)这样做。除此之外的任何 URL 都可能在将来的版本中发生更改。
+
+- 资源之间相互关联,这种关联称为链接(links)。每个资源都包含一个 `links` 映射,其中是链接名称和用于检索该信息的 URL。同样,你应该 `GET` 资源并跟随 `links` 映射中的 URL,而不是自己构造这些字符串。
+
+- 大多数资源都有操作(action),用于执行某个动作或改变资源状态。要使用操作,请将 HTTP `POST` 请求发送到 `actions` 映射中对应操作的 URL。某些操作需要输入或会生成输出;具体信息请参阅每种类型的独立文档或 schema。
+
+- 要编辑资源,请将包含要更改字段的 HTTP `PUT` 请求发送到资源的 `links.update` 链接。如果没有该链接,则说明你无权更新该资源。未知字段和不可编辑的字段将被忽略。
+
+- 要删除资源,请将 HTTP `DELETE` 请求发送到资源的 `links.remove` 链接。如果没有该链接,则说明你无权删除该资源。
+
+- 要创建新资源,请将 HTTP `POST` 请求发送到 schema 中的集合 URL(即 `/v3/<type>`)。
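上面的约定可以用一小段脚本说明:与其自己拼接 URL,不如读取资源返回的 `links` 映射。下面的示例使用虚构的资源数据(域名、ID 与字段值均为假设),演示如何取出要跟随的链接,并把缺失的链接当作无权限处理:

```python
# 假设已通过 GET 获取到一个资源的 JSON 响应(示例数据为虚构)
resource = {
    "id": "c-abc123",
    "type": "cluster",
    "links": {
        "self": "https://rancher.example.com/v3/clusters/c-abc123",
        "update": "https://rancher.example.com/v3/clusters/c-abc123",
        "remove": "https://rancher.example.com/v3/clusters/c-abc123",
        "nodes": "https://rancher.example.com/v3/clusters/c-abc123/nodes",
    },
}

def follow(resource: dict, link: str) -> str:
    """返回 links 映射中的 URL;若链接不存在,说明无权执行对应操作。"""
    url = resource["links"].get(link)
    if url is None:
        raise PermissionError(f"当前账号无权执行 {link}")
    return url

print(follow(resource, "nodes"))
```

后续的 `PUT`、`DELETE` 等请求都应发往这样取出的 URL,而不是手工拼接的字符串。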
+
+## 过滤
+
+你可以在服务器端使用 HTTP 查询参数按公共字段过滤大多数集合。`filters` 映射显示了可以过滤的字段,以及你发起的请求所使用的过滤值。API UI 提供了设置过滤条件并显示相应请求的控件。对于简单的“等于”匹配,只需使用 `field=value`。你还可以在字段名后添加修饰符,例如 `field_gt=42` 表示“字段大于 42”。详情请参阅 [API 规范](https://github.com/rancher/api-spec/blob/master/specification.md#filtering)。
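按上述规则构造过滤查询串时,可以借助标准库的 `urlencode`。下面是一个假设的草图(字段名 `replicas` 及其 `_gt` 修饰符仅作演示,实际可过滤的字段以集合的 `filters` 映射为准):

```python
from urllib.parse import urlencode

# 简单的 "equals" 匹配与 "_gt" 修饰符组合成查询参数
params = {"name": "mycluster", "replicas_gt": 42}
query = urlencode(params)
url = f"/v3/pods?{query}"
print(url)  # → /v3/pods?name=mycluster&replicas_gt=42
```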
+
+## 排序
+
+你可以在服务器端使用 HTTP 查询参数按公共字段排序大多数集合。`sortLinks` 映射显示了可用的排序方式,以及用于获取按该方式排序的集合的 URL。如果指定了排序,它还会包含当前响应的排序依据信息。
+
+## 分页
+
+默认情况下,API 响应以每页 100 个资源的限制进行分页。你可以通过 `limit` 查询参数进行更改,最大为 1000,例如 `/v3/pods?limit=1000`。集合响应中的 `pagination` 映射能让你知道你是否拥有完整的结果集,如果没有,则会指向下一页的链接。
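处理分页时,可以循环跟随 `pagination` 映射中的下一页链接,直到没有为止。下面是一个离线草图,用两页虚构数据模拟集合响应(`marker` 参数与字段结构均为假设,实际以服务器返回的 `pagination` 映射为准):

```python
def fetch_all(first_url: str, get_page) -> list:
    """沿 pagination 中的下一页链接逐页收集资源,直到没有下一页为止。"""
    items, url = [], first_url
    while url:
        page = get_page(url)  # 实际场景中此处是一次 HTTP GET
        items.extend(page["data"])
        url = page.get("pagination", {}).get("next")
    return items

# 模拟的两页响应(数据为虚构)
pages = {
    "/v3/pods?limit=2": {"data": ["pod-a", "pod-b"],
                         "pagination": {"next": "/v3/pods?limit=2&marker=m1"}},
    "/v3/pods?limit=2&marker=m1": {"data": ["pod-c"], "pagination": {}},
}

print(fetch_all("/v3/pods?limit=2", pages.__getitem__))
# → ['pod-a', 'pod-b', 'pod-c']
```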
+
+## 捕获 Rancher API 调用
+
+你可以使用浏览器开发人员工具来捕获 Rancher API 的调用方式。例如,你可以按照以下步骤使用 Chrome 开发人员工具来获取用于配置 RKE 集群的 API 调用:
+
+1. 在 Rancher UI 中,转到**集群管理**并单击**创建**。
+1. 单击某个集群类型。此示例使用 Digital Ocean。
+1. 使用集群名称和节点模板填写表单,但不要单击**创建**。
+1. 在创建集群之前,你需要打开开发人员工具才能看到正在记录的 API 调用。要打开工具,右键单击 Rancher UI,然后单击**检查**。
+1. 在开发者工具中,单击 **Network** 选项卡。
+1. 在 **Network** 选项卡上,确保选择了 **Fetch/XHR**。
+1. 在 Rancher UI 中,单击**创建**。在开发者工具中,你应该会看到一个名为 `cluster?_replace=true` 的新网络请求。
+1. 右键单击 `cluster?_replace=true` 并单击**复制 > 复制为 cURL**。
+1. 将结果粘贴到文本编辑器中。你将能够看到 POST 请求,包括被发送到的 URL、所有标头以及请求的完整正文。此命令可用于从命令行创建集群。请注意,请求包含凭证,因此请将请求存储在安全的地方。
+
+### 启用在 API 中查看
+
+你还可以查看针对各个集群和资源捕获的 Rancher API 调用。此功能默认不启用。要启用它:
+
+1. 单击 UI 右上角的 **用户图标**,然后从下拉菜单中选择 **偏好设置**。
+1. 在**高级功能**部分下,单击**启用“在 API 中查看”**。
+
+选中后,**在 API 中查看**链接现在将显示在 UI 资源页面上的 **⋮** 子菜单下。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cli-with-rancher/cli-with-rancher.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cli-with-rancher/cli-with-rancher.md
new file mode 100644
index 00000000000..51f50ec2dca
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cli-with-rancher/cli-with-rancher.md
@@ -0,0 +1,9 @@
+---
+title: Rancher CLI
+---
+
+
+
+
+
+Rancher CLI 是一个命令行工具,用于在工作站中与 Rancher 进行交互。以下文档将介绍 [Rancher CLI](rancher-cli.md) 和 [kubectl 实用程序](kubectl-utility.md)。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cli-with-rancher/rancher-cli.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cli-with-rancher/rancher-cli.md
index 4cf44238004..8e85d8dfe22 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cli-with-rancher/rancher-cli.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/cli-with-rancher/rancher-cli.md
@@ -3,6 +3,10 @@ title: Rancher CLI
description: Rancher CLI 是一个命令行工具,用于在工作站中与 Rancher 进行交互。
---
+
+
+
+
Rancher CLI(命令行界面)是一个命令行工具,可用于与 Rancher 进行交互。使用此工具,你可以使用命令行而不用通过 GUI 来操作 Rancher。
### 下载 Rancher CLI
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/hardening-guides.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/hardening-guides.md
new file mode 100644
index 00000000000..6de7d34574f
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/hardening-guides.md
@@ -0,0 +1,55 @@
+---
+title: Rancher 自我评估和加固指南
+---
+
+
+
+
+
+Rancher 针对每个受支持的 Rancher 版本,为各 Kubernetes 发行版提供了特定的安全加固指南。
+
+## Rancher Kubernetes 发行版
+
+Rancher 使用以下 Kubernetes 发行版:
+
+- [**RKE**](https://rancher.com/docs/rke/latest/en/)(Rancher Kubernetes Engine)是经过 CNCF 认证的 Kubernetes 发行版,完全在 Docker 容器中运行。
+- [**RKE2**](https://docs.rke2.io/) 是一个完全合规的 Kubernetes 发行版,专注于安全和合规性。
+- [**K3s**](https://docs.k3s.io/) 是一个完全合规的轻量级 Kubernetes 发行版。它易于安装,内存需求只有上游 Kubernetes 的一半,所有组件都打包在一个小于 100 MB 的二进制文件中。
+
+要加固运行未列出的发行版的 Kubernetes 集群,请参阅 Kubernetes 提供商文档。
+
+## 加固指南和 Benchmark 版本
+
+每个自我评估指南都附有对应的加固指南。这些指南已与列出的 Rancher 版本一起测试。每个自我评估指南都针对特定的 Kubernetes 版本和 CIS Benchmark 版本进行了测试。如果 CIS Benchmark 尚未针对你的 Kubernetes 版本进行验证,你可以先使用现有指南,直到添加了适合你版本的指南。
+
+### RKE 指南
+
+| Kubernetes 版本 | CIS Benchmark 版本 | 自我评估指南 | 加固指南 |
+|--------------------|-----------------------|-----------------------|------------------|
+| Kubernetes v1.23 | CIS v1.23 | [链接](rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md) | [链接](rke1-hardening-guide/rke1-hardening-guide.md) |
+| Kubernetes v1.24 | CIS v1.24 | [链接](rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.24-k8s-v1.24.md) | [链接](rke1-hardening-guide/rke1-hardening-guide.md) |
+| Kubernetes v1.25/v1.26/v1.27 | CIS v1.7 | [链接](rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md) | [链接](rke1-hardening-guide/rke1-hardening-guide.md) |
+
+### RKE2 指南
+
+| 类型 | Kubernetes 版本 | CIS Benchmark 版本 | 自我评估指南 | 加固指南 |
+|------|--------------------|-----------------------|-----------------------|------------------|
+| Rancher provisioned RKE2 | Kubernetes v1.23 | CIS v1.23 | [链接](rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md) | [链接](rke2-hardening-guide/rke2-hardening-guide.md) |
+| Rancher provisioned RKE2 | Kubernetes v1.24 | CIS v1.24 | [链接](rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.24-k8s-v1.24.md) | [链接](rke2-hardening-guide/rke2-hardening-guide.md) |
+| Rancher provisioned RKE2 | Kubernetes v1.25/v1.26/v1.27 | CIS v1.7 | [链接](rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md) | [链接](rke2-hardening-guide/rke2-hardening-guide.md) |
+| Standalone RKE2 | Kubernetes v1.25/v1.26/v1.27 | CIS v1.7 | [链接](https://docs.rke2.io/security/cis_self_assessment123) | [链接](https://docs.rke2.io/security/hardening_guide) |
+
+### K3s 指南
+
+| 类型 | Kubernetes 版本 | CIS Benchmark 版本 | 自我评估指南 | 加固指南 |
+|------|--------------------|-----------------------|-----------------------|------------------|
+| Rancher provisioned K3s cluster | Kubernetes v1.23 | CIS v1.23 | [链接](k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md) | [链接](k3s-hardening-guide/k3s-hardening-guide.md) |
+| Rancher provisioned K3s cluster | Kubernetes v1.24 | CIS v1.24 | [链接](k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.24-k8s-v1.24.md) | [链接](k3s-hardening-guide/k3s-hardening-guide.md) |
+| Rancher provisioned K3s cluster | Kubernetes v1.25/v1.26/v1.27 | CIS v1.7 | [链接](k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md) | [链接](k3s-hardening-guide/k3s-hardening-guide.md) |
+| Standalone K3s | Kubernetes v1.22 up to v1.24 | CIS v1.23 | [链接](https://docs.k3s.io/security/self-assessment) | [链接](https://docs.k3s.io/security/hardening-guide) |
+
+## 在 SELinux 上使用 Rancher
+
+[Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) 是一个内核模块,为 Linux 添加了额外的访问控制和安全工具。SELinux 过去曾被政府机构使用,现在已成为行业标准。SELinux 在 RHEL 和 CentOS 上默认启用。
+
+要将 Rancher 与 SELinux 结合使用,我们建议[安装](../selinux-rpm/about-rancher-selinux.md) `rancher-selinux`。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-hardening-guide.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-hardening-guide.md
new file mode 100644
index 00000000000..d40edd47e96
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-hardening-guide.md
@@ -0,0 +1,744 @@
+---
+title: K3s 加固指南
+---
+
+
+
+
+
+本文档提供了具体指导,用于在使用 Rancher 进行部署之前,对生产环境的 K3s 集群进行加固配置。它概述了满足互联网安全中心(Center for Internet Security,CIS)Kubernetes Benchmark controls 所需的配置和控制。
+
+:::note
+这份加固指南描述了如何确保你集群中的节点安全。我们建议你在安装 Kubernetes 之前遵循本指南。
+:::
+
+此加固指南适用于 K3s 集群,并与以下版本的 CIS Kubernetes Benchmark、Kubernetes 和 Rancher 相关联:
+
+| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 |
+|-----------------|-----------------------|------------------------------|
+| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 |
+| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 |
+| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 至 v1.26 |
+
+:::note
+在 Benchmark v1.7 中,不再需要 `--protect-kernel-defaults` (4.2.6) 参数,并已被 CIS 删除。
+:::
+
+有关如何评估加固的 K3s 集群与官方 CIS benchmark 的更多细节,请参考特定 Kubernetes 和 CIS benchmark 版本的 K3s 自我评估指南。
+
+K3s 在不需要修改的情况下通过了许多 Kubernetes CIS controls,因为它默认应用了几个安全缓解措施。然而,有一些值得注意的例外情况,需要手动干预才能完全符合 CIS Benchmark 要求:
+
+1. K3s 不修改主机操作系统。任何主机级别的修改都需要手动完成。
+2. 某些 CIS policy controls,例如 `NetworkPolicies` 和 `PodSecurityStandards`(在 v1.24 及更早版本中为 `PodSecurityPolicies`),会限制集群功能。
+ 你必须选择让 K3s 配置这些策略。在你的命令行标志或配置文件中添加相应的选项(启用准入插件),并手动应用适当的策略。
+ 请参阅以下详细信息。
+
+CIS Benchmark 的第一部分(1.1)主要关注于 Pod manifest 的权限和所有权。由于发行版中的所有内容都打包在一个二进制文件中,因此这一部分不适用于 K3s 的核心组件。
+
+## 主机级别要求
+
+### 确保 `protect-kernel-defaults` 已设置
+
+
+
+
+自 CIS Benchmark v1.7 开始,不再需要 `protect-kernel-defaults`。
+
+
+
+
+这是一个 kubelet 标志,如果所需的内核参数未设置或设置为与 kubelet 的默认值不同的值,将导致 kubelet 退出。
+
+可以在 Rancher 的集群配置中设置 `protect-kernel-defaults` 标志。
+
+```yaml
+spec:
+ rkeConfig:
+ machineSelectorConfig:
+ - config:
+ protect-kernel-defaults: true
+```
+
+
+
+
+### 设置内核参数
+
+建议为集群中的所有节点类型设置以下 `sysctl` 配置。在 `/etc/sysctl.d/90-kubelet.conf` 中设置以下参数:
+
+```ini
+vm.panic_on_oom=0
+vm.overcommit_memory=1
+kernel.panic=10
+kernel.panic_on_oops=1
+```
+
+运行 `sudo sysctl -p /etc/sysctl.d/90-kubelet.conf` 以启用设置。
+
+此配置需要在设置 kubelet 标志之前完成,否则 K3s 将无法启动。
+
+## Kubernetes 运行时要求
+
+CIS Benchmark 的运行时要求主要围绕 Pod 安全(通过 PSP 或 PSA)、网络策略和 API 服务器审计日志展开。
+
+默认情况下,K3s 不包含任何 Pod 安全或网络策略。然而,K3s 附带一个控制器,可以强制执行你创建的任何网络策略。默认情况下,K3s 启用了 `PodSecurity` 和 `NodeRestriction` 等多个准入控制器。
+
+### Pod 安全
+
+
+
+
+K3s v1.25 及更新版本支持 [Pod 安全准入(PSA)](https://kubernetes.io/docs/concepts/security/pod-security-admission/),用于控制 Pod 安全性。
+
+你可以在 Rancher 中通过集群配置,设置 `defaultPodSecurityAdmissionConfigurationTemplateName` 字段来指定 PSA 配置:
+
+```yaml
+spec:
+ defaultPodSecurityAdmissionConfigurationTemplateName: rancher-restricted
+```
+
+Rancher 提供了 `rancher-restricted` 模板,用于强制执行高度限制性的 Kubernetes 上游 [`Restricted`](https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted) 配置文件,其中包含了 Pod 加固的最佳实践。
+
+
+
+
+K3s v1.24 及更早版本支持 [Pod 安全策略 (PSP)](https://v1-24.docs.kubernetes.io/docs/concepts/security/pod-security-policy/) 以控制 Pod 安全性。
+
+你可以在 Rancher 中通过集群配置,传递以下标志来启用 PSPs:
+
+```yaml
+spec:
+ rkeConfig:
+ machineGlobalConfig:
+ kube-apiserver-arg:
+ - enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount
+```
+
+这会保留 `NodeRestriction` 插件并启用 `PodSecurityPolicy`。
+
+启用 PSPs 后,你可以应用策略来满足 CIS Benchmark 第 5.2 节中描述的必要控制。
+
+:::note
+这些是 CIS Benchmark 中的手动检查。CIS 扫描结果将标记为 `warning`,因为需要集群操作员进行手动检查。
+:::
+
+以下是合规的 PSP 示例:
+
+```yaml
+---
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+ name: restricted-psp
+spec:
+ privileged: false # CIS - 5.2.1
+ allowPrivilegeEscalation: false # CIS - 5.2.5
+ requiredDropCapabilities: # CIS - 5.2.7/8/9
+ - ALL
+ volumes:
+ - 'configMap'
+ - 'emptyDir'
+ - 'projected'
+ - 'secret'
+ - 'downwardAPI'
+ - 'csi'
+ - 'persistentVolumeClaim'
+ - 'ephemeral'
+ hostNetwork: false # CIS - 5.2.4
+ hostIPC: false # CIS - 5.2.3
+ hostPID: false # CIS - 5.2.2
+ runAsUser:
+ rule: 'MustRunAsNonRoot' # CIS - 5.2.6
+ seLinux:
+ rule: 'RunAsAny'
+ supplementalGroups:
+ rule: 'MustRunAs'
+ ranges:
+ - min: 1
+ max: 65535
+ fsGroup:
+ rule: 'MustRunAs'
+ ranges:
+ - min: 1
+ max: 65535
+ readOnlyRootFilesystem: false
+```
+
+要使示例 PSP 生效,我们需要创建一个 `ClusterRole` 和一个 `ClusterRoleBinding`。我们还需要为需要额外权限的系统级 Pod 提供“系统无限制策略”,以及允许 ServiceLB 完整功能所需 sysctl 的额外策略。
+
+```yaml
+---
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+ name: restricted-psp
+spec:
+ privileged: false
+ allowPrivilegeEscalation: false
+ requiredDropCapabilities:
+ - ALL
+ volumes:
+ - 'configMap'
+ - 'emptyDir'
+ - 'projected'
+ - 'secret'
+ - 'downwardAPI'
+ - 'csi'
+ - 'persistentVolumeClaim'
+ - 'ephemeral'
+ hostNetwork: false
+ hostIPC: false
+ hostPID: false
+ runAsUser:
+ rule: 'MustRunAsNonRoot'
+ seLinux:
+ rule: 'RunAsAny'
+ supplementalGroups:
+ rule: 'MustRunAs'
+ ranges:
+ - min: 1
+ max: 65535
+ fsGroup:
+ rule: 'MustRunAs'
+ ranges:
+ - min: 1
+ max: 65535
+ readOnlyRootFilesystem: false
+---
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+ name: system-unrestricted-psp
+ annotations:
+ seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
+spec:
+ allowPrivilegeEscalation: true
+ allowedCapabilities:
+ - '*'
+ fsGroup:
+ rule: RunAsAny
+ hostIPC: true
+ hostNetwork: true
+ hostPID: true
+ hostPorts:
+ - max: 65535
+ min: 0
+ privileged: true
+ runAsUser:
+ rule: RunAsAny
+ seLinux:
+ rule: RunAsAny
+ supplementalGroups:
+ rule: RunAsAny
+ volumes:
+ - '*'
+---
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+ name: svclb-psp
+ annotations:
+ seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
+spec:
+ allowPrivilegeEscalation: false
+ allowedCapabilities:
+ - NET_ADMIN
+ allowedUnsafeSysctls:
+ - net.ipv4.ip_forward
+ - net.ipv6.conf.all.forwarding
+ fsGroup:
+ rule: RunAsAny
+ hostPorts:
+ - max: 65535
+ min: 0
+ runAsUser:
+ rule: RunAsAny
+ seLinux:
+ rule: RunAsAny
+ supplementalGroups:
+ rule: RunAsAny
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: psp:restricted-psp
+rules:
+- apiGroups:
+ - policy
+ resources:
+ - podsecuritypolicies
+ verbs:
+ - use
+ resourceNames:
+ - restricted-psp
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: psp:system-unrestricted-psp
+rules:
+- apiGroups:
+ - policy
+ resources:
+ - podsecuritypolicies
+ resourceNames:
+ - system-unrestricted-psp
+ verbs:
+ - use
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: psp:svclb-psp
+rules:
+- apiGroups:
+ - policy
+ resources:
+ - podsecuritypolicies
+ resourceNames:
+ - svclb-psp
+ verbs:
+ - use
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: psp:svc-local-path-provisioner-psp
+rules:
+- apiGroups:
+ - policy
+ resources:
+ - podsecuritypolicies
+ resourceNames:
+ - system-unrestricted-psp
+ verbs:
+ - use
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: psp:svc-coredns-psp
+rules:
+- apiGroups:
+ - policy
+ resources:
+ - podsecuritypolicies
+ resourceNames:
+ - system-unrestricted-psp
+ verbs:
+ - use
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: psp:svc-cis-operator-psp
+rules:
+- apiGroups:
+ - policy
+ resources:
+ - podsecuritypolicies
+ resourceNames:
+ - system-unrestricted-psp
+ verbs:
+ - use
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: default:restricted-psp
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: psp:restricted-psp
+subjects:
+- kind: Group
+ name: system:authenticated
+ apiGroup: rbac.authorization.k8s.io
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: system-unrestricted-node-psp-rolebinding
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: psp:system-unrestricted-psp
+subjects:
+- apiGroup: rbac.authorization.k8s.io
+ kind: Group
+ name: system:nodes
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: system-unrestricted-svc-acct-psp-rolebinding
+ namespace: kube-system
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: psp:system-unrestricted-psp
+subjects:
+- apiGroup: rbac.authorization.k8s.io
+ kind: Group
+ name: system:serviceaccounts
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: svclb-psp-rolebinding
+ namespace: kube-system
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: psp:svclb-psp
+subjects:
+- kind: ServiceAccount
+ name: svclb
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: svc-local-path-provisioner-psp-rolebinding
+ namespace: kube-system
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: psp:svc-local-path-provisioner-psp
+subjects:
+- kind: ServiceAccount
+ name: local-path-provisioner-service-account
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: svc-coredns-psp-rolebinding
+ namespace: kube-system
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: psp:svc-coredns-psp
+subjects:
+- kind: ServiceAccount
+ name: coredns
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: svc-cis-operator-psp-rolebinding
+ namespace: cis-operator-system
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: psp:svc-cis-operator-psp
+subjects:
+- kind: ServiceAccount
+ name: cis-operator-serviceaccount
+```
+
+上述策略可以放置在 `/var/lib/rancher/k3s/server/manifests` 目录下名为 `policy.yaml` 的文件中。在启动 K3s 之前,必须创建策略文件和其目录结构。建议限制访问权限以避免泄露潜在的敏感信息。
+
+```shell
+sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/manifests
+```
+
+:::note
+CNI、DNS 和 Ingress 等关键 Kubernetes 组件在 `kube-system` 命名空间中作为 Pod 运行。因此,这个命名空间的限制政策较少,从而使这些组件能够正常运行。
+:::
+
+
+
+
+### 网络策略
+
+CIS 要求所有命名空间应用网络策略,合理限制进入命名空间和 Pod 的流量。
+
+:::note
+这些是 CIS Benchmark 中的手动检查。CIS 扫描结果将标记为 `warning`,因为需要集群操作员进行手动检查。
+:::
+
+网络策略可以放置在 `/var/lib/rancher/k3s/server/manifests` 目录下的 `policy.yaml` 文件中。如果该目录不是作为 PSP(如上所述)的一部分创建的,则必须首先创建该目录。
+
+```shell
+sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/manifests
+```
+
+以下是合规的网络策略示例:
+
+```yaml
+---
+kind: NetworkPolicy
+apiVersion: networking.k8s.io/v1
+metadata:
+ name: intra-namespace
+ namespace: kube-system
+spec:
+ podSelector: {}
+ ingress:
+ - from:
+ - namespaceSelector:
+ matchLabels:
+ name: kube-system
+---
+kind: NetworkPolicy
+apiVersion: networking.k8s.io/v1
+metadata:
+ name: intra-namespace
+ namespace: default
+spec:
+ podSelector: {}
+ ingress:
+ - from:
+ - namespaceSelector:
+ matchLabels:
+ name: default
+---
+kind: NetworkPolicy
+apiVersion: networking.k8s.io/v1
+metadata:
+ name: intra-namespace
+ namespace: kube-public
+spec:
+ podSelector: {}
+ ingress:
+ - from:
+ - namespaceSelector:
+ matchLabels:
+ name: kube-public
+```
+
+除非特意允许,否则上述限制策略会阻止 DNS 流量。以下是允许 DNS 相关流量的网络策略示例:
+
+```yaml
+---
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: default-network-dns-policy
+ namespace: <NAMESPACE>
+spec:
+ ingress:
+ - ports:
+ - port: 53
+ protocol: TCP
+ - port: 53
+ protocol: UDP
+ podSelector:
+ matchLabels:
+ k8s-app: kube-dns
+ policyTypes:
+ - Ingress
+```
+
+默认情况下,metrics-server 和 Traefik Ingress 控制器的流量会被阻止,除非创建了允许访问的网络策略:
+
+```yaml
+---
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: allow-all-metrics-server
+ namespace: kube-system
+spec:
+ podSelector:
+ matchLabels:
+ k8s-app: metrics-server
+ ingress:
+ - {}
+ policyTypes:
+ - Ingress
+---
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: allow-all-svclbtraefik-ingress
+ namespace: kube-system
+spec:
+ podSelector:
+ matchLabels:
+ svccontroller.k3s.cattle.io/svcname: traefik
+ ingress:
+ - {}
+ policyTypes:
+ - Ingress
+---
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: allow-all-traefik-v121-ingress
+ namespace: kube-system
+spec:
+ podSelector:
+ matchLabels:
+ app.kubernetes.io/name: traefik
+ ingress:
+ - {}
+ policyTypes:
+ - Ingress
+```
+
+:::note
+你必须像平常一样管理你创建的任何其他命名空间的网络策略。
+:::
+
+### API server 审计配置
+
+CIS 要求 1.2.19 至 1.2.22 与配置 API server 审计日志相关。默认情况下,K3s 不会创建日志目录和审计策略,因为审计策略要求因用户和环境而异。
+
+如果你需要日志目录,则必须在启动 K3s 之前创建它。我们建议限制访问权限以避免泄露敏感信息。
+
+```bash
+sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/logs
+```
+
+以下是用于记录请求元数据的初始审计策略。应将策略写入到 `/var/lib/rancher/k3s/server` 目录下名为 `audit.yaml` 的文件中。有关 API server 的策略配置的详细信息,请参阅 [官方 Kubernetes 文档](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/)。
+
+```yaml
+---
+apiVersion: audit.k8s.io/v1
+kind: Policy
+rules:
+- level: Metadata
+```
+
+还需要进一步配置才能通过 CIS 检查。这些在 K3s 中默认不配置,因为它们根据你的环境和需求而有所不同:
+
+- 确保 `--audit-log-path` 参数已经设置。
+- 确保 `--audit-log-maxage` 参数设置为 30 或适当的值。
+- 确保 `--audit-log-maxbackup` 参数设置为 10 或适当的值。
+- 确保 `--audit-log-maxsize` 参数设置为 100 或适当的值。
+
+综合起来,要启用和配置审计日志,请将以下行添加到 Rancher 的 K3s 集群配置文件中:
+
+```yaml
+spec:
+ rkeConfig:
+ machineGlobalConfig:
+ kube-apiserver-arg:
+ - audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml # CIS 3.2.1
+ - audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log # CIS 1.2.18
+ - audit-log-maxage=30 # CIS 1.2.19
+ - audit-log-maxbackup=10 # CIS 1.2.20
+ - audit-log-maxsize=100 # CIS 1.2.21
+```
+
+### Controller Manager 要求
+
+CIS 要求 1.3.1 检查 Controller Manager 中的垃圾收集设置。垃圾收集对于确保有足够的资源可用、避免性能和可用性下降非常重要。请根据你的系统资源和测试结果,选择适当的阈值来触发垃圾收集。
+
+你可以在 Rancher 的 K3s 集群文件中设置以下配置来解决此问题。下面的值仅是一个示例,请根据当前环境设置适当的阈值。
+
+```yaml
+spec:
+ rkeConfig:
+ machineGlobalConfig:
+ kube-controller-manager-arg:
+ - terminated-pod-gc-threshold=10 # CIS 1.3.1
+```
+
+### 配置 `default` Service Account
+
+Kubernetes 提供了一个名为 `default` 的 service account,供集群工作负载使用,其中没有为 Pod 分配特定的 service account。当 Pod 需要从 Kubernetes API 获取访问权限时,应为该 Pod 创建一个特定的 service account,并为该 service account 授予权限。
+
+对于 CIS 5.1.5,`default` service account 应配置为不提供 service account 令牌,并且不具有任何明确的权限分配。
+
+可以通过在每个命名空间中将 `default` service account 的 `automountServiceAccountToken` 字段更新为 `false` 来解决此问题。
+
+对于内置命名空间(`kube-system`、`kube-public`、`kube-node-lease` 和 `default`)中的 `default` service accounts,K3s 不会自动执行此操作。
+
+将以下配置保存到名为 `account_update.yaml` 的文件中。
+
+```yaml
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: default
+automountServiceAccountToken: false
+```
+
+创建一个名为 `account_update.sh` 的 Bash 脚本文件。确保使用 `chmod +x account_update.sh` 给脚本添加可执行权限。
+
+```shell
+#!/bin/bash -e
+
+for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
+ kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
+done
+```
+
+每次向你的集群添加新的 service account 时,运行该脚本。
+
+## 加固版 K3s 模板配置参考
+
+Rancher 可使用以下参考模板配置,基于本指南中的每个 CIS control 创建加固的自定义 K3s 集群。此参考内容不包括其他必需的**集群配置**指令,它们因环境而异。
+
+
+
+
+```yaml
+apiVersion: provisioning.cattle.io/v1
+kind: Cluster
+metadata:
+ name: # 定义集群名称
+spec:
+ defaultPodSecurityAdmissionConfigurationTemplateName: rancher-restricted
+ enableNetworkPolicy: true
+ kubernetesVersion: # 定义 K3s 版本
+ rkeConfig:
+ machineGlobalConfig:
+ kube-apiserver-arg:
+ - enable-admission-plugins=NodeRestriction,ServiceAccount # CIS 1.2.15, 1.2.13
+ - audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml # CIS 3.2.1
+ - audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log # CIS 1.2.18
+ - audit-log-maxage=30 # CIS 1.2.19
+ - audit-log-maxbackup=10 # CIS 1.2.20
+ - audit-log-maxsize=100 # CIS 1.2.21
+ - request-timeout=300s # CIS 1.2.22
+ - service-account-lookup=true # CIS 1.2.24
+ kube-controller-manager-arg:
+ - terminated-pod-gc-threshold=10 # CIS 1.3.1
+ secrets-encryption: true
+ machineSelectorConfig:
+ - config:
+ kubelet-arg:
+ - make-iptables-util-chains=true # CIS 4.2.7
+```
+
+
+
+
+```yaml
+apiVersion: provisioning.cattle.io/v1
+kind: Cluster
+metadata:
+ name: # 定义集群名称
+spec:
+ enableNetworkPolicy: true
+ kubernetesVersion: # 定义 K3s 版本
+ rkeConfig:
+ machineGlobalConfig:
+ kube-apiserver-arg:
+ - enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount # CIS 1.2.15, 5.2, 1.2.13
+ - audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml # CIS 3.2.1
+ - audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log # CIS 1.2.18
+ - audit-log-maxage=30 # CIS 1.2.19
+ - audit-log-maxbackup=10 # CIS 1.2.20
+ - audit-log-maxsize=100 # CIS 1.2.21
+ - request-timeout=300s # CIS 1.2.22
+ - service-account-lookup=true # CIS 1.2.24
+ kube-controller-manager-arg:
+ - terminated-pod-gc-threshold=10 # CIS 1.3.1
+ secrets-encryption: true
+ machineSelectorConfig:
+ - config:
+ kubelet-arg:
+ - make-iptables-util-chains=true # CIS 4.2.7
+ protect-kernel-defaults: true # CIS 4.2.6
+```
+
+
+
+
+## 结论
+
+如果你按照本指南操作,由 Rancher 配置的自定义 K3s 集群将能通过 CIS Kubernetes Benchmark。你可以查看我们的 K3s 自我评估指南,了解我们是如何验证每个 benchmark 的,并在你自己的集群上执行相同的操作。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md
index ddf9809440a..5bfb97c99f7 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md
@@ -1,36 +1,40 @@
---
-title: K3s Self-Assessment Guide - CIS Benchmark v1.23 - K8s v1.23
+title: K3s 自我评估指南 - CIS Benchmark v1.23 - K8s v1.23
---
-This document is a companion to the [K3s Hardening Guide](../../../../pages-for-subheaders/k3s-hardening-guide.md), which provides prescriptive guidance on how to harden K3s clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
+
+
+
-This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
+本文档是 [K3s 加固指南](k3s-hardening-guide.md)的配套文档,该指南提供了关于如何加固正在生产环境中运行并由 Rancher 管理的 K3s 集群的指导方针。本 benchmark 指南可帮助你根据 CIS Kubernetes Benchmark 中的每个 control 来评估加固集群的安全性。
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
+本指南对应以下版本的 Rancher、CIS Benchmarks 和 Kubernetes:
+
+| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 |
|-----------------|-----------------------|--------------------|
| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 |
-This document is for Rancher operators, security teams, auditors and decision makers.
+本文档适用于 Rancher 运维人员、安全团队、审计员和决策者。
-For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.23. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
+有关每个 control 的更多信息,包括详细描述和未通过测试的补救措施,请参考 CIS Kubernetes Benchmark v1.23 的相应部分。你可以在[互联网安全中心 (CIS)](https://www.cisecurity.org/benchmark/kubernetes/)创建免费账户后下载 benchmark。
-## Testing Methodology
+## 测试方法
-Each control in the CIS Kubernetes Benchmark was evaluated against a K3s cluster that was configured according to the accompanying hardening guide.
+CIS Kubernetes Benchmark 中的每个 control 都针对按照附带的加固指南配置的 K3s 集群进行了评估。
-Where control audits differ from the original CIS benchmark, the audit commands specific to K3s are provided for testing.
+当 control 的审计方式与原始 CIS benchmark 不同时,会提供针对 K3s 的特定审计命令以供测试。
-These are the possible results for each control:
+以下是每个 control 可能的结果:
-- **Pass** - The K3s cluster passes the audit outlined in the benchmark.
-- **Not Applicable** - The control is not applicable to K3s because of how it is designed to operate. The remediation section explains why.
-- **Warn** - The control is manual in the CIS benchmark and it depends on the cluster's use-case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure K3s doesn't prevent their implementation, but no further configuration or auditing of the cluster has been performed.
+- **Pass(通过)** - K3s 集群通过了 benchmark 中概述的审计。
+- **Not Applicable(不适用)** - 由于 K3s 的设计方式,该 control 不适用于 K3s。在补救措施部分解释了原因。
+- **Warn(警告)** - 在 CIS benchmark 中,该 control 是手动的,它取决于集群的使用情况或其他必须由集群操作员确定的因素。这些 control 措施已经过评估,以确保 K3s 不会阻止其实施,但尚未对集群进行进一步的配置或审计。
-This guide makes the assumption that K3s is running as a Systemd unit. Your installation may vary. Adjust the "audit" commands to fit your scenario.
+本指南假设 K3s 作为 Systemd 单元运行。你的安装可能会有所不同。调整"审计"命令以适合你的场景。
:::note
-This guide only covers `automated` (previously called `scored`) tests.
+本指南仅涵盖 `automated`(之前称为 `scored`)测试。
:::
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.24.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.24.md
deleted file mode 100644
index 8944de04c27..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.24.md
+++ /dev/null
@@ -1,3148 +0,0 @@
----
-title: K3s Self-Assessment Guide - CIS Benchmark v1.23 - K8s v1.24
----
-
-This document is a companion to the [K3s Hardening Guide](../../../../pages-for-subheaders/k3s-hardening-guide.md), which provides prescriptive guidance on how to harden K3s clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
-
-This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
-
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
-|-----------------|-----------------------|--------------------|
-| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.24 |
-
-This document is for Rancher operators, security teams, auditors and decision makers.
-
-For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.23. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
-
-## Testing Methodology
-
-Each control in the CIS Kubernetes Benchmark was evaluated against a K3s cluster that was configured according to the accompanying hardening guide.
-
-Where control audits differ from the original CIS benchmark, the audit commands specific to K3s are provided for testing.
-
-These are the possible results for each control:
-
-- **Pass** - The K3s cluster passes the audit outlined in the benchmark.
-- **Not Applicable** - The control is not applicable to K3s because of how it is designed to operate. The remediation section explains why.
-- **Warn** - The control is manual in the CIS benchmark and it depends on the cluster's use-case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure K3s doesn't prevent their implementation, but no further configuration or auditing of the cluster has been performed.
-
-This guide makes the assumption that K3s is running as a Systemd unit. Your installation may vary. Adjust the "audit" commands to fit your scenario.
-
-:::note
-
-This guide only covers `automated` (previously called `scored`) tests.
-
-:::
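-
-The audit commands in this guide inspect the arguments that K3s passes to its embedded components. In a hardened cluster those arguments typically come from the K3s configuration file rather than the unit's command line; as a minimal illustrative sketch (flag names taken from the audited command lines in this guide, file path assumed to be the default K3s location):
-
-```yaml
-# /etc/rancher/k3s/config.yaml — illustrative fragment, not a complete hardening config
-kube-apiserver-arg:
-  - "anonymous-auth=false"           # checked by control 1.2.1
-  - "authorization-mode=Node,RBAC"   # checked by controls 1.2.7 - 1.2.9
-```
-
-See the accompanying hardening guide for the full set of recommended values.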
-
-### Controls
-
-## 1.1 Control Plane Node Configuration Files
-### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the
-control plane node.
-For example, chmod 644 /etc/kubernetes/manifests/kube-apiserver.yaml
-
-### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml
-
-### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644 /etc/kubernetes/manifests/kube-controller-manager.yaml
-
-### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
-
-### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644 /etc/kubernetes/manifests/kube-scheduler.yaml
-
-### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml
-
-### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 /etc/kubernetes/manifests/etcd.yaml
-
-### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root /etc/kubernetes/manifests/etcd.yaml
-
-### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644
-
-### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root
-
-### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above). For example,
-chmod 700 /var/lib/etcd
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 1.1.11
-```
-
-**Expected Result**:
-
-```console
-'700' is equal to '700'
-```
-
-**Returned Value**:
-
-```console
-700
-```
-
-### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above).
-For example, chown etcd:etcd /var/lib/etcd
-
-### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 /var/lib/rancher/k3s/server/cred/admin.kubeconfig
-
-### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/admin.conf
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 scheduler
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root scheduler
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 controllermanager
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/controller.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/controller.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root controllermanager
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/k3s/server/tls
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown -R root:root /etc/kubernetes/pki/
-
-**Audit:**
-
-```bash
-find /var/lib/rancher/k3s/server/tls | xargs stat -c %U:%G
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root
-```
-
-### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 644 /etc/kubernetes/pki/*.crt
-
-**Audit:**
-
-```bash
-stat -c %n %a /var/lib/rancher/k3s/server/tls/*.crt
-```
-
-### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 600 /etc/kubernetes/pki/*.key
-
-**Audit:**
-
-```bash
-stat -c %n %a /var/lib/rancher/k3s/server/tls/*.key
-```
-
-## 1.2 API Server
-### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---anonymous-auth=false
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth'
-```
-
-### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and configure alternate mechanisms for authentication. Then,
-edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the --token-auth-file= parameter.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--token-auth-file' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the `DenyServiceExternalIPs`
-from enabled admission plugins.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep containerd | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' is present OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 521 1 0 22:09 ? 00:00:00 /usr/bin/containerd root 802 1 0 22:09 ? 00:00:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock root 3970 1 0 22:13 ? 00:00:00 /var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin/containerd-shim-runc-v2 -namespace k8s.io -id 836d5508918bf22689e12f2885aeb9de5c2420beb956f6dc587d11b049e4edf4 -address /run/k3s/containerd/containerd.sock root 4088 1 0 22:13 ? 00:00:00 /var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin/containerd-shim-runc-v2 -namespace k8s.io -id 6148f976b57db791da86119c599700f683df838df56b1382d428ec38402caa64 -address /run/k3s/containerd/containerd.sock root 4109 1 0 22:13 ? 00:00:00 /var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin/containerd-shim-runc-v2 -namespace k8s.io -id b0e22b39e80606c4ad5d7fdafe21a771a27a5dce8f06155af579a9fad8158219 -address /run/k3s/containerd/containerd.sock root 5321 1 0 22:14 ? 00:00:00 /var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin/containerd-shim-runc-v2 -namespace k8s.io -id f60b96ef6beaf1b7903aba944af101a7d401082969ad8b5aea633bda1043b165 -address /run/k3s/containerd/containerd.sock root 5395 1 0 22:14 ? 00:00:00 /var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin/containerd-shim-runc-v2 -namespace k8s.io -id df001d7d85239f1cfdd8e041db801a6383bc18e5e10d0cd8e8e4e6ca9b7ce036 -address /run/k3s/containerd/containerd.sock root 6366 1 0 22:15 ? 00:00:00 /var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin/containerd-shim-runc-v2 -namespace k8s.io -id a900df9f736ca35b5dbd603034588a461d706ae6c83529136a81ec13e0ecedc0 -address /run/k3s/containerd/containerd.sock root 7859 1 0 22:15 ? 00:00:00 /var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin/containerd-shim-runc-v2 -namespace k8s.io -id b3c1510cafad33b31758db1b7c771efe98f0d395d5d52caf90db7bd379966299 -address /run/k3s/containerd/containerd.sock root 7896 1 0 22:15 ? 00:00:00 /var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin/containerd-shim-runc-v2 -namespace k8s.io -id e5720fc8f9e95f2fd8b8e87c93f25213315d78b630fc889b2a5cb8b18f2a1442 -address /run/k3s/containerd/containerd.sock root 9773 9761 4 22:22 ? 00:00:16 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd root 10275 1 0 22:22 ? 00:00:00 /var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin/containerd-shim-runc-v2 -namespace k8s.io -id 8b29d2205c5f7d47d0ecd4f95b7b96062454233e66450f9163058647e7b5c390 -address /run/k3s/containerd/containerd.sock root 11185 1 0 22:23 ? 00:00:00 /var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9581871a145dd1c0bbff06cddea0030a30de2760e90c5fa9154cccb250f05faa -address /run/k3s/containerd/containerd.sock root 13605 1 0 22:27 ? 00:00:00 /var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin/containerd-shim-runc-v2 -namespace k8s.io -id e05825dc2a33d576c35cc1b56610d3e4459d209524645f95f163c100f6e5df46 -address /run/k3s/containerd/containerd.sock root 13770 1 0 22:27 ? 00:00:00 /var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin/containerd-shim-runc-v2 -namespace k8s.io -id 74d10151f05f78b0d38deb2b1ee045bf7b72e7b879e6a15112c8495f98d230b8 -address /run/k3s/containerd/containerd.sock
-```
-
-### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the --kubelet-https parameter.
-
-### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the
-apiserver and kubelets. Then, edit API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
-kubelet client certificate and key parameters as below.
---kubelet-client-certificate=
---kubelet-client-key=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between
-the apiserver and kubelets. Then, edit the API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
---kubelet-certificate-authority=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-certificate-authority' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
-One such example could be as below.
---authorization-mode=RBAC
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes Node.
---authorization-mode=Node,RBAC
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'Node'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
-for example `--authorization-mode=Node,RBAC`.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'RBAC'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and set the desired limits in a configuration file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-and set the below parameters:
-`--enable-admission-plugins=...,EventRateLimit,...`
-`--admission-control-config-file=`
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'EventRateLimit'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
-value that does not include AlwaysAdmit.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-AlwaysPullImages, for example `--enable-admission-plugins=...,AlwaysPullImages,...`.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'AlwaysPullImages'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-SecurityContextDeny, for example `--enable-admission-plugins=...,SecurityContextDeny,...`,
-unless PodSecurityPolicy is already in place.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create ServiceAccount objects as per your environment.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
-value that does not include ServiceAccount.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'ServiceAccount'
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --disable-admission-plugins parameter to
-ensure it does not include NamespaceLifecycle.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.16 Ensure that the admission control plugin NodeRestriction is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure the NodeRestriction plug-in on kubelets.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to a
-value that includes NodeRestriction, for example `--enable-admission-plugins=...,NodeRestriction,...`.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'NodeRestriction'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.17 Ensure that the --secure-port argument is not set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and either remove the --secure-port parameter or
-set it to a different (non-zero) desired port.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'secure-port'
-```
-
-**Expected Result**:
-
-```console
-'--secure-port' is greater than 0 OR '--secure-port' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.18 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter: `--profiling=false`.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'profiling'
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.19 Ensure that the --audit-log-path argument is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-path parameter to a suitable path and
-file where you would like audit logs to be written, for example,
---audit-log-path=/var/log/apiserver/audit.log
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-path'
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-path' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.20 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxage parameter to 30
-or as an appropriate number of days, for example,
---audit-log-maxage=30
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxage'
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxage' is greater or equal to 30
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.21 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
-value. For example,
---audit-log-maxbackup=10
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxbackup'
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxbackup' is greater or equal to 10
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.22 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
-For example, to set it as 100 MB, --audit-log-maxsize=100
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxsize'
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxsize' is greater or equal to 100
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.23 Ensure that the --request-timeout argument is set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-and set the below parameter as appropriate and if needed.
-For example, --request-timeout=300s
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'request-timeout'
-```
-
-**Expected Result**:
-
-```console
-'--request-timeout' is not present OR '--request-timeout' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.24 Ensure that the --service-account-lookup argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---service-account-lookup=true
-Alternatively, you can delete the --service-account-lookup parameter from this file so
-that the default takes effect.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'service-account-lookup'
-```
-
-**Expected Result**:
-
-```console
-'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.25 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --service-account-key-file parameter
-to the public key file for service accounts. For example,
---service-account-key-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'service-account-key-file'
-```
-
-**Expected Result**:
-
-```console
-'--service-account-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.26 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate and key file parameters.
---etcd-certfile=
---etcd-keyfile=
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-    # If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 1.2.29
-```
-
-**Expected Result**:
-
-```console
-'--etcd-certfile' is present AND '--etcd-keyfile' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.27 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the TLS certificate and private key file parameters.
---tls-cert-file=
---tls-private-key-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep -A1 'Running kube-apiserver' | tail -n2
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
-```
-
-### 1.2.28 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the client certificate authority file.
---client-ca-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'client-ca-file'
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.29 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate authority file parameter.
---etcd-cafile=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-cafile'
-```
-
-**Expected Result**:
-
-```console
-'--etcd-cafile' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.30 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --encryption-provider-config parameter to the path of that file.
-For example, --encryption-provider-config=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'encryption-provider-config'
-```
-
-**Expected Result**:
-
-```console
-'--encryption-provider-config' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.31 Ensure that encryption providers are appropriately configured (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-In this file, choose aescbc, kms or secretbox as the encryption provider.
-
-**Audit:**
-
-```bash
-grep aescbc /path/to/encryption-config.json
-```
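As a hedged illustration of what this audit looks for, the sketch below greps a sample file shaped like a Kubernetes `EncryptionConfiguration`. The sample content is hypothetical; on a real K3s server you would instead grep the file referenced by the `--encryption-provider-config` flag shown in the returned values above (`/var/lib/rancher/k3s/server/cred/encryption-config.json`).

```bash
# Hypothetical sample mirroring the shape of an encryption config file;
# on a real node, grep the file named by --encryption-provider-config.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{"kind":"EncryptionConfiguration","apiVersion":"apiserver.config.k8s.io/v1",
 "resources":[{"resources":["secrets"],
   "providers":[{"aescbc":{"keys":[{"name":"key1","secret":"REDACTED"}]}},{"identity":{}}]}]}
EOF
# The check passes when one of the strong providers is configured.
PROVIDER=$(grep -oE 'aescbc|kms|secretbox' "$CONF" | head -n1)
echo "$PROVIDER"
```

The check is satisfied by any one of `aescbc`, `kms`, or `secretbox` appearing as a provider.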
-
-### 1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
-TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
-TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
-TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
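The remediation text above refers to kubeadm-style static pod manifests, which a K3s server does not have. A hedged sketch of the K3s equivalent: apiserver flags are typically passed through `kube-apiserver-arg` entries in the K3s server config file (commonly `/etc/rancher/k3s/config.yaml`; a scratch file is used here, and the two suites shown are a subset taken from the returned values below).

```bash
# Hedged sketch: supply the cipher list via the K3s server config file
# instead of a kube-apiserver.yaml manifest. Scratch path for illustration;
# the usual location is /etc/rancher/k3s/config.yaml.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
kube-apiserver-arg:
  - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
EOF
grep -c 'tls-cipher-suites' "$CONFIG"
```

A K3s service restart would be needed for such a change to take effect.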
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'tls-cipher-suites'
-```
-
-**Expected Result**:
-
-```console
-'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-## 1.3 Controller Manager
-### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
-for example, --terminated-pod-gc-threshold=10
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'terminated-pod-gc-threshold'
-```
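Since this check is a warn and K3s sets no GC threshold by default, a hedged sketch of how the flag could be supplied on K3s (which has no controller-manager manifest file): via a `kube-controller-manager-arg` entry in the server config file, commonly `/etc/rancher/k3s/config.yaml`. A scratch file stands in for that path here.

```bash
# Hedged sketch: pass the controller-manager flag through the K3s config
# file rather than a kube-controller-manager.yaml manifest.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
kube-controller-manager-arg:
  - "terminated-pod-gc-threshold=10"
EOF
grep -o 'terminated-pod-gc-threshold=10' "$CONFIG"
```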
-
-### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'profiling'
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
-```
-
-### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node to set the below parameter.
---use-service-account-credentials=true
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'use-service-account-credentials'
-```
-
-**Expected Result**:
-
-```console
-'--use-service-account-credentials' is not equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
-```
-
-### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --service-account-private-key-file parameter
-to the private key file for service accounts.
---service-account-private-key-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'service-account-private-key-file'
-```
-
-**Expected Result**:
-
-```console
-'--service-account-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
-```
-
-### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
---root-ca-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'root-ca-file'
-```
-
-**Expected Result**:
-
-```console
-'--root-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
-```
-
-### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
---feature-gates=RotateKubeletServerCertificate=true
-
-### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'bind-address'
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
-```
-
-## 1.4 Scheduler
-### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
-```
-
-### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'bind-address'
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
-```
-
-## 2 Etcd Node Configuration
-### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the etcd service documentation and configure TLS encryption.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
-on the master node and set the below parameters.
---cert-file=</path/to/ca-file>
---key-file=</path/to/key-file>
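-
-For reference, k3s generates this etcd config file itself. A sketch, using the certificate paths reported in the returned value below, of the section this check looks for:
-
-```yaml
-# /var/lib/rancher/k3s/server/db/etcd/config (generated by k3s; sketch only)
-client-transport-security:
-  cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt
-  key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
-  client-cert-auth: true
-```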
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.1
-```
-
-**Expected Result**:
-
-```console
-'cert-file' is present AND 'key-file' is present
-```
-
-**Returned Value**:
-
-```console
-cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
-```
-
-### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
-node and set the below parameter.
---client-cert-auth="true"
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.2
-```
-
-**Expected Result**:
-
-```console
-'--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-client-cert-auth: true
-```
-
-### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
-node and either remove the --auto-tls parameter or set it to false.
---auto-tls=false
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.3
-```
-
-**Expected Result**:
-
-```console
-'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-error: process ID list syntax error Usage: ps [options] Try 'ps --help ' or 'ps --help ' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory
-```
-
-### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the etcd service documentation and configure peer TLS encryption as appropriate
-for your etcd cluster.
-Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the
-master node and set the below parameters.
---peer-cert-file=</path/to/peer-cert-file>
---peer-key-file=</path/to/peer-key-file>
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.4
-```
-
-**Expected Result**:
-
-```console
-'cert-file' is present AND 'key-file' is present
-```
-
-**Returned Value**:
-
-```console
-cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
-```
-
-### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
-node and set the below parameter.
---peer-client-cert-auth=true
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.5
-```
-
-**Expected Result**:
-
-```console
-'--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-client-cert-auth: true
-```
-
-### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
-node and either remove the --peer-auto-tls parameter or set it to false.
---peer-auto-tls=false
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.6
-```
-
-**Expected Result**:
-
-```console
-'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-error: process ID list syntax error Usage: ps [options] Try 'ps --help ' or 'ps --help ' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory
-```
-
-### 2.7 Ensure that a unique Certificate Authority is used for etcd (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-[Manual test]
-Follow the etcd documentation and create a dedicated certificate authority setup for the
-etcd service.
-Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the
-master node and set the below parameter.
---trusted-ca-file=</path/to/ca-file>
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.7
-```
-
-**Expected Result**:
-
-```console
-'trusted-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt
-```
-
-## 3.1 Authentication and Authorization
-### 3.1.1 Client certificate authentication should not be used for users (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
-implemented in place of client certificates.
-
-## 3.2 Logging
-### 3.2.1 Ensure that a minimal audit policy is created (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Create an audit policy file for your cluster.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file'
-```
-
-### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the audit policy provided for the cluster and ensure that it covers
-at least the following areas:
-- Access to Secrets managed by the cluster. Care should be taken to only
- log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
- order to avoid risk of logging sensitive data.
-- Modification of Pod and Deployment objects.
-- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
- For most requests, minimally logging at the Metadata level is recommended
- (the most basic level of logging).
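-
-The areas above can be sketched as a policy file (hedged example; the file path passed to --audit-policy-file, such as /var/lib/rancher/k3s/server/audit.yaml seen in the returned values elsewhere in this report, varies by setup):
-
-```yaml
-# Minimal audit policy sketch covering the areas listed above.
-apiVersion: audit.k8s.io/v1
-kind: Policy
-rules:
-  # Metadata only for Secrets, ConfigMaps, and TokenReviews, so payloads are never logged.
-  - level: Metadata
-    resources:
-      - group: ""
-        resources: ["secrets", "configmaps"]
-      - group: "authentication.k8s.io"
-        resources: ["tokenreviews"]
-  # Record modifications to Pods and Deployments.
-  - level: RequestResponse
-    verbs: ["create", "update", "patch", "delete"]
-    resources:
-      - group: ""
-        resources: ["pods"]
-      - group: "apps"
-        resources: ["deployments"]
-  # Capture exec, port-forward, and proxy usage.
-  - level: Metadata
-    resources:
-      - group: ""
-        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
-  # Log everything else at the basic Metadata level.
-  - level: Metadata
-```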
-
-## 4.1 Worker Node Configuration Files
-### 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chmod 644 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-
-### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-
-### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
-
-**Audit:**
-
-```bash
-stat -c %a /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'permissions' is present OR '/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig' is not present
-```
-
-**Returned Value**:
-
-```console
-644
-```
-
-### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chown root:root /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present OR '/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig' is not present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /var/lib/rancher/k3s/server/cred/admin.kubeconfig
-
-**Audit:**
-
-```bash
-stat -c %a /var/lib/rancher/k3s/agent/kubelet.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'644' is equal to '644'
-```
-
-**Returned Value**:
-
-```console
-644
-```
-
-### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /var/lib/rancher/k3s/server/cred/admin.kubeconfig
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/k3s/agent/kubelet.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the file permissions of the
---client-ca-file.
-chmod 644 <filename>
-
-**Audit:**
-
-```bash
-stat -c %a /var/lib/rancher/k3s/server/tls/server-ca.crt
-```
-
-**Expected Result**:
-
-```console
-'644' is equal to '644' OR '640' is present OR '600' is present OR '444' is present OR '440' is present OR '400' is present OR '000' is present
-```
-
-**Returned Value**:
-
-```console
-644
-```
-
-### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the ownership of the --client-ca-file.
-chown root:root <filename>
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/k3s/server/tls/client-ca.crt
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chmod 644 /var/lib/kubelet/config.yaml
-
-### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chown root:root /var/lib/kubelet/config.yaml
-
-## 4.2 Kubelet
-### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
-`false`.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
-`--anonymous-auth=false`
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
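-
-Note that k3s embeds the kubelet, so the 10-kubeadm.conf drop-in above does not exist on a default install. A hedged sketch of the equivalent k3s-style configuration, using `kubelet-arg` in `/etc/rancher/k3s/config.yaml` and restarting the k3s service instead of kubelet.service:
-
-```yaml
-# /etc/rancher/k3s/config.yaml (sketch; apply with: systemctl restart k3s)
-kubelet-arg:
-  - "anonymous-auth=false"
-```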
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" | grep -v grep; else echo "--anonymous-auth=false"; fi'
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
---service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
-using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---authorization-mode=Webhook
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode" | grep -v grep; else echo "--authorization-mode=Webhook"; fi'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
-the location of the client CA file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---client-ca-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "client-ca-file" | grep -v grep; else echo "--client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt"; fi'
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:02 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:02Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 4.2.4 Ensure that the --read-only-port argument is set to 0 (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---read-only-port=0
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'read-only-port'
-```
-
-**Expected Result**:
-
-```console
-'--read-only-port' is equal to '0' OR '--read-only-port' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:03 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:03Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-2 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=995abdc2-f967-4bd6-936e-f3c26594573c --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
-```
-
-### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
-value other than 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---streaming-connection-idle-timeout=5m
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
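-
-The remediation above references kubeadm file paths, which do not exist on K3s. On K3s, kubelet flags are commonly passed through the K3s configuration file instead. A minimal sketch, assuming the default config path `/etc/rancher/k3s/config.yaml`:
-
-```yaml
-# /etc/rancher/k3s/config.yaml (sketch; path assumes a standard K3s install)
-kubelet-arg:
-  - "streaming-connection-idle-timeout=5m"
-```
-
-Restart the K3s service afterwards, for example with `systemctl restart k3s`.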
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'streaming-connection-idle-timeout'
-```
-
-### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---protect-kernel-defaults=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'protect-kernel-defaults'
-```
-
-**Expected Result**:
-
-```console
-'--protect-kernel-defaults' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:03 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:03Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-2 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=995abdc2-f967-4bd6-936e-f3c26594573c --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
-```
-
-### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove the --make-iptables-util-chains argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'make-iptables-util-chains'
-```
-
-**Expected Result**:
-
-```console
-'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:03 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:03Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-2 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=995abdc2-f967-4bd6-936e-f3c26594573c --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
-```
-
-### 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and remove the --hostname-override argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-### 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the `--event-qps` parameter in the KUBELET_SYSTEM_PODS_ARGS variable to an appropriate level.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC containerd
-```
-
-### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
-of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
-to the location of the corresponding private key file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
---tls-cert-file=
---tls-private-key-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 22:22:03 ip-172-31-25-2 k3s[9761]: time="2023-02-27T22:22:03Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-2 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=995abdc2-f967-4bd6-936e-f3c26594573c --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
-```
-
-### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to add the line `rotateCertificates: true` or
-remove it altogether to use the default value.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
-variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC containerd
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'{.rotateCertificates}' is present OR '{.rotateCertificates}' is not present
-```
-
-### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
---feature-gates=RotateKubeletServerCertificate=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-or to a subset of these values.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --tls-cipher-suites parameter as follows, or to a subset of these values.
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
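-
-If the kubelet is driven by a config file, the corresponding KubeletConfiguration field is spelled `tlsCipherSuites`. A hedged sketch using a subset of the suites listed above:
-
-```yaml
-apiVersion: kubelet.config.k8s.io/v1beta1
-kind: KubeletConfiguration
-tlsCipherSuites:
-  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
-  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
-  - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
-  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
-```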
-
-**Audit:**
-
-```bash
-/bin/ps -fC containerd
-```
-
-## 5.1 RBAC and Service Accounts
-### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
-if they need this role or if they could use a role with fewer privileges.
-Where possible, first bind users to a lower privileged role and then remove the
-clusterrolebinding to the cluster-admin role:
-kubectl delete clusterrolebinding [name]
-
-### 5.1.2 Minimize access to secrets (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove get, list and watch access to Secret objects in the cluster.
-
-### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible replace any use of wildcards in clusterroles and roles with specific
-objects or actions.
-
-### 5.1.4 Minimize access to create pods (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove create access to pod objects in the cluster.
-
-### 5.1.5 Ensure that default service accounts are not actively used. (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Create explicit service accounts wherever a Kubernetes workload requires specific access
-to the Kubernetes API server.
-Modify the configuration of each default service account to include this value
-automountServiceAccountToken: false
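-
-Applied to the `default` ServiceAccount of a namespace, the setting looks like this (the namespace name is illustrative):
-
-```yaml
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: default
-  namespace: example-app   # hypothetical namespace
-automountServiceAccountToken: false
-```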
-
-### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Modify the definition of pods and service accounts which do not need to mount service
-account tokens to disable it.
-
-### 5.1.7 Avoid use of system:masters group (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Remove the system:masters group from all users in the cluster.
-
-### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove the impersonate, bind and escalate rights from subjects.
-
-## 5.2 Pod Security Standards
-### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that either Pod Security Admission or an external policy control system is in place
-for every namespace which contains user workloads.
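-
-With Pod Security Admission, the policy mechanism can be enabled per namespace via labels. A sketch (namespace name and enforcement level are illustrative):
-
-```yaml
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: example-app   # hypothetical namespace
-  labels:
-    pod-security.kubernetes.io/enforce: restricted
-```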
-
-### 5.2.2 Minimize the admission of privileged containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of privileged containers.
-
-### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostPID` containers.
-
-### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostIPC` containers.
-
-### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostNetwork` containers.
-
-### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
-
-### 5.2.7 Minimize the admission of root containers (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
-or `MustRunAs` with a range of UIDs that does not include 0 is set.
-
-### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with the `NET_RAW` capability.
-
-### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that `allowedCapabilities` is not present in policies for the cluster unless
-it is set to an empty array.
-
-### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the use of capabilities in applications running on your cluster. Where a namespace
-contains applications which do not require any Linux capabilities to operate, consider adding
-a PSP which forbids the admission of containers which do not drop all capabilities.
-
-### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
-
-### 5.2.12 Minimize the admission of HostPath volumes (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `hostPath` volumes.
-
-### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers which use `hostPort` sections.
-
-## 5.3 Network Policies and CNI
-### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If the CNI plugin in use does not support network policies, consideration should be given to
-making use of a different plugin, or finding an alternate mechanism for restricting traffic
-in the Kubernetes cluster.
-
-### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create NetworkPolicy objects as you need them.
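-
-A common starting point is a default-deny ingress policy per namespace, which selects all pods and allows nothing until more specific policies are added. A sketch (the namespace name is illustrative):
-
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: default-deny-ingress
-  namespace: example-app   # hypothetical namespace
-spec:
-  podSelector: {}          # selects every pod in the namespace
-  policyTypes:
-    - Ingress              # no ingress rules defined, so all ingress is denied
-```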
-
-## 5.4 Secrets Management
-### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If possible, rewrite application code to read Secrets from mounted secret files, rather than
-from environment variables.
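-
-Mounting a Secret as a file keeps its value out of container environment variables, which are easy to leak via logs or introspection. A sketch (all names are illustrative):
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: example-app               # hypothetical pod
-spec:
-  containers:
-    - name: app
-      image: example/app:latest   # hypothetical image
-      volumeMounts:
-        - name: creds
-          mountPath: /etc/creds
-          readOnly: true
-  volumes:
-    - name: creds
-      secret:
-        secretName: app-credentials   # hypothetical Secret
-```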
-
-### 5.4.2 Consider external secret storage (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Refer to the Secrets management options offered by your cloud provider or a third-party
-secrets management solution.
-
-## 5.5 Extensible Admission Control
-### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and setup image provenance.
-
-## 5.7 General Policies
-### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create namespaces for objects in your deployment as you need
-them.
-
-### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
-For example:
-securityContext:
-  seccompProfile:
-    type: RuntimeDefault
-
-### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
-suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
-Containers.
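-
-A container-level `securityContext` that covers several common recommendations might look like this (a sketch, not a complete policy):
-
-```yaml
-securityContext:
-  runAsNonRoot: true
-  allowPrivilegeEscalation: false
-  readOnlyRootFilesystem: true
-  capabilities:
-    drop:
-      - ALL
-```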
-
-### 5.7.4 The default namespace should not be used (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
-resources and that all new resources are created in a specific namespace.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.25.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.25.md
deleted file mode 100644
index 1f5b7dfd37b..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.25.md
+++ /dev/null
@@ -1,3148 +0,0 @@
----
-title: K3s Self-Assessment Guide - CIS Benchmark v1.23 - K8s v1.25
----
-
-This document is a companion to the [K3s Hardening Guide](../../../../pages-for-subheaders/k3s-hardening-guide.md), which provides prescriptive guidance on how to harden K3s clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
-
-This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
-
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
-|-----------------|-----------------------|--------------------|
-| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.25 |
-
-This document is for Rancher operators, security teams, auditors and decision makers.
-
-For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.23. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
-
-## Testing Methodology
-
-Each control in the CIS Kubernetes Benchmark was evaluated against a K3s cluster that was configured according to the accompanying hardening guide.
-
-Where control audits differ from the original CIS benchmark, the audit commands specific to K3s are provided for testing.
-
-These are the possible results for each control:
-
-- **Pass** - The K3s cluster passes the audit outlined in the benchmark.
-- **Not Applicable** - The control is not applicable to K3s because of how it is designed to operate. The remediation section explains why.
-- **Warn** - The control is manual in the CIS benchmark and it depends on the cluster's use-case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure K3s doesn't prevent their implementation, but no further configuration or auditing of the cluster has been performed.
-
-This guide assumes that K3s is running as a systemd unit. Your installation may vary; adjust the audit commands to fit your scenario.
-
-:::note
-
-This guide only covers `automated` (previously called `scored`) tests.
-
-:::
-
-### Controls
-
-## 1.1 Control Plane Node Configuration Files
-### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the
-control plane node.
-For example, chmod 644 /etc/kubernetes/manifests/kube-apiserver.yaml
-
-### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml
-
-### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644 /etc/kubernetes/manifests/kube-controller-manager.yaml
-
-### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
-
-### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644 /etc/kubernetes/manifests/kube-scheduler.yaml
-
-### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml
-
-### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 /etc/kubernetes/manifests/etcd.yaml
-
-### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root /etc/kubernetes/manifests/etcd.yaml
-
-### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644
-
-### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root
-
-### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above). For example,
-chmod 700 /var/lib/etcd
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 1.1.11
-```
-
-**Expected Result**:
-
-```console
-'700' is equal to '700'
-```
-
-**Returned Value**:
-
-```console
-700
-```
-
-### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above).
-For example, chown etcd:etcd /var/lib/etcd
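
Controls 1.1.11 and 1.1.12 share the same discovery step. A minimal sketch of automating it; the parsing assumes the `--data-dir=PATH` spelling, so a space-separated `--data-dir PATH` would need different handling:

```bash
#!/bin/bash
# etcd_data_dir CMDLINE: print the value of --data-dir from an etcd
# command line, or nothing if the flag is absent.
etcd_data_dir() {
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n 's/^--data-dir=//p' | head -n1
}

# In practice the command line comes from the process table:
dir=$(etcd_data_dir "$(ps -ef | grep '[e]tcd' | head -n1)")
if [ -n "$dir" ]; then
  chmod 700 "$dir"        # control 1.1.11
  chown etcd:etcd "$dir"  # control 1.1.12
fi
```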
-
-### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 /var/lib/rancher/k3s/server/cred/admin.kubeconfig
-
-### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/admin.conf
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 scheduler
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root scheduler
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 controllermanager
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/controller.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/controller.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root controllermanager
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/k3s/server/tls
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown -R root:root /etc/kubernetes/pki/
-
-**Audit:**
-
-```bash
-find /var/lib/rancher/k3s/server/tls | xargs stat -c %U:%G
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root
-```
-
-### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 644 /etc/kubernetes/pki/*.crt
-
-**Audit:**
-
-```bash
-stat -c %n %a /var/lib/rancher/k3s/server/tls/*.crt
-```
-
-### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 600 /etc/kubernetes/pki/*.key
-
-**Audit:**
-
-```bash
-stat -c %n %a /var/lib/rancher/k3s/server/tls/*.key
-```
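
Controls 1.1.19 through 1.1.21 can be checked in one pass. A sketch that lists only offending files; `-perm /MODE` is GNU find syntax, and the K3s TLS directory is taken from the audits above:

```bash
#!/bin/bash
# loose_pki_files DIR: print certificates looser than 644 and keys
# looser than 600 (no output means the checks pass).
loose_pki_files() {
  find "$1" -name '*.crt' -perm /133   # any execute bit, or group/other write
  find "$1" -name '*.key' -perm /077   # any group/other access at all
}

# Suppress errors and keep exit status 0 if the directory is absent.
loose_pki_files /var/lib/rancher/k3s/server/tls 2>/dev/null || true
```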
-
-## 1.2 API Server
-### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---anonymous-auth=false
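
The remediation above refers to the kubeadm manifest path; K3s has no static apiserver pod. On K3s the flag is normally passed through the server configuration instead. The config path and `kube-apiserver-arg` key below follow K3s conventions, but treat the exact mechanics as an assumption for your version, and restart the k3s service after editing:

```bash
#!/bin/bash
# set_apiserver_flag CONFIG FLAG: append a kube-apiserver flag entry to a
# k3s config file (illustrative; a real file may already have the key).
set_apiserver_flag() {
  printf 'kube-apiserver-arg:\n  - "%s"\n' "$2" >> "$1"
}

# Assumed default k3s server config location.
if [ -w /etc/rancher/k3s/config.yaml ]; then
  set_apiserver_flag /etc/rancher/k3s/config.yaml anonymous-auth=false
fi
```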
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth'
-```
-
-### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and configure alternate mechanisms for authentication. Then,
-edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the --token-auth-file= parameter.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--token-auth-file' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
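
Rather than eyeballing the full journald line, a flag can be extracted directly. A small sketch; the parsing assumes space-separated `--flag=value` tokens, as in the log line above:

```bash
#!/bin/bash
# flag_value LINE FLAG: print the value of --FLAG=... from an apiserver
# log line; prints nothing when the flag is absent.
flag_value() {
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n "s/^--$2=//p"
}

line=$(journalctl -D /var/log/journal -u k3s 2>/dev/null | grep 'Running kube-apiserver' | tail -n1)
flag_value "$line" token-auth-file   # empty output: the flag is not set (pass)
```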
-
-### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the `DenyServiceExternalIPs`
-from enabled admission plugins.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep containerd | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' is present OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 519 1 0 22:09 ? 00:00:00 /usr/bin/containerd root 801 1 0 22:09 ? 00:00:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock root 3864 1 0 22:31 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id d00174abbc275f6bb85c7f0be1d3154b9c91982a10b9dba6b5cb280f4d4c531d -address /run/k3s/containerd/containerd.sock root 4105 1 0 22:31 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7c2b546b4d2380bcb51278661f34cff94fad2ba06978e13f8f1b92dafcc89d43 -address /run/k3s/containerd/containerd.sock root 4206 1 0 22:31 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 68d8a55ff4663985be004608dbf78b0362f5522e18490c81d4c8dc9963de1556 -address /run/k3s/containerd/containerd.sock root 5374 1 0 22:31 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id ca0ae9e0b37dfd7b1ce05f72e1bc5a1be8f5cb08f2b4543081536de3bdbc925d -address /run/k3s/containerd/containerd.sock root 5443 1 0 22:31 ? 00:00:01 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3ea3c1cdbbd5adb8efd5c67a46aadd0fca9918dc0ad1f7cafe38b83171e3dc1b -address /run/k3s/containerd/containerd.sock root 7130 1 0 22:32 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4d838297d35a31003106ac5989c3547433985bb2964b47baad12cee6e375645e -address /run/k3s/containerd/containerd.sock root 7639 1 0 22:32 ? 
00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 341cb9bcd8486aa2f1acb8e1ae51baebd630ac6ed266643266c34d677f61c7d0 -address /run/k3s/containerd/containerd.sock root 10308 1 0 23:17 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id c534fbee8e0d06fd9b29bf8fc70a138975c6b18db25f1faf2615677dfdb4199e -address /run/k3s/containerd/containerd.sock root 11370 1 0 23:18 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4ff4b8776dac7a35b83616d341dbe4d5a689ac7fb9b8eee8db5978e3968380ea -address /run/k3s/containerd/containerd.sock root 13736 13723 2 23:21 ? 00:00:10 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd root 16022 1 0 23:29 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9027256349086e458119478e5e00384b1b76fbf5e6dbee23699f596a88d9f2bc -address /run/k3s/containerd/containerd.sock root 16159 1 0 23:29 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 929bf369fc5881654f4c1925624151ddb7cea51073267b8d213d966ba45406f3 -address /run/k3s/containerd/containerd.sock
-```
-
-### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the --kubelet-https parameter.
-
-### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the
-apiserver and kubelets. Then, edit API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
-kubelet client certificate and key parameters as below.
---kubelet-client-certificate=
---kubelet-client-key=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between
-the apiserver and kubelets. Then, edit the API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
---kubelet-certificate-authority=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-certificate-authority' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
-One such example could be as below.
---authorization-mode=RBAC
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes Node.
---authorization-mode=Node,RBAC
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'Node'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
-for example `--authorization-mode=Node,RBAC`.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'RBAC'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and set the desired limits in a configuration file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-and set the below parameters.
---enable-admission-plugins=...,EventRateLimit,...
---admission-control-config-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'EventRateLimit'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
-value that does not include AlwaysAdmit.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-AlwaysPullImages.
---enable-admission-plugins=...,AlwaysPullImages,...
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'AlwaysPullImages'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-SecurityContextDeny, unless PodSecurityPolicy is already in place.
---enable-admission-plugins=...,SecurityContextDeny,...
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create ServiceAccount objects as per your environment.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
-value that does not include ServiceAccount.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'ServiceAccount'
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --disable-admission-plugins parameter to
-ensure it does not include NamespaceLifecycle.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.16 Ensure that the admission control plugin NodeRestriction is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to a
-value that includes NodeRestriction.
---enable-admission-plugins=...,NodeRestriction,...
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'NodeRestriction'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.17 Ensure that the --secure-port argument is not set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and either remove the --secure-port parameter or
-set it to a different (non-zero) desired port.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'secure-port'
-```
-
-**Expected Result**:
-
-```console
-'--secure-port' is greater than 0 OR '--secure-port' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.18 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'profiling'
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.19 Ensure that the --audit-log-path argument is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-path parameter to a suitable path and
-file where you would like audit logs to be written, for example,
---audit-log-path=/var/log/apiserver/audit.log
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-path'
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-path' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.20 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxage parameter to 30
-or as an appropriate number of days, for example,
---audit-log-maxage=30
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxage'
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxage' is greater or equal to 30
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.21 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
-value. For example,
---audit-log-maxbackup=10
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxbackup'
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxbackup' is greater or equal to 10
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.22 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
-For example, to set it as 100 MB, --audit-log-maxsize=100
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxsize'
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxsize' is greater or equal to 100
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.23 Ensure that the --request-timeout argument is set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-and set the below parameter as appropriate and if needed.
-For example, --request-timeout=300s
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'request-timeout'
-```
-
-**Expected Result**:
-
-```console
-'--request-timeout' is not present OR '--request-timeout' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.24 Ensure that the --service-account-lookup argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---service-account-lookup=true
-Alternatively, you can delete the --service-account-lookup parameter from this file so
-that the default takes effect.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'service-account-lookup'
-```
-
-**Expected Result**:
-
-```console
-'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.25 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --service-account-key-file parameter
-to the public key file for service accounts. For example,
---service-account-key-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'service-account-key-file'
-```
-
-**Expected Result**:
-
-```console
-'--service-account-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.26 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate and key file parameters.
---etcd-certfile=
---etcd-keyfile=
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 1.2.29
-```
-
-**Expected Result**:
-
-```console
-'--etcd-certfile' is present AND '--etcd-keyfile' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.27 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the TLS certificate and private key file parameters.
---tls-cert-file=
---tls-private-key-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep -A1 'Running kube-apiserver' | tail -n2
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
-```
-
-### 1.2.28 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the client certificate authority file.
---client-ca-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'client-ca-file'
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.29 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate authority file parameter.
---etcd-cafile=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-cafile'
-```
-
-**Expected Result**:
-
-```console
-'--etcd-cafile' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.30 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --encryption-provider-config parameter to the path of that file.
-For example, --encryption-provider-config=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'encryption-provider-config'
-```
-
-**Expected Result**:
-
-```console
-'--encryption-provider-config' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.31 Ensure that encryption providers are appropriately configured (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-In this file, choose aescbc, kms or secretbox as the encryption provider.
-
-**Audit:**
-
-```bash
-grep aescbc /path/to/encryption-config.json
-```
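-
-For reference, a minimal `EncryptionConfiguration` using the `aescbc` provider might look like the following. This is the upstream Kubernetes YAML form; K3s generates its own JSON equivalent at `/var/lib/rancher/k3s/server/cred/encryption-config.json`. The key name and base64 secret below are placeholders, not real values.
-
-```yaml
-apiVersion: apiserver.config.k8s.io/v1
-kind: EncryptionConfiguration
-resources:
-  - resources:
-      - secrets
-    providers:
-      # aescbc encrypts Secrets written after this config takes effect
-      - aescbc:
-          keys:
-            - name: key1
-              secret: <base64-encoded 32-byte key>
-      # identity allows reading Secrets stored before encryption was enabled
-      - identity: {}
-```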
-
-### 1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
-TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
-TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
-TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
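-
-Note that K3s does not use a static pod manifest for the API server; its flags are usually supplied through the K3s configuration file instead. A sketch of setting the cipher suites in `/etc/rancher/k3s/config.yaml` (the suite list shown is an illustrative subset, not a recommendation):
-
-```yaml
-kube-apiserver-arg:
-  - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
-```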
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'tls-cipher-suites'
-```
-
-**Expected Result**:
-
-```console
-'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-## 1.3 Controller Manager
-### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
-for example, --terminated-pod-gc-threshold=10
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'terminated-pod-gc-threshold'
-```
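-
-On K3s, this threshold can likewise be set through the configuration file rather than a pod manifest. A sketch for `/etc/rancher/k3s/config.yaml`, using 10 as the example value from the remediation text rather than a tuned recommendation:
-
-```yaml
-kube-controller-manager-arg:
-  - "terminated-pod-gc-threshold=10"
-```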
-
-### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'profiling'
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
-```
-
-### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node to set the below parameter.
---use-service-account-credentials=true
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'use-service-account-credentials'
-```
-
-**Expected Result**:
-
-```console
-'--use-service-account-credentials' is not equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
-```
-
-### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --service-account-private-key-file parameter
-to the private key file for service accounts.
---service-account-private-key-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'service-account-private-key-file'
-```
-
-**Expected Result**:
-
-```console
-'--service-account-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
-```
-
-### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
---root-ca-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'root-ca-file'
-```
-
-**Expected Result**:
-
-```console
-'--root-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
-```
-
-### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
---feature-gates=RotateKubeletServerCertificate=true
-
-### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'bind-address'
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
-```
-
-## 1.4 Scheduler
-### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
-```
-
-### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'bind-address'
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
-```
-
-## 2 Etcd Node Configuration
-### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the etcd service documentation and configure TLS encryption.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
-on the master node and set the below parameters.
---cert-file=
---key-file=
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.1
-```
-
-**Expected Result**:
-
-```console
-'cert-file' is present AND 'key-file' is present
-```
-
-**Returned Value**:
-
-```console
-cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
-```
-
-### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
-node and set the below parameter.
---client-cert-auth="true"
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.2
-```
-
-**Expected Result**:
-
-```console
-'--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-client-cert-auth: true
-```
-
-### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
-node and either remove the --auto-tls parameter or set it to false.
---auto-tls=false
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.3
-```
-
-**Expected Result**:
-
-```console
-'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-error: process ID list syntax error Usage: ps [options] Try 'ps --help ' or 'ps --help ' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory
-```
-
-### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the etcd service documentation and configure peer TLS encryption as appropriate
-for your etcd cluster.
-Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the
-master node and set the below parameters.
---peer-cert-file=
---peer-key-file=
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.4
-```
-
-**Expected Result**:
-
-```console
-'cert-file' is present AND 'key-file' is present
-```
-
-**Returned Value**:
-
-```console
-cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
-```
-
-### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
-node and set the below parameter.
---peer-client-cert-auth=true
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.5
-```
-
-**Expected Result**:
-
-```console
-'--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-client-cert-auth: true
-```
-
-### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
-node and either remove the --peer-auto-tls parameter or set it to false.
---peer-auto-tls=false
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.6
-```
-
-**Expected Result**:
-
-```console
-'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-error: process ID list syntax error Usage: ps [options] Try 'ps --help ' or 'ps --help ' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory
-```
-
-### 2.7 Ensure that a unique Certificate Authority is used for etcd (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-[Manual test]
-Follow the etcd documentation and create a dedicated certificate authority setup for the
-etcd service.
-Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the
-master node and set the below parameter.
---trusted-ca-file=
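-
-For this manual check, one way to confirm a dedicated CA is to verify that the etcd trusted CA file differs from the cluster's client CA. A minimal sketch with stand-in files (on a real node, compare `/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt` against `/var/lib/rancher/k3s/server/tls/client-ca.crt`, ideally via `openssl x509 -noout -fingerprint` rather than a byte comparison):
-
-```bash
-# Stand-in CA files; real checks should compare certificate fingerprints,
-# since two PEM files can differ in whitespace yet hold the same cert.
-printf 'etcd dedicated CA\n' > /tmp/etcd-server-ca.crt
-printf 'cluster client CA\n' > /tmp/client-ca.crt
-if ! cmp -s /tmp/etcd-server-ca.crt /tmp/client-ca.crt; then
-  echo "etcd CA is distinct from the cluster CA: pass"
-else
-  echo "etcd CA matches the cluster CA: fail"
-fi
-```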
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.7
-```
-
-**Expected Result**:
-
-```console
-'trusted-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt
-```
-
-## 3.1 Authentication and Authorization
-### 3.1.1 Client certificate authentication should not be used for users (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
-implemented in place of client certificates.
-
-## 3.2 Logging
-### 3.2.1 Ensure that a minimal audit policy is created (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Create an audit policy file for your cluster.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file'
-```
-
-### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the audit policy provided for the cluster and ensure that it covers
-at least the following areas:
-- Access to Secrets managed by the cluster. Care should be taken to only
- log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
- order to avoid risk of logging sensitive data.
-- Modification of Pod and Deployment objects.
-- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
- For most requests, minimally logging at the Metadata level is recommended
- (the most basic level of logging).
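-
-The areas above map directly onto audit policy rules. A minimal sketch of such a policy (written to `/tmp` here; k3s loads its policy from the path passed via `--audit-policy-file`, such as `/var/lib/rancher/k3s/server/audit.yaml` in the returned values above — the rule set below is illustrative, not a vetted production policy):
-
-```bash
-# Write a minimal audit policy covering the areas listed above.
-cat > /tmp/audit-policy.yaml <<'EOF'
-apiVersion: audit.k8s.io/v1
-kind: Policy
-rules:
-  # Secrets/ConfigMaps/TokenReviews: Metadata only, to avoid logging payloads.
-  - level: Metadata
-    resources:
-      - group: ""
-        resources: ["secrets", "configmaps"]
-      - group: "authentication.k8s.io"
-        resources: ["tokenreviews"]
-  # Exec/port-forward/proxy subresources.
-  - level: Metadata
-    resources:
-      - group: ""
-        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
-  # Everything else at the most basic level.
-  - level: Metadata
-EOF
-grep -c 'level: Metadata' /tmp/audit-policy.yaml   # → 3
-```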
-
-## 4.1 Worker Node Configuration Files
-### 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chmod 644 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-
-### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-
-### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
-
-**Audit:**
-
-```bash
-stat -c %a /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'permissions' is present OR '/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig' is not present
-```
-
-**Returned Value**:
-
-```console
-644
-```
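-
-"644 or more restrictive" means the mode grants no permission bit beyond `rw-r--r--`. A small sketch of that comparison done numerically rather than by eye (the path is a stand-in created in `/tmp`; on a node you would point `stat` at the real kubeconfig):
-
-```bash
-# Check that a file's mode is 0644 or tighter: it must set no bit
-# outside the 0644 mask (no group/other write, no execute bits).
-f=/tmp/kubeproxy.kubeconfig   # stand-in for the real file
-touch "$f" && chmod 600 "$f"
-mode=$(stat -c %a "$f")
-if [ $(( 0$mode & ~0644 & 0777 )) -eq 0 ]; then
-  echo "$mode: 644 or more restrictive"
-else
-  echo "$mode: too permissive"
-fi
-```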
-
-### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chown root:root /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present OR '/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig' is not present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /var/lib/rancher/k3s/server/cred/admin.kubeconfig
-
-**Audit:**
-
-```bash
-stat -c %a /var/lib/rancher/k3s/agent/kubelet.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'644' is equal to '644'
-```
-
-**Returned Value**:
-
-```console
-644
-```
-
-### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /var/lib/rancher/k3s/server/cred/admin.kubeconfig
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/k3s/agent/kubelet.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the file permissions of the
---client-ca-file.
-chmod 644
-
-**Audit:**
-
-```bash
-stat -c %a /var/lib/rancher/k3s/server/tls/server-ca.crt
-```
-
-**Expected Result**:
-
-```console
-'644' is equal to '644' OR '640' is present OR '600' is present OR '444' is present OR '440' is present OR '400' is present OR '000' is present
-```
-
-**Returned Value**:
-
-```console
-644
-```
-
-### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the ownership of the --client-ca-file.
-chown root:root
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/k3s/server/tls/client-ca.crt
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chmod 644 /var/lib/kubelet/config.yaml
-
-### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chown root:root /var/lib/kubelet/config.yaml
-
-## 4.2 Kubelet
-### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
-`false`.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
-`--anonymous-auth=false`
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" | grep -v grep; else echo "--anonymous-auth=false"; fi'
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
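-
-The returned value is one long flag string, so it can be easier to extract the single flag under test than to scan the whole line. A sketch (the `line` variable below is a shortened stand-in for the log line above):
-
-```bash
-# Pull one flag's value out of a "Running kube-apiserver ..." style line.
-line='Running kube-apiserver --allow-privileged=true --anonymous-auth=false --authorization-mode=Node,RBAC'
-value=$(printf '%s\n' "$line" | grep -o -- '--anonymous-auth=[^ ]*' | cut -d= -f2)
-echo "$value"   # → false
-```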
-
-### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
-using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---authorization-mode=Webhook
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode" | grep -v grep; else echo "--authorization-mode=Webhook"; fi'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
-the location of the client CA file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---client-ca-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "client-ca-file" | grep -v grep; else echo "--client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt"; fi'
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 4.2.4 Ensure that the --read-only-port argument is set to 0 (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---read-only-port=0
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'read-only-port'
-```
-
-**Expected Result**:
-
-```console
-'--read-only-port' is equal to '0' OR '--read-only-port' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:44 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:44Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-31-124 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c42d922-ed1e-4e15-8414-d399d179d897 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
-```
-
-### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
-value other than 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---streaming-connection-idle-timeout=5m
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'streaming-connection-idle-timeout'
-```
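
As a hedged illustration of what this audit checks, the snippet below applies the same grep logic to a sample, hypothetical kubelet log line; on a real node you would pipe in the `journalctl` output shown above instead. The control passes when the flag is absent or set to a value other than `0`.

```bash
# Sample "Running kubelet" log line with hypothetical flag values; substitute
# the real journalctl output on an actual node.
line='msg="Running kubelet --address=0.0.0.0 --streaming-connection-idle-timeout=5m --read-only-port=0"'

# Extract the flag value, if present.
timeout=$(echo "$line" | grep -o 'streaming-connection-idle-timeout=[^ "]*' | cut -d= -f2)

# Pass when the flag is absent or non-zero.
if [ -z "$timeout" ] || [ "$timeout" != "0" ]; then
  echo "pass"
else
  echo "fail"
fi
```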
-
-### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---protect-kernel-defaults=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'protect-kernel-defaults'
-```
-
-**Expected Result**:
-
-```console
-'--protect-kernel-defaults' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:44 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:44Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-31-124 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c42d922-ed1e-4e15-8414-d399d179d897 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
-```
-
-### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove the --make-iptables-util-chains argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'make-iptables-util-chains'
-```
-
-**Expected Result**:
-
-```console
-'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:44 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:44Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-31-124 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c42d922-ed1e-4e15-8414-d399d179d897 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
-```
-
-### 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and remove the --hostname-override argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-### 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC containerd
-```
-
-### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
-of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
-to the location of the corresponding private key file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
---tls-cert-file=<path/to/tls-certificate-file>
---tls-private-key-file=<path/to/tls-key-file>
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:44 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:44Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-31-124 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c42d922-ed1e-4e15-8414-d399d179d897 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
-```
-
-### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
-remove it altogether to use the default value.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
-variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC containerd
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'{.rotateCertificates}' is present OR '{.rotateCertificates}' is not present
-```
-
-### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
---feature-gates=RotateKubeletServerCertificate=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-or to a subset of these values.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --tls-cipher-suites parameter as follows, or to a subset of these values.
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC containerd
-```
-
-## 5.1 RBAC and Service Accounts
-### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
-if they need this role or if they could use a role with fewer privileges.
-Where possible, first bind users to a lower privileged role and then remove the
-clusterrolebinding to the cluster-admin role:
-kubectl delete clusterrolebinding [name]
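
To make the identification step concrete, the sketch below filters binding names whose role is `cluster-admin`. The sample data is hypothetical; on a live cluster you would pipe in the commented `kubectl` command's output instead.

```bash
# On a real cluster (assumes kubectl access), the input would come from:
#   kubectl get clusterrolebindings -o custom-columns=NAME:.metadata.name,ROLE:.roleRef.name
# Sample output is used here for illustration.
bindings=$(printf 'NAME            ROLE\ncluster-admin   cluster-admin\nci-deployer     cluster-admin\nviewers         view\n' |
  awk 'NR > 1 && $2 == "cluster-admin" { print $1 }')

# Each name printed is a clusterrolebinding to review and, where possible, replace.
echo "$bindings"
```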
-
-### 5.1.2 Minimize access to secrets (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove get, list and watch access to Secret objects in the cluster.
-
-### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible replace any use of wildcards in clusterroles and roles with specific
-objects or actions.
-
-### 5.1.4 Minimize access to create pods (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove create access to pod objects in the cluster.
-
-### 5.1.5 Ensure that default service accounts are not actively used. (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Create explicit service accounts wherever a Kubernetes workload requires specific access
-to the Kubernetes API server.
-Modify the configuration of each default service account to include this value
-automountServiceAccountToken: false
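
A minimal sketch of that modification, assuming a hypothetical file name; the manifest would be applied per namespace with `kubectl apply -n <namespace> -f default-sa.yaml`:

```bash
# Hypothetical manifest disabling token automount on a default ServiceAccount.
cat > /tmp/default-sa.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
automountServiceAccountToken: false
EOF

# Confirm the key is present in the generated manifest.
grep 'automountServiceAccountToken' /tmp/default-sa.yaml
```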
-
-### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Modify the definition of pods and service accounts which do not need to mount service
-account tokens to disable it.
-
-### 5.1.7 Avoid use of system:masters group (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Remove the system:masters group from all users in the cluster.
-
-### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove the impersonate, bind and escalate rights from subjects.
-
-## 5.2 Pod Security Standards
-### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that either Pod Security Admission or an external policy control system is in place
-for every namespace which contains user workloads.
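
One way to satisfy this with Pod Security Admission is to label each user-workload namespace with an enforce mode. The sketch below is a hypothetical example (the namespace name `user-workloads` and the `restricted` level are assumptions, not requirements):

```bash
# Hypothetical namespace manifest enabling Pod Security Admission enforcement.
cat > /tmp/ns-psa.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: user-workloads
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
EOF

# Confirm the enforce label is present in the generated manifest.
grep 'pod-security.kubernetes.io/enforce' /tmp/ns-psa.yaml
```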
-
-### 5.2.2 Minimize the admission of privileged containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of privileged containers.
-
-### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostPID` containers.
-
-### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostIPC` containers.
-
-### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostNetwork` containers.
-
-### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
-
-### 5.2.7 Minimize the admission of root containers (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
-or `MustRunAs` with a UID range that does not include 0 is set.
-
-### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with the `NET_RAW` capability.
-
-### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that `allowedCapabilities` is not present in policies for the cluster unless
-it is set to an empty array.
-
-### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the use of capabilities in applications running on your cluster. Where a namespace
-contains applications which do not require any Linux capabilities to operate, consider adding
-a PSP which forbids the admission of containers which do not drop all capabilities.
-
-### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
-
-### 5.2.12 Minimize the admission of HostPath volumes (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `hostPath` volumes.
-
-### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers which use `hostPort` sections.
-
-## 5.3 Network Policies and CNI
-### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If the CNI plugin in use does not support network policies, consideration should be given to
-making use of a different plugin, or finding an alternate mechanism for restricting traffic
-in the Kubernetes cluster.
-
-### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create NetworkPolicy objects as you need them.
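
A common starting point is a default-deny policy per namespace, which workload-specific policies then punch holes in. The sketch below is a hypothetical example (the namespace name `user-workloads` is an assumption):

```bash
# Hypothetical default-deny policy: selects every pod in the namespace and
# allows no ingress or egress traffic until further policies permit it.
cat > /tmp/default-deny.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: user-workloads
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
EOF

# Confirm both traffic directions are covered in the generated manifest.
grep -A 2 'policyTypes' /tmp/default-deny.yaml
```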
-
-## 5.4 Secrets Management
-### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If possible, rewrite application code to read Secrets from mounted secret files, rather than
-from environment variables.
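
A hedged sketch of the file-based alternative, with hypothetical names throughout (`app-with-secret-file`, `app-secret`, the mount path): the Secret is mounted as a read-only volume instead of being injected via `env`/`envFrom`.

```bash
# Hypothetical pod fragment mounting a Secret as read-only files.
cat > /tmp/secret-volume-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-secret-file
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: app-secret
          mountPath: /etc/app-secret
          readOnly: true
  volumes:
    - name: app-secret
      secret:
        secretName: app-secret
EOF

# Confirm the Secret is delivered as a volume in the generated manifest.
grep 'secretName' /tmp/secret-volume-pod.yaml
```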
-
-### 5.4.2 Consider external secret storage (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Refer to the Secrets management options offered by your cloud provider or a third-party
-secrets management solution.
-
-## 5.5 Extensible Admission Control
-### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and setup image provenance.
-
-## 5.7 General Policies
-### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create namespaces for objects in your deployment as you need
-them.
-
-### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
-An example is as below:
-securityContext:
-  seccompProfile:
-    type: RuntimeDefault
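
In context, a complete (hypothetical) pod spec with the runtime default seccomp profile applied at pod level might look like the following; the pod and image names are illustrative only:

```bash
# Hypothetical pod manifest applying the runtime default seccomp profile.
cat > /tmp/seccomp-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: nginx:1.25
EOF

# Confirm the profile type in the generated manifest.
grep -A 2 'seccompProfile' /tmp/seccomp-pod.yaml
```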
-
-### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
-suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
-Containers.
-
-### 5.7.4 The default namespace should not be used (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
-resources and that all new resources are created in a specific namespace.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.24-k8s-v1.24.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.24-k8s-v1.24.md
new file mode 100644
index 00000000000..464073a825f
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.24-k8s-v1.24.md
@@ -0,0 +1,3208 @@
+---
+title: K3s 自我评估指南 - CIS Benchmark v1.24 - K8s v1.24
+---
+
+
+
+
+
+本文档是 [K3s 加固指南](k3s-hardening-guide.md)的配套文档,该指南提供了关于如何加固正在生产环境中运行并由 Rancher 管理的 K3s 集群的指导方针。本 benchmark 指南可帮助你根据 CIS Kubernetes Benchmark 中的每个 control 来评估加固集群的安全性。
+
+本指南对应以下版本的 Rancher、CIS Benchmarks 和 Kubernetes:
+
+| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 |
+|-----------------|-----------------------|--------------------|
+| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 |
+
+本文档适用于 Rancher 运维人员、安全团队、审计员和决策者。
+
+有关每个 control 的更多信息,包括详细描述和未通过测试的补救措施,请参考 CIS Kubernetes Benchmark v1.24 的相应部分。你可以在[互联网安全中心 (CIS)](https://www.cisecurity.org/benchmark/kubernetes/)创建免费账户后下载 benchmark。
+
+## 测试方法
+
+CIS Kubernetes Benchmark 中的每个 control 都针对按照配套加固指南配置的 K3s 集群进行了评估。
+
+当某个 control 的审计方式与原始 CIS benchmark 不同时,本文会提供针对 K3s 的特定审计命令以供测试使用。
+
+以下是每个 control 可能的结果:
+
+- **Pass(通过)** - K3s 集群通过了 benchmark 中概述的审计。
+- **Not Applicable(不适用)** - 由于 K3s 的设计方式,该 control 不适用于 K3s。在补救措施部分解释了原因。
+- **Warn(警告)** - 在 CIS benchmark 中,该 control 是手动的,它取决于集群的使用情况或其他必须由集群操作员确定的因素。这些 control 措施已经过评估,以确保 K3s 不会阻止其实施,但尚未对集群进行进一步的配置或审计。
+
+本指南假设 K3s 作为 systemd 服务单元运行。你的安装可能会有所不同,请调整“审计”命令以适合你的场景。
+
+:::note
+
+本指南仅涵盖 `automated`(之前称为 `scored`)测试。
+
+:::
+
+### Controls
+
+
+## 1.1 Control Plane Node Configuration Files
+### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the
+control plane node.
+For example, chmod 644 /etc/kubernetes/manifests/kube-apiserver.yaml
+
+### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml
+
+### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml
+
+### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
+
+### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml
+
+### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml
+
+### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 /etc/kubernetes/manifests/etcd.yaml
+
+### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root /etc/kubernetes/manifests/etcd.yaml
+
+### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 <path/to/cni/files>
+
+### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root <path/to/cni/files>
+
+### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above). For example,
+chmod 700 /var/lib/etcd
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 1.1.11
+```
+
+**Expected Result**:
+
+```console
+'700' is equal to '700'
+```
+
+**Returned Value**:
+
+```console
+700
+```
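
The permission check itself can be illustrated locally with a scratch directory; on a real K3s etcd node the path checked by the audit script is `/var/lib/rancher/k3s/server/db/etcd`.

```bash
# Create a scratch directory, set the expected mode, and read it back with
# the same stat invocation the audit script uses.
demo_dir=$(mktemp -d)
chmod 700 "$demo_dir"
mode=$(stat -c %a "$demo_dir")
echo "$mode"
rm -rf "$demo_dir"
```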
+
+### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above).
+For example, chown etcd:etcd /var/lib/etcd
+
+### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /var/lib/rancher/k3s/server/cred/admin.kubeconfig
+
+### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/admin.conf
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 scheduler
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root scheduler
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 controllermanager
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/controller.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/controller.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root controllermanager
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/k3s/server/tls
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown -R root:root /etc/kubernetes/pki/
+
+**Audit:**
+
+```bash
+find /var/lib/rancher/k3s/server/tls | xargs stat -c %U:%G
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root
+```
+
+### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod -R 600 /etc/kubernetes/pki/*.crt
+
+**Audit:**
+
+```bash
+stat -c %n %a /var/lib/rancher/k3s/server/tls/*.crt
+```
+
+### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod -R 600 /etc/kubernetes/pki/*.key
+
+**Audit:**
+
+```bash
+stat -c %n %a /var/lib/rancher/k3s/server/tls/*.key
+```
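+
+Checks 1.1.20 and 1.1.21 only warn, so no automated fix is applied. A helper along the following lines can tighten permissions under the K3s TLS directory used by the audit commands above; the `tighten_tls_perms` name is an assumption, and the sketch should be reviewed before running on a production node.
+
+```bash
+# Hypothetical helper: set mode 600 on every certificate and key file under a directory.
+tighten_tls_perms() {
+  find "$1" -type f \( -name '*.crt' -o -name '*.key' \) -exec chmod 600 {} +
+}
+
+# Example usage (the K3s default TLS directory from the audit commands above):
+#   tighten_tls_perms /var/lib/rancher/k3s/server/tls
+```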
+
+## 1.2 API Server
+### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--anonymous-auth=false
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth'
+```
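+
+Note that K3s does not run the API server from a static `/etc/kubernetes/manifests/kube-apiserver.yaml` manifest; API server flags are passed through the K3s configuration instead. A minimal sketch of the equivalent setting, assuming the default config file location:
+
+```yaml
+# /etc/rancher/k3s/config.yaml (sketch; restart the k3s service after editing)
+kube-apiserver-arg:
+  - "anonymous-auth=false"
+```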
+
+### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and configure alternate mechanisms for authentication. Then,
+edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and remove the --token-auth-file= parameter.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--token-auth-file' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and remove the `DenyServiceExternalIPs`
+from enabled admission plugins.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep containerd | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' is present OR '--enable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 410 1 0 Sep11 ? 00:01:50 /usr/bin/containerd root 539 1 0 Sep11 ? 00:00:09 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock root 45213 45195 3 Sep11 ? 00:45:14 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd root 47188 1 0 Sep11 ? 00:01:00 /var/lib/rancher/k3s/data/484bf694b486d93cc93bcf90f74d5c77628550d4456f21760fd720b88c93881e/bin/containerd-shim-runc-v2 -namespace k8s.io -id 02b8fdd94b7628d575ace92f337dcc93202151d07f33858341cfcb0178fea586 -address /run/k3s/containerd/containerd.sock root 47235 1 0 Sep11 ? 00:00:33 /var/lib/rancher/k3s/data/484bf694b486d93cc93bcf90f74d5c77628550d4456f21760fd720b88c93881e/bin/containerd-shim-runc-v2 -namespace k8s.io -id eb0f03c4bc1125ae34e5de77c7eaeb9c2c1e46bec0bb106b99281df7353f9ded -address /run/k3s/containerd/containerd.sock root 47948 1 0 Sep11 ? 00:00:31 /var/lib/rancher/k3s/data/484bf694b486d93cc93bcf90f74d5c77628550d4456f21760fd720b88c93881e/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5eece7110bd3876414f890ed1b7158eb91f60d1f14450a9f8c77a0fc62b73c4a -address /run/k3s/containerd/containerd.sock root 48047 1 0 Sep11 ? 00:00:32 /var/lib/rancher/k3s/data/484bf694b486d93cc93bcf90f74d5c77628550d4456f21760fd720b88c93881e/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4ef831f4bb8830a0a7b46a1e8be58197be00846ea0caa094014ea7ce3adf008e -address /run/k3s/containerd/containerd.sock root 48220 1 0 Sep11 ? 00:00:33 /var/lib/rancher/k3s/data/484bf694b486d93cc93bcf90f74d5c77628550d4456f21760fd720b88c93881e/bin/containerd-shim-runc-v2 -namespace k8s.io -id f8d5d3791ad2b1a8534476aa2fbdf8da1e4f0b9fdcdd26b1ae161142458a5336 -address /run/k3s/containerd/containerd.sock root 48878 1 0 Sep11 ? 00:00:32 /var/lib/rancher/k3s/data/484bf694b486d93cc93bcf90f74d5c77628550d4456f21760fd720b88c93881e/bin/containerd-shim-runc-v2 -namespace k8s.io -id 41b93a6581e272b0852257b3d7c44ab4cadcdf465a3579a8b29b493ea75feaf7 -address /run/k3s/containerd/containerd.sock root 49870 1 0 Sep11 ? 00:00:30 /var/lib/rancher/k3s/data/484bf694b486d93cc93bcf90f74d5c77628550d4456f21760fd720b88c93881e/bin/containerd-shim-runc-v2 -namespace k8s.io -id be329cdccda4049f048c759a2b1915cd72526473ff7768e895bcfc153a66c125 -address /run/k3s/containerd/containerd.sock root 50271 1 0 Sep11 ? 00:00:32 /var/lib/rancher/k3s/data/484bf694b486d93cc93bcf90f74d5c77628550d4456f21760fd720b88c93881e/bin/containerd-shim-runc-v2 -namespace k8s.io -id cceb5d26eba7d7d3c474a2dc842b72495689f4e40055505ea2598e4ac6849d40 -address /run/k3s/containerd/containerd.sock root 50571 1 0 Sep11 ? 00:00:32 /var/lib/rancher/k3s/data/484bf694b486d93cc93bcf90f74d5c77628550d4456f21760fd720b88c93881e/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3a26e691e0d0186165d523bef22b0af71f175c39df82845b32ce7b4e3d652d4a -address /run/k3s/containerd/containerd.sock root 96693 1 0 16:13 ? 00:00:00 /var/lib/rancher/k3s/data/484bf694b486d93cc93bcf90f74d5c77628550d4456f21760fd720b88c93881e/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9ed77e6e58410f1295f730ba4a8f8daf7aff8f4e896e24219a8fa710729c2287 -address /run/k3s/containerd/containerd.sock root 97726 1 0 16:16 ? 00:00:00 /var/lib/rancher/k3s/data/484bf694b486d93cc93bcf90f74d5c77628550d4456f21760fd720b88c93881e/bin/containerd-shim-runc-v2 -namespace k8s.io -id 1be4fe50e6bb29d4549f8b03102b5dc15b7669fe56b32beae3f701cf6d2e58cc -address /run/k3s/containerd/containerd.sock root 97923 1 1 16:16 ? 00:00:00 /var/lib/rancher/k3s/data/484bf694b486d93cc93bcf90f74d5c77628550d4456f21760fd720b88c93881e/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7c65c47376f36a8980a5bebad7ffdc1abe1853821cf6c4a1d45f1b6be3452062 -address /run/k3s/containerd/containerd.sock
+```
+
+### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and remove the --kubelet-https parameter.
+
+### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the
+apiserver and kubelets. Then, edit API server pod specification file
+/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
+kubelet client certificate and key parameters as below.
+--kubelet-client-certificate=
+--kubelet-client-key=
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'
+```
+
+**Expected Result**:
+
+```console
+'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between
+the apiserver and kubelets. Then, edit the API server pod specification file
+/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
+--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
+--kubelet-certificate-authority=
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'
+```
+
+**Expected Result**:
+
+```console
+'--kubelet-certificate-authority' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
+One such example could be as below.
+--authorization-mode=RBAC
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' does not have 'AlwaysAllow'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to a value that includes Node.
+--authorization-mode=Node,RBAC
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' has 'Node'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
+for example `--authorization-mode=Node,RBAC`.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' has 'RBAC'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and set the desired limits in a configuration file.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+and set the below parameters.
+--enable-admission-plugins=...,EventRateLimit,...
+--admission-control-config-file=
+
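+A minimal admission configuration for EventRateLimit might look like the following sketch; the file names are placeholders, and the referenced limits file must exist.
+
+```yaml
+# admission-control-config.yaml (sketch): enables EventRateLimit and points at its
+# limit configuration; pass this file via --admission-control-config-file.
+apiVersion: apiserver.config.k8s.io/v1
+kind: AdmissionConfiguration
+plugins:
+  - name: EventRateLimit
+    path: eventconfig.yaml
+```
+
+The referenced eventconfig.yaml then holds a `Configuration` object from the `eventratelimit.admission.k8s.io/v1alpha1` API with the desired `limits` (for example, a `Server`-type limit with `qps` and `burst` values).
+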
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'EventRateLimit'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
+value that does not include AlwaysAdmit.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
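
Checks like this one can also be scripted outside of the benchmark tooling. Below is a sketch that pulls the plugin list out of a captured log line and tests for `AlwaysAdmit`; the sample `line` is a hypothetical, abbreviated copy of the returned value above, not live `journalctl` output:

```bash
# Sketch: extract the enable-admission-plugins value from an abbreviated copy of
# the "Running kube-apiserver" log line and confirm AlwaysAdmit is not enabled.
line='Running kube-apiserver --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --profiling=false'
plugins=$(printf '%s' "$line" | grep -o 'enable-admission-plugins=[^ ]*' | cut -d= -f2)
case ",$plugins," in
  *,AlwaysAdmit,*) echo "fail: AlwaysAdmit is enabled" ;;
  *)               echo "pass: plugins are $plugins" ;;
esac
```

In a real check, replace the hard-coded `line` with the output of the audit command shown above.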
+
+### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to include
+AlwaysPullImages.
+--enable-admission-plugins=...,AlwaysPullImages,...
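
Note that the CIS remediation text assumes a static pod manifest, but K3s runs the API server embedded in the `k3s` binary, so flags are passed through `--kube-apiserver-arg` instead. A minimal sketch using the K3s config file (the file path is the K3s default; the plugin list is illustrative — preserve whatever plugins your cluster already enables):

```yaml
# /etc/rancher/k3s/config.yaml (sketch; merge with your existing settings)
# Appends AlwaysPullImages to the admission plugins already in use.
kube-apiserver-arg:
  - "enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount,AlwaysPullImages"
```

Restart the K3s service after editing the config file for the change to take effect.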
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'AlwaysPullImages'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to include
+SecurityContextDeny, unless PodSecurityPolicy is already in place.
+--enable-admission-plugins=...,SecurityContextDeny,...
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and create ServiceAccount objects as per your environment.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
+value that does not include ServiceAccount.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'ServiceAccount'
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --disable-admission-plugins parameter to
+ensure it does not include NamespaceLifecycle.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.16 Ensure that the admission control plugin NodeRestriction is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure the NodeRestriction plugin on kubelets.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to a
+value that includes NodeRestriction.
+--enable-admission-plugins=...,NodeRestriction,...
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'NodeRestriction'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.17 Ensure that the --secure-port argument is not set to 0 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and either remove the --secure-port parameter or
+set it to a different (non-zero) desired port.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'secure-port'
+```
+
+**Expected Result**:
+
+```console
+'--secure-port' is greater than 0 OR '--secure-port' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.18 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the parameter below.
+--profiling=false
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'profiling'
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.19 Ensure that the --audit-log-path argument is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-path parameter to a suitable path and
+file where you would like audit logs to be written, for example,
+--audit-log-path=/var/log/apiserver/audit.log
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-path'
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-path' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.20 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxage parameter to 30
+or as an appropriate number of days, for example,
+--audit-log-maxage=30
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxage'
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxage' is greater or equal to 30
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
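The Expected Result above is a numeric comparison against the flag value extracted from the journal line. As a plain-shell sketch of that evaluation (the sample command line below is a stand-in for the real `journalctl ... | tail -n1` output, and `flag_at_least` is a hypothetical helper, not part of the benchmark tooling):

```shell
# Sketch: check that a numeric kube-apiserver flag meets a minimum,
# the way the benchmark evaluates '--audit-log-maxage' >= 30.
flag_at_least() {
  # $1: captured command line, $2: flag name (without --), $3: minimum value
  val=$(printf '%s\n' "$1" | grep -o -- "--$2=[0-9]*" | cut -d= -f2)
  [ -n "$val" ] && [ "$val" -ge "$3" ]
}

# Sample line; in a real audit this would come from:
#   journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1
line='--audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100'
if flag_at_least "$line" audit-log-maxage 30; then echo pass; else echo fail; fi
```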
+
+### 1.2.21 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
+value. For example,
+--audit-log-maxbackup=10
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxbackup'
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxbackup' is greater or equal to 10
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.22 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
+For example, to set it as 100 MB, --audit-log-maxsize=100
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxsize'
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxsize' is greater or equal to 100
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.23 Ensure that the --request-timeout argument is set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
and set the --request-timeout parameter as appropriate, if needed.
+For example, --request-timeout=300s
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'request-timeout'
+```
+
+**Expected Result**:
+
+```console
+'--request-timeout' is not present OR '--request-timeout' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.24 Ensure that the --service-account-lookup argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--service-account-lookup=true
+Alternatively, you can delete the --service-account-lookup parameter from this file so
+that the default takes effect.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'service-account-lookup'
+```
+
+**Expected Result**:
+
+```console
+'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
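Unlike the numeric checks above, this Expected Result is a disjunction: the check passes when the flag is absent entirely (the default applies) or is explicitly `true`. A plain-shell sketch of that predicate (sample input assumed; `flag_absent_or_true` is an illustrative helper, not part of the benchmark tooling):

```shell
# Sketch: pass when a boolean kube-apiserver flag is not present OR equals true,
# mirroring the '--service-account-lookup' evaluation.
flag_absent_or_true() {
  # $1: captured command line, $2: flag name (without --)
  val=$(printf '%s\n' "$1" | grep -o -- "--$2=[a-z]*" | cut -d= -f2)
  [ -z "$val" ] || [ "$val" = "true" ]
}

line='--service-account-lookup=true --profiling=false'
if flag_absent_or_true "$line" service-account-lookup; then echo pass; else echo fail; fi
```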
+
+### 1.2.25 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --service-account-key-file parameter
+to the public key file for service accounts. For example,
`--service-account-key-file=<filename>`
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'service-account-key-file'
+```
+
+**Expected Result**:
+
+```console
+'--service-account-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.26 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate and key file parameters.
`--etcd-certfile=<path/to/client-certificate-file>`
`--etcd-keyfile=<path/to/client-key-file>`
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 1.2.29
+```
+
+**Expected Result**:
+
+```console
+'--etcd-certfile' is present AND '--etcd-keyfile' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
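Here the Expected Result is a conjunction: both etcd client certificate flags must appear on the captured kube-apiserver command line. A plain-shell sketch of that evaluation (sample line assumed; `both_etcd_flags_present` is an illustrative helper, not part of the benchmark tooling):

```shell
# Sketch: pass only when '--etcd-certfile' AND '--etcd-keyfile' are both present.
both_etcd_flags_present() {
  printf '%s\n' "$1" | grep -q -- '--etcd-certfile=' &&
  printf '%s\n' "$1" | grep -q -- '--etcd-keyfile='
}

line='--etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key'
if both_etcd_flags_present "$line"; then echo pass; else echo fail; fi
```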
+
+### 1.2.27 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the TLS certificate and private key file parameters.
`--tls-cert-file=<path/to/tls-certificate-file>`
`--tls-private-key-file=<path/to/tls-key-file>`
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep -A1 'Running kube-apiserver' | tail -n2
+```
+
+**Expected Result**:
+
+```console
+'--tls-cert-file' is present AND '--tls-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
+```
+
+### 1.2.28 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the client certificate authority file.
`--client-ca-file=<path/to/client-ca-file>`
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'client-ca-file'
+```
+
+**Expected Result**:
+
+```console
+'--client-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.29 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate authority file parameter.
+--etcd-cafile=
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-cafile'
+```
+
+**Expected Result**:
+
+```console
+'--etcd-cafile' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
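The "`'--etcd-cafile' is present`" style of check used throughout this section can be sketched as a small shell helper. This is not part of the benchmark tooling, just an illustration: it greps a captured component log line (like the journal output above) for a given flag.

```bash
# Hypothetical helper mirroring the "'--flag' is present" checks:
# report whether a flag occurs in a captured "Running <component>" log line.
flag_present() {
  if printf '%s\n' "$1" | grep -q -- "$2"; then
    echo "present"
  else
    echo "absent"
  fi
}

# Abbreviated kube-apiserver line, shortened from the journal output above.
line='Running kube-apiserver --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --secure-port=6444'
flag_present "$line" '--etcd-cafile'       # prints: present
flag_present "$line" '--profiling=false'   # prints: absent
```

In the real audits, the log line comes from `journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1` rather than a hard-coded string.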
+
+### 1.2.30 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --encryption-provider-config parameter to the path of that file.
+For example, --encryption-provider-config=
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'encryption-provider-config'
+```
+
+**Expected Result**:
+
+```console
+'--encryption-provider-config' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.31 Ensure that encryption providers are appropriately configured (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+In this file, choose aescbc, kms or secretbox as the encryption provider.
+
+**Audit:**
+
+```bash
+grep aescbc /path/to/encryption-config.json
+```
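The intent of this check can be sketched as follows. The helper below is hypothetical, and the sample JSON is illustrative rather than taken from a node, but it mirrors the shape of the file referenced by `--encryption-provider-config` in the kube-apiserver log line above: pass only when one of the accepted strong providers (aescbc, kms, or secretbox) appears.

```bash
# Hedged sketch of the 1.2.31 check: does an EncryptionConfiguration
# name one of the strong providers the benchmark accepts?
check_provider() {
  if printf '%s\n' "$1" | grep -qE '"(aescbc|kms|secretbox)"'; then
    echo "pass"
  else
    echo "warn"
  fi
}

# Illustrative fragment of an EncryptionConfiguration using aescbc.
sample='{"kind":"EncryptionConfiguration","resources":[{"resources":["secrets"],"providers":[{"aescbc":{"keys":[{"name":"key1"}]}}]}]}'
check_provider "$sample"                          # prints: pass
check_provider '{"providers":[{"identity":{}}]}'  # prints: warn
```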
+
+### 1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
+TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
+TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
+TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'tls-cipher-suites'
+```
+
+**Expected Result**:
+
+```console
+'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User 
--secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
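The "contains valid elements from" comparison above can be sketched as an allow-list check. The allow-list below is abbreviated to the six suites actually configured in the returned `--tls-cipher-suites` value; the full accepted set is the one shown in the Expected Result.

```bash
# Abbreviated allow-list: the suites present in the returned log line above.
allowed="TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"

# Hypothetical helper: pass only if every comma-separated suite in $1
# appears in the allow-list; otherwise report the first offending suite.
ciphers_ok() {
  for s in $(printf '%s' "$1" | tr ',' ' '); do
    case " $allowed " in
      *" $s "*) ;;
      *) echo "fail: $s"; return 1 ;;
    esac
  done
  echo "pass"
}

ciphers_ok "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"  # prints: pass
ciphers_ok "TLS_RSA_WITH_RC4_128_SHA"  # prints: fail: TLS_RSA_WITH_RC4_128_SHA
```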
+
+## 1.3 Controller Manager
+### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
+for example, --terminated-pod-gc-threshold=10
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'terminated-pod-gc-threshold'
+```
+
+**Expected Result**:
+
+```console
+'--terminated-pod-gc-threshold' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
+```
+
+### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'profiling'
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
+```
+
+### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node to set the below parameter.
+--use-service-account-credentials=true
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'use-service-account-credentials'
+```
+
+**Expected Result**:
+
+```console
+'--use-service-account-credentials' is not equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
+```
+
+### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --service-account-private-key-file parameter
+to the private key file for service accounts.
+--service-account-private-key-file=
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'service-account-private-key-file'
+```
+
+**Expected Result**:
+
+```console
+'--service-account-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
+```
+
+### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
+--root-ca-file=
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'root-ca-file'
+```
+
+**Expected Result**:
+
+```console
+'--root-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
+```
+
+### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
+--feature-gates=RotateKubeletServerCertificate=true
+
+### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and ensure the correct value for the --bind-address parameter.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'bind-address'
+```
+
+**Expected Result**:
+
+```console
+'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
+```
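The two-sided expected result above ("equal to '127.0.0.1' OR not present") can be sketched as a three-way case match. The helper is hypothetical, but the pass/fail rules follow the Expected Result directly.

```bash
# Hedged sketch of the --bind-address check: pass when the flag is loopback
# or absent entirely; fail when it binds anywhere else.
bind_ok() {
  case "$1" in
    *--bind-address=127.0.0.1*) echo "pass" ;;  # explicitly loopback
    *--bind-address=*)          echo "fail" ;;  # bound to another address
    *)                          echo "pass" ;;  # flag not present
  esac
}

bind_ok "--secure-port=10257 --bind-address=127.0.0.1"  # prints: pass
bind_ok "--bind-address=0.0.0.0"                        # prints: fail
bind_ok "--secure-port=10257"                           # prints: pass
```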
+
+## 1.4 Scheduler
+### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
+```
+
+### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
+on the control plane node and ensure the correct value for the --bind-address parameter.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'bind-address'
+```
+
+**Expected Result**:
+
+```console
+'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
+```
+
+## 2 Etcd Node Configuration
+### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the etcd service documentation and configure TLS encryption.
+Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
+on the master node and set the below parameters.
+--cert-file=
+--key-file=
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+  # If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.1
+```
+
+**Expected Result**:
+
+```console
+'cert-file' is present AND 'key-file' is present
+```
+
+**Returned Value**:
+
+```console
+cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
+```
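+The "2.1" case in the audit script assumes the etcd config nests `cert-file` and `key-file` under a `client-transport-security` block. A minimal sketch reproducing that check against a scratch copy of the config (the layout is inferred from the returned value above, not copied from a live node):
+
+```bash
+# Build a scratch config shaped like /var/lib/rancher/k3s/server/db/etcd/config
+# (layout assumed from the returned value above)
+cfg=$(mktemp)
+cat > "$cfg" <<'EOF'
+client-transport-security:
+  cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt
+  key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
+  client-cert-auth: true
+EOF
+# Same grep pipeline as the "2.1" case in check_for_k3s_etcd.sh:
+# take the block plus 5 lines of context, keep the two file settings
+grep -A 5 'client-transport-security' "$cfg" | grep -E 'cert-file|key-file'
+```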
+
+### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
+node and set the below parameter.
+--client-cert-auth="true"
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.2
+```
+
+**Expected Result**:
+
+```console
+'--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+client-cert-auth: true
+```
+
+### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
+node and either remove the --auto-tls parameter or set it to false.
+ --auto-tls=false
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.3
+```
+
+**Expected Result**:
+
+```console
+'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
+```
+
+**Returned Value**:
+
+```console
+error: process ID list syntax error Usage: ps [options] Try 'ps --help ' or 'ps --help ' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory
+```
+
+### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the etcd service documentation and configure peer TLS encryption as appropriate
+for your etcd cluster.
+Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the
+master node and set the below parameters.
+--peer-cert-file=
+--peer-key-file=
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.4
+```
+
+**Expected Result**:
+
+```console
+'cert-file' is present AND 'key-file' is present
+```
+
+**Returned Value**:
+
+```console
+cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
+```
+
+### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
+node and set the below parameter.
+--peer-client-cert-auth=true
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.5
+```
+
+**Expected Result**:
+
+```console
+'--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+client-cert-auth: true
+```
+
+### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
+node and either remove the --peer-auto-tls parameter or set it to false.
+--peer-auto-tls=false
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.6
+```
+
+**Expected Result**:
+
+```console
+'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
+```
+
+**Returned Value**:
+
+```console
+error: process ID list syntax error Usage: ps [options] Try 'ps --help ' or 'ps --help ' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory
+```
+
+### 2.7 Ensure that a unique Certificate Authority is used for etcd (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+[Manual test]
+Follow the etcd documentation and create a dedicated certificate authority setup for the
+etcd service.
+Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the
+master node and set the below parameter.
+--trusted-ca-file=
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.7
+```
+
+**Expected Result**:
+
+```console
+'trusted-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt
+```
+
+## 3.1 Authentication and Authorization
+### 3.1.1 Client certificate authentication should not be used for users (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
+implemented in place of client certificates.
+
+## 3.2 Logging
+### 3.2.1 Ensure that a minimal audit policy is created (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Create an audit policy file for your cluster.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file'
+```
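+A minimal policy of the kind this check asks for could look like the sketch below, staged to a scratch file for illustration; on a live node k3s reads the policy from the `--audit-policy-file` path visible in the kube-apiserver log line (`/var/lib/rancher/k3s/server/audit.yaml` in the returned values elsewhere in this report). The rule set is an illustrative assumption, not the policy Rancher ships:
+
+```bash
+# Write a minimal audit policy to a scratch path (illustrative content)
+policy=$(mktemp)
+cat > "$policy" <<'EOF'
+apiVersion: audit.k8s.io/v1
+kind: Policy
+rules:
+  # Secrets, ConfigMaps and TokenReviews at Metadata only, so request
+  # bodies containing sensitive data are never logged (see 3.2.2)
+  - level: Metadata
+    resources:
+      - group: ""
+        resources: ["secrets", "configmaps"]
+      - group: "authentication.k8s.io"
+        resources: ["tokenreviews"]
+  # Baseline: log everything else at Metadata as well
+  - level: Metadata
+EOF
+```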
+
+### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the audit policy provided for the cluster and ensure that it covers
+at least the following areas:
+- Access to Secrets managed by the cluster. Care should be taken to only
+ log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
+ order to avoid risk of logging sensitive data.
+- Modification of Pod and Deployment objects.
+- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
+For most requests, minimally logging at the Metadata level is recommended
+(the most basic level of logging).
+
+## 4.1 Worker Node Configuration Files
+### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+
+### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+
+### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
+
+**Audit:**
+
+```bash
+stat -c %a /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'permissions' is present
+```
+
+**Returned Value**:
+
+```console
+600
+```
+
+### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example, chown root:root /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /var/lib/rancher/k3s/server/cred/admin.kubeconfig
+
+**Audit:**
+
+```bash
+stat -c %a /var/lib/rancher/k3s/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'600' is equal to '600'
+```
+
+**Returned Value**:
+
+```console
+600
+```
+
+### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /var/lib/rancher/k3s/server/cred/admin.kubeconfig
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/k3s/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the following command to modify the file permissions of the
+--client-ca-file, for example:
+chmod 600 /var/lib/rancher/k3s/server/tls/server-ca.crt
+
+**Audit:**
+
+```bash
+stat -c %a /var/lib/rancher/k3s/server/tls/server-ca.crt
+```
+
+**Expected Result**:
+
+```console
+'permissions' is present
+```
+
+**Returned Value**:
+
+```console
+644
+```
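+The 644 returned above is exactly what this warning flags; the fix is a `chmod 600` on the CA file followed by re-running the audit. Sketched here against a scratch file rather than the live `/var/lib/rancher/k3s/server/tls/server-ca.crt`:
+
+```bash
+# Simulate the flagged state on a scratch file, apply the fix, re-audit
+f=$(mktemp)
+chmod 644 "$f"          # the permissions the scan reported
+chmod 600 "$f"          # remediation from 4.1.7
+stat -c %a "$f"         # same check as the Audit step; now prints 600
+```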
+
+### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the following command to modify the ownership of the --client-ca-file, for example:
+chown root:root /var/lib/rancher/k3s/server/tls/client-ca.crt
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/k3s/server/tls/client-ca.crt
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chmod 600 /var/lib/kubelet/config.yaml
+
+### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chown root:root /var/lib/kubelet/config.yaml
+
+## 4.2 Kubelet
+### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
+`false`.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+`--anonymous-auth=false`
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" | grep -v grep; else echo "--anonymous-auth=false"; fi'
+```
+
+**Expected Result**:
+
+```console
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
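+The audit works by pulling the last "Running kube-apiserver" journal line and grepping it for the flag; the extraction step can be sketched against a canned, abridged copy of that log line (taken from the returned value above) instead of the live journal:
+
+```bash
+# Canned, abridged sample of the log line the audit greps on a live node
+line='level=info msg="Running kube-apiserver --advertise-port=6443 --anonymous-auth=false --authorization-mode=Node,RBAC"'
+# Isolate just the flag and its value; -e keeps grep from reading the
+# pattern's leading -- as an option. Prints: --anonymous-auth=false
+echo "$line" | grep -o -e '--anonymous-auth=[^ "]*'
+```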
+
+### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
+using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_AUTHZ_ARGS variable.
+--authorization-mode=Webhook
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode" | grep -v grep; else echo "--authorization-mode=Webhook"; fi'
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' does not have 'AlwaysAllow'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
+the location of the client CA file.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_AUTHZ_ARGS variable.
+--client-ca-file=
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "client-ca-file" | grep -v grep; else echo "--client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt"; fi'
+```
+
+**Expected Result**:
+
+```console
+'--client-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:31:58 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:31:58Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 4.2.4 Verify that the --read-only-port argument is set to 0 (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+--read-only-port=0
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
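+
+On K3s there is typically no kubeadm-style kubelet unit file; kubelet flags are normally passed through the K3s configuration file instead. A sketch, assuming the default `/etc/rancher/k3s/config.yaml` location:
+
+```yaml
+# /etc/rancher/k3s/config.yaml (default location; adjust for your installation)
+kubelet-arg:
+  - "read-only-port=0"
+```
+
+Restart the K3s service (for example, `systemctl restart k3s`) for the change to take effect.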
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'read-only-port'
+```
+
+**Expected Result**:
+
+```console
+'--read-only-port' is equal to '0' OR '--read-only-port' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:32:02 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:32:02Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-3-32 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=cc5faddd-9c87-4a04-97c2-dcf4cbcbe10f --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
+```
+
+### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
+value other than 0.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+--streaming-connection-idle-timeout=5m
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
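+
+If you manage the kubelet through a config file as the remediation describes, the equivalent setting is a single key; a sketch:
+
+```yaml
+# kubelet config file fragment (any value other than 0; 5m shown as an example)
+streamingConnectionIdleTimeout: 5m
+```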
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'streaming-connection-idle-timeout'
+```
+
+### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+--protect-kernel-defaults=true
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'protect-kernel-defaults'
+```
+
+**Expected Result**:
+
+```console
+'--protect-kernel-defaults' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:32:02 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:32:02Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-3-32 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=cc5faddd-9c87-4a04-97c2-dcf4cbcbe10f --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
+```
+
+### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove the --make-iptables-util-chains argument from the
+KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'make-iptables-util-chains'
+```
+
+**Expected Result**:
+
+```console
+'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:32:02 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:32:02Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-3-32 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=cc5faddd-9c87-4a04-97c2-dcf4cbcbe10f --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
+```
+
+### 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and remove the --hostname-override argument from the
+KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+### 4.2.9 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC containerd
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/kubelet/config.yaml
+```
+
+**Expected Result**:
+
+```console
+'--event-qps' is present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 410 1 0 Sep11 ? 00:01:50 /usr/bin/containerd root 45213 45195 3 Sep11 ? 00:45:15 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
+```
+
+### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
+of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
+to the location of the corresponding private key file.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
+--tls-cert-file=
+--tls-private-key-file=
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1
+```
+
+**Expected Result**:
+
+```console
+'--tls-cert-file' is present AND '--tls-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:32:02 ip-172-31-3-32 k3s[45195]: time="2023-09-11T20:32:02Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-3-32 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=cc5faddd-9c87-4a04-97c2-dcf4cbcbe10f --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
+```
+
+### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
+remove it altogether to use the default value.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
+variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC containerd
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--rotate-certificates' is present OR '--rotate-certificates' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 410 1 0 Sep11 ? 00:01:50 /usr/bin/containerd root 45213 45195 3 Sep11 ? 00:45:15 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
+```
+
+### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
+--feature-gates=RotateKubeletServerCertificate=true
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
+TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+or to a subset of these values.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the --tls-cipher-suites parameter as follows, or to a subset of these values.
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC containerd
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/kubelet/config.yaml
+```
+
+**Expected Result**:
+
+```console
+'--tls-cipher-suites' is present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 410 1 0 Sep11 ? 00:01:50 /usr/bin/containerd root 45213 45195 3 Sep11 ? 00:45:15 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
+```
+
+## 5.1 RBAC and Service Accounts
+### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
+if they need this role or if they could use a role with fewer privileges.
+Where possible, first bind users to a lower privileged role and then remove the
+clusterrolebinding to the cluster-admin role :
+kubectl delete clusterrolebinding [name]
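+
+One way to enumerate the bindings to review is a `kubectl` query against a live cluster; a sketch (requires permission to read ClusterRoleBindings):
+
+```bash
+# Print the names of all ClusterRoleBindings that reference cluster-admin
+kubectl get clusterrolebindings -o jsonpath='{range .items[?(@.roleRef.name=="cluster-admin")]}{.metadata.name}{"\n"}{end}'
+```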
+
+### 5.1.2 Minimize access to secrets (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove get, list and watch access to Secret objects in the cluster.
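+
+To spot-check whether a given subject can read Secrets, `kubectl auth can-i` is useful; a sketch (the service account shown is illustrative):
+
+```bash
+# Each command returns "yes" or "no" depending on the subject's RBAC permissions
+kubectl auth can-i get secrets --as=system:serviceaccount:default:default
+kubectl auth can-i list secrets --as=system:serviceaccount:default:default
+```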
+
+### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible replace any use of wildcards in clusterroles and roles with specific
+objects or actions.
+
+### 5.1.4 Minimize access to create pods (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove create access to pod objects in the cluster.
+
+### 5.1.5 Ensure that default service accounts are not actively used. (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Create explicit service accounts wherever a Kubernetes workload requires specific access
+to the Kubernetes API server.
+Modify the configuration of each default service account to include this value
+automountServiceAccountToken: false
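+
+The remediation above can be applied either in the ServiceAccount manifest or with `kubectl patch`; a sketch, with an illustrative loop over all namespaces:
+
+```bash
+# Disable token automounting on the default ServiceAccount in every namespace
+for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
+  kubectl patch serviceaccount default -n "$ns" \
+    -p '{"automountServiceAccountToken": false}'
+done
+```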
+
+### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Modify the definition of pods and service accounts which do not need to mount service
+account tokens to disable it.
+
+### 5.1.7 Avoid use of system:masters group (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Remove the system:masters group from all users in the cluster.
+
+### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove the impersonate, bind and escalate rights from subjects.
+
+## 5.2 Pod Security Standards
+### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that either Pod Security Admission or an external policy control system is in place
+for every namespace which contains user workloads.
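+
+With Pod Security Admission, the policy is activated per namespace through labels; a sketch (the namespace name and the `restricted` level are illustrative choices):
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: my-app-namespace
+  labels:
+    pod-security.kubernetes.io/enforce: restricted
+    pod-security.kubernetes.io/warn: restricted
+```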
+
+### 5.2.2 Minimize the admission of privileged containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of privileged containers.
+
+### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostPID` containers.
+
+### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostIPC` containers.
+
+### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostNetwork` containers.
+
+### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+### 5.2.7 Minimize the admission of root containers (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs` with the range of UIDs not including 0, is set.
+
+### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a PSP which forbids the admission of containers which do not drop all capabilities.
+
+### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
+
+### 5.2.12 Minimize the admission of HostPath volumes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `hostPath` volumes.
+
+### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers which use `hostPort` sections.
+
+## 5.3 Network Policies and CNI
+### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If the CNI plugin in use does not support network policies, consideration should be given to
+making use of a different plugin, or finding an alternate mechanism for restricting traffic
+in the Kubernetes cluster.
+
+### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create NetworkPolicy objects as you need them.
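+
+A common starting point is a default-deny ingress policy per namespace, which selects every pod but allows no traffic; a sketch (the namespace name is illustrative):
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress
+  namespace: my-app-namespace
+spec:
+  podSelector: {}   # empty selector matches all pods in the namespace
+  policyTypes:
+    - Ingress       # no ingress rules listed, so all ingress is denied
+```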
+
+## 5.4 Secrets Management
+### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If possible, rewrite application code to read Secrets from mounted secret files, rather than
+from environment variables.
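+
+In the pod spec this means mounting the Secret as a volume rather than referencing it with `env`/`envFrom`; a fragment (the Secret, container, and mount names are illustrative):
+
+```yaml
+# Pod spec fragment: the Secret is exposed as files under /etc/app-credentials
+spec:
+  containers:
+    - name: app
+      image: my-app:latest
+      volumeMounts:
+        - name: app-credentials
+          mountPath: /etc/app-credentials
+          readOnly: true
+  volumes:
+    - name: app-credentials
+      secret:
+        secretName: app-credentials
+```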
+
+### 5.4.2 Consider external secret storage (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Refer to the Secrets management options offered by your cloud provider or a third-party
+secrets management solution.
+
+## 5.5 Extensible Admission Control
+### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and setup image provenance.
+
+## 5.7 General Policies
+### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create namespaces for objects in your deployment as you need
+them.
+
+### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
+An example is shown below:
+
+```yaml
+securityContext:
+  seccompProfile:
+    type: RuntimeDefault
+```
+
+### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
+suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
+Containers.
+
+### 5.7.4 The default namespace should not be used (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
+resources and that all new resources are created in a specific namespace.
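+
+To check current usage, list what lives in the default namespace; ideally only the built-in `kubernetes` Service remains:
+
+```bash
+kubectl get all -n default
+```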
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md
new file mode 100644
index 00000000000..8443a80b530
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md
@@ -0,0 +1,3215 @@
+---
+title: K3s Self-Assessment Guide - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27
+---
+
+
+
+
+
+This document is a companion to the [K3s hardening guide](k3s-hardening-guide.md), which provides guidance on hardening K3s clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
+
+This guide corresponds to the following versions of Rancher, the CIS Benchmark, and Kubernetes:
+
+| Rancher Version | CIS Benchmark Version | Kubernetes Version |
+|-----------------|-----------------------|--------------------|
+| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 |
+
+This document is intended for Rancher operators, security teams, auditors, and decision makers.
+
+For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7. You can download the benchmark, after creating a free account, from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
+
+## Testing Methodology
+
+Each control in the CIS Kubernetes Benchmark was evaluated against a K3s cluster configured according to the accompanying hardening guide.
+
+Where a control's audit differs from the original CIS benchmark, a K3s-specific audit command is provided for testing.
+
+Each control has one of the following results:
+
+- **Pass** - The K3s cluster under test passed the audit outlined in the benchmark.
+- **Not Applicable** - The control is not applicable to K3s because of how it is designed to operate. The remediation section explains why.
+- **Warn** - The control is manual in the CIS benchmark, and it depends on the cluster's use case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure K3s does not prevent their implementation, but no further configuration or auditing of the cluster has been performed.
+
+This guide assumes K3s runs as a systemd unit. Your installation may vary. Adjust the audit commands to fit your scenario.
+
+:::note
+
+This guide only covers `automated` (previously called `scored`) tests.
+
+:::
+
+### Controls
+
+
+## 1.1 Control Plane Node Configuration Files
+### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the
+control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml
+Not Applicable.
+
+### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml
+Not Applicable.
+
+### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable.
+
+### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable.
+
+### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable.
+
+### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable.
+
+### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 /etc/kubernetes/manifests/etcd.yaml
+Not Applicable.
+
+### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root /etc/kubernetes/manifests/etcd.yaml
+Not Applicable.
+
+### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600
+Not Applicable.
+
+### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root
+Not Applicable.
+
+### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above). For example,
+chmod 700 /var/lib/etcd
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 1.1.11
+```
+
+**Expected Result**:
+
+```console
+'700' is equal to '700'
+```
+
+**Returned Value**:
+
+```console
+700
+```
+
+### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above).
+For example, chown etcd:etcd /var/lib/etcd
+Not Applicable.
+
+### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /var/lib/rancher/k3s/server/cred/admin.kubeconfig
+
+### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/admin.conf
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 scheduler
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root scheduler
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 controllermanager
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/controller.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/controller.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root controllermanager
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/k3s/server/cred/controller.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown -R root:root /var/lib/rancher/k3s/server/tls
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/k3s/server/tls
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod -R 600 /etc/kubernetes/pki/*.crt
+
+**Audit:**
+
+```bash
+stat -c %n %a /var/lib/rancher/k3s/server/tls/*.crt
+```
+
+### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod -R 600 /etc/kubernetes/pki/*.key
+
+**Audit:**
+
+```bash
+stat -c %n %a /var/lib/rancher/k3s/server/tls/*.key
+```
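+
+Checks 1.1.20 and 1.1.21 above are manual, and their audits only print the current modes. As a convenience, both can be combined into one scripted audit; the following is a sketch (the `TLS_DIR` override is illustrative, not part of the benchmark) that prints any certificate or key file whose permissions are looser than 600:
+
+```bash
+#!/bin/sh
+# Sketch: print any PKI file with group/other permission bits set.
+# Defaults to the K3s TLS directory; override TLS_DIR to audit elsewhere.
+TLS_DIR="${TLS_DIR:-/var/lib/rancher/k3s/server/tls}"
+if [ -d "$TLS_DIR" ]; then
+  # -perm /077 matches files with any group or other bit set,
+  # i.e. permissions looser than 600.
+  find "$TLS_DIR" -type f \( -name '*.crt' -o -name '*.key' \) \
+    -perm /077 -exec stat -c '%n %a' {} +
+fi
+```
+
+No output means every `.crt` and `.key` file already satisfies both checks.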
+
+## 1.2 API Server
+
+### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--anonymous-auth=false
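+
+K3s does not run the API server from a static manifest such as /etc/kubernetes/manifests/kube-apiserver.yaml; extra flags are normally passed through the K3s configuration file instead. A sketch, assuming the standard `/etc/rancher/k3s/config.yaml` location (the k3s service must be restarted afterwards):
+
+```yaml
+# /etc/rancher/k3s/config.yaml (sketch): forward the flag to the embedded kube-apiserver
+kube-apiserver-arg:
+  - "anonymous-auth=false"
+```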
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth'
+```
+
+**Expected Result**:
+
+```console
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and configure alternate mechanisms for authentication. Then,
+edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and remove the --token-auth-file= parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep containerd | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--token-auth-file' is not present
+```
+
+**Returned Value**:
+
+```console
+root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 663 1 0 Sep11 ? 00:00:08 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd root 3021 1 0 Sep11 ? 00:00:29 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4790d392966915d995e666002c56ed4cce6dd86f305ce2ee390547a1fcbf6c82 -address /run/k3s/containerd/containerd.sock root 3035 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9d73e6a160ccde6c7c7d4ba0df3b9f696e3de2ebfc0f19a1dcbdf13aea496427 -address /run/k3s/containerd/containerd.sock root 3235 1 0 Sep11 ? 00:00:31 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 587f6221ee9f36c877231ada8d816a799dfda186332c33378d1eb16c72cdc87d -address /run/k3s/containerd/containerd.sock root 4435 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a74c9ec7d99785c2f2d4e6826aa80c22eb8b38249e8f99679ece00a818e9b7b3 -address /run/k3s/containerd/containerd.sock root 4985 1 0 Sep11 ? 00:00:53 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b0c9784dbe0fcbe8be10c857976e70ec84a208cb814e87b5ca085a02d434f8c -address /run/k3s/containerd/containerd.sock root 5056 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id f3231ff35f18056e74eda14907f296be15f7ea1c6ae5ab7904e27d4d18183301 -address /run/k3s/containerd/containerd.sock root 5868 1 0 Sep11 ? 00:00:27 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3e908c4d0b10df275bdef6f72fbcfa09517d11cf749236ad020364e14d77bc93 -address /run/k3s/containerd/containerd.sock root 6158 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id bec5780a5c73fa3154aa3b5ee26cdf23202db821205893e7e66ae17e6103e97b -address /run/k3s/containerd/containerd.sock root 7366 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a81b78845bdcaef710314c93e5ea0d0617f37a8929472f7b570ab90c6667f57f -address /run/k3s/containerd/containerd.sock root 97274 1 0 16:13 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id c90652c935d9e79af45b9a9ac2b4fe315e2a761a0525737fdb95c680123a164c -address /run/k3s/containerd/containerd.sock root 98309 1 0 16:16 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 431a2763636488efd7d104c228d41f4f2ecd8c06a7fc375d8977ab0d238936a8 -address /run/k3s/containerd/containerd.sock root 98493 1 0 16:16 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b480d2fa55c8cc0105ec1902887040c017f03d5b1eb6c73c24bb9d523ad9b37 -address /run/k3s/containerd/containerd.sock
+```
+
+### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and remove the `DenyServiceExternalIPs`
+from enabled admission plugins.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep containerd | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' is present OR '--enable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 663 1 0 Sep11 ? 00:00:08 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd root 3021 1 0 Sep11 ? 00:00:29 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4790d392966915d995e666002c56ed4cce6dd86f305ce2ee390547a1fcbf6c82 -address /run/k3s/containerd/containerd.sock root 3035 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9d73e6a160ccde6c7c7d4ba0df3b9f696e3de2ebfc0f19a1dcbdf13aea496427 -address /run/k3s/containerd/containerd.sock root 3235 1 0 Sep11 ? 00:00:31 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 587f6221ee9f36c877231ada8d816a799dfda186332c33378d1eb16c72cdc87d -address /run/k3s/containerd/containerd.sock root 4435 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a74c9ec7d99785c2f2d4e6826aa80c22eb8b38249e8f99679ece00a818e9b7b3 -address /run/k3s/containerd/containerd.sock root 4985 1 0 Sep11 ? 00:00:53 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b0c9784dbe0fcbe8be10c857976e70ec84a208cb814e87b5ca085a02d434f8c -address /run/k3s/containerd/containerd.sock root 5056 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id f3231ff35f18056e74eda14907f296be15f7ea1c6ae5ab7904e27d4d18183301 -address /run/k3s/containerd/containerd.sock root 5868 1 0 Sep11 ? 00:00:27 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3e908c4d0b10df275bdef6f72fbcfa09517d11cf749236ad020364e14d77bc93 -address /run/k3s/containerd/containerd.sock root 6158 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id bec5780a5c73fa3154aa3b5ee26cdf23202db821205893e7e66ae17e6103e97b -address /run/k3s/containerd/containerd.sock root 7366 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a81b78845bdcaef710314c93e5ea0d0617f37a8929472f7b570ab90c6667f57f -address /run/k3s/containerd/containerd.sock root 97274 1 0 16:13 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id c90652c935d9e79af45b9a9ac2b4fe315e2a761a0525737fdb95c680123a164c -address /run/k3s/containerd/containerd.sock root 98309 1 0 16:16 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 431a2763636488efd7d104c228d41f4f2ecd8c06a7fc375d8977ab0d238936a8 -address /run/k3s/containerd/containerd.sock root 98493 1 0 16:16 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b480d2fa55c8cc0105ec1902887040c017f03d5b1eb6c73c24bb9d523ad9b37 -address /run/k3s/containerd/containerd.sock
+```
+
+### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the
+apiserver and kubelets. Then, edit API server pod specification file
+/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
+kubelet client certificate and key parameters as below.
+--kubelet-client-certificate=
+--kubelet-client-key=
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'
+```
+
+**Expected Result**:
+
+```console
+'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between
+the apiserver and kubelets. Then, edit the API server pod specification file
+/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
+--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
+--kubelet-certificate-authority=
+Permissive - When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.
+
+### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
+One such example could be as below.
+--authorization-mode=RBAC
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' does not have 'AlwaysAllow'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to a value that includes Node.
+--authorization-mode=Node,RBAC
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' has 'Node'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
+for example `--authorization-mode=Node,RBAC`.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' has 'RBAC'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and set the desired limits in a configuration file.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+and set the below parameters.
+--enable-admission-plugins=...,EventRateLimit,...
+--admission-control-config-file=
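+
+On K3s, the same two parameters can be supplied through the server configuration file rather than a static manifest. A sketch, assuming the standard `/etc/rancher/k3s/config.yaml` location; the admission configuration path below is illustrative:
+
+```yaml
+# /etc/rancher/k3s/config.yaml (sketch)
+kube-apiserver-arg:
+  - "enable-admission-plugins=NodeRestriction,EventRateLimit"
+  - "admission-control-config-file=/etc/rancher/k3s/admission.yaml"
+```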
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'EventRateLimit'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
+value that does not include AlwaysAdmit.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to include
+AlwaysPullImages.
+--enable-admission-plugins=...,AlwaysPullImages,...
+
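+Note that K3s has no static pod manifest at /etc/kubernetes/manifests/kube-apiserver.yaml; that path comes from the upstream CIS wording. A hedged sketch of the K3s equivalent (the plugin list below is illustrative; keep whatever plugins your cluster already requires):
+
+```bash
+# Illustrative only, not a drop-in remediation: pass the admission plugin
+# list to the embedded kube-apiserver at K3s server start.
+k3s server --kube-apiserver-arg="enable-admission-plugins=NodeRestriction,ServiceAccount,AlwaysPullImages"
+```
+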
+**Audit:**
+
+```bash
+/bin/ps -ef | grep containerd | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' is present
+```
+
+**Returned Value**:
+
+```console
+root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 663 1 0 Sep11 ? 00:00:08 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd root 3021 1 0 Sep11 ? 00:00:29 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4790d392966915d995e666002c56ed4cce6dd86f305ce2ee390547a1fcbf6c82 -address /run/k3s/containerd/containerd.sock root 3035 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9d73e6a160ccde6c7c7d4ba0df3b9f696e3de2ebfc0f19a1dcbdf13aea496427 -address /run/k3s/containerd/containerd.sock root 3235 1 0 Sep11 ? 00:00:31 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 587f6221ee9f36c877231ada8d816a799dfda186332c33378d1eb16c72cdc87d -address /run/k3s/containerd/containerd.sock root 4435 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a74c9ec7d99785c2f2d4e6826aa80c22eb8b38249e8f99679ece00a818e9b7b3 -address /run/k3s/containerd/containerd.sock root 4985 1 0 Sep11 ? 00:00:53 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b0c9784dbe0fcbe8be10c857976e70ec84a208cb814e87b5ca085a02d434f8c -address /run/k3s/containerd/containerd.sock root 5056 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id f3231ff35f18056e74eda14907f296be15f7ea1c6ae5ab7904e27d4d18183301 -address /run/k3s/containerd/containerd.sock root 5868 1 0 Sep11 ? 00:00:27 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3e908c4d0b10df275bdef6f72fbcfa09517d11cf749236ad020364e14d77bc93 -address /run/k3s/containerd/containerd.sock root 6158 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id bec5780a5c73fa3154aa3b5ee26cdf23202db821205893e7e66ae17e6103e97b -address /run/k3s/containerd/containerd.sock root 7366 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a81b78845bdcaef710314c93e5ea0d0617f37a8929472f7b570ab90c6667f57f -address /run/k3s/containerd/containerd.sock root 97274 1 0 16:13 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id c90652c935d9e79af45b9a9ac2b4fe315e2a761a0525737fdb95c680123a164c -address /run/k3s/containerd/containerd.sock root 98309 1 0 16:16 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 431a2763636488efd7d104c228d41f4f2ecd8c06a7fc375d8977ab0d238936a8 -address /run/k3s/containerd/containerd.sock root 98493 1 0 16:16 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b480d2fa55c8cc0105ec1902887040c017f03d5b1eb6c73c24bb9d523ad9b37 -address /run/k3s/containerd/containerd.sock
+```
+
+### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to include
+SecurityContextDeny, unless PodSecurityPolicy is already in place.
+--enable-admission-plugins=...,SecurityContextDeny,...
+Permissive - Enabling Pod Security Policy can cause applications to unexpectedly fail.
+
+### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and create ServiceAccount objects as per your environment.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
+value that does not include ServiceAccount.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --disable-admission-plugins parameter to
+ensure it does not include NamespaceLifecycle.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to a
+value that includes NodeRestriction.
+--enable-admission-plugins=...,NodeRestriction,...
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'NodeRestriction'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.16 Ensure that the --secure-port argument is not set to 0 - Note: This recommendation is obsolete and will be deleted per the consensus process (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and either remove the --secure-port parameter or
+set it to a different (non-zero) desired port.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'secure-port'
+```
+
+**Expected Result**:
+
+```console
+'--secure-port' is greater than 0 OR '--secure-port' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.17 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'profiling'
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-path parameter to a suitable path and
+file where you would like audit logs to be written, for example,
+--audit-log-path=/var/log/apiserver/audit.log
+Permissive.
+
+### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxage parameter to 30
+or as an appropriate number of days, for example,
+--audit-log-maxage=30
+Permissive.
+
+### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
+value. For example,
+--audit-log-maxbackup=10
+Permissive.
+
+### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
+For example, to set it as 100 MB, --audit-log-maxsize=100
+Permissive.
+
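+Although checks 1.2.19 through 1.2.21 are scored Not Applicable here, the returned values earlier in this report show that K3s already starts the apiserver with audit-log flags. A hedged way to confirm them on a default K3s install (paths assume the stock journal location used by the audit commands above):
+
+```bash
+# Print only the audit-log flags from the most recent kube-apiserver start line.
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -o '\-\-audit-log-[a-z]*=[^ ]*'
+```
+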
+### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+and set the below parameter as appropriate and if needed.
+For example, --request-timeout=300s
+Permissive.
+
+### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--service-account-lookup=true
+Alternatively, you can delete the --service-account-lookup parameter from this file so
+that the default takes effect.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --service-account-key-file parameter
+to the public key file for service accounts. For example,
+--service-account-key-file=
+
+### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate and key file parameters.
+--etcd-certfile=
+--etcd-keyfile=
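+
+Note that K3s does not use a static pod manifest at `/etc/kubernetes/manifests/kube-apiserver.yaml`; it normally configures the etcd client certificate flags itself, as the audit output in this section shows. As a hedged, illustrative sketch only (the config-file mechanism is a K3s convention; the paths are taken from the audit output, and you should verify both against your K3s version), an explicit override might look like:
+
+```yaml
+# /etc/rancher/k3s/config.yaml -- illustrative sketch; K3s normally sets these flags automatically
+kube-apiserver-arg:
+  - "etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt"
+  - "etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key"
+```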
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+  # If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 1.2.29
+```
+
+**Expected Result**:
+
+```console
+'--etcd-certfile' is present AND '--etcd-keyfile' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- 
+--requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the TLS certificate and private key file parameters.
+--tls-cert-file=
+--tls-private-key-file=
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep -A1 'Running kube-apiserver' | tail -n2
+```
+
+**Expected Result**:
+
+```console
+'--tls-cert-file' is present AND '--tls-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- 
+--requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
+```
+
+### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the client certificate authority file.
+--client-ca-file=
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'client-ca-file'
+```
+
+**Expected Result**:
+
+```console
+'--client-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- 
+--requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate authority file parameter.
+--etcd-cafile=
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-cafile'
+```
+
+**Expected Result**:
+
+```console
+'--etcd-cafile' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- 
+--requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --encryption-provider-config parameter to the path of that file.
+For example, --encryption-provider-config=
+Permissive - Enabling encryption changes how data can be recovered as data is encrypted.
+
+### 1.2.30 Ensure that encryption providers are appropriately configured (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+In this file, choose aescbc, kms or secretbox as the encryption provider.
+Permissive - Enabling encryption changes how data can be recovered as data is encrypted.
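+
+The EncryptionConfig file referenced above follows the Kubernetes `EncryptionConfiguration` API. As a minimal, illustrative sketch using the `aescbc` provider (the key name and placeholder secret are hypothetical; generate a real base64-encoded 32-byte key before use):
+
+```yaml
+# Illustrative sketch only; replace the placeholder with a real base64-encoded 32-byte key
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets
+    providers:
+      - aescbc:
+          keys:
+            - name: key1
+              secret: <BASE64-ENCODED-32-BYTE-KEY>
+      - identity: {}
+```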
+
+### 1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
+TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
+TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
+TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
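+
+Because K3s does not run the apiserver from a static pod manifest, a flag such as `--tls-cipher-suites` is typically supplied through the K3s server configuration instead. A hedged sketch (the config-file mechanism is a K3s convention; verify against your K3s version), reusing the suite list from the audit output in this section:
+
+```yaml
+# /etc/rancher/k3s/config.yaml -- illustrative sketch; suite list matches the audit output in this section
+kube-apiserver-arg:
+  - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
+```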
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'tls-cipher-suites'
+```
+
+**Expected Result**:
+
+```console
+'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- 
+--requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+## 1.3 Controller Manager
+### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
+for example, --terminated-pod-gc-threshold=10
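+
+On K3s, controller manager flags are likewise passed through the server configuration rather than a static pod manifest. A hedged sketch (the config-file mechanism is a K3s convention; the value mirrors the audit output in this section):
+
+```yaml
+# /etc/rancher/k3s/config.yaml -- illustrative sketch; verify against your K3s version
+kube-controller-manager-arg:
+  - "terminated-pod-gc-threshold=10"
+```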
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'terminated-pod-gc-threshold'
+```
+
+**Expected Result**:
+
+```console
+'--terminated-pod-gc-threshold' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
+```
+
+### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'profiling'
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
+```
+
+### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node to set the below parameter.
+--use-service-account-credentials=true
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'use-service-account-credentials'
+```
+
+**Expected Result**:
+
+```console
+'--use-service-account-credentials' is not equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
+```
+
+### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --service-account-private-key-file parameter
+to the private key file for service accounts.
+--service-account-private-key-file=
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'service-account-private-key-file'
+```
+
+**Expected Result**:
+
+```console
+'--service-account-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
+```
+
+### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
+--root-ca-file=
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'root-ca-file'
+```
+
+**Expected Result**:
+
+```console
+'--root-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --terminated-pod-gc-threshold=10 --use-service-account-credentials=true"
+```
+
+### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
+--feature-gates=RotateKubeletServerCertificate=true
+Not Applicable.
+
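Since this check is marked Not Applicable for K3s (the kubelet is embedded in the k3s process rather than configured through a controller-manager manifest), a manual spot check of the kubelet feature gates can still be useful. The sketch below illustrates the grep against a sample journal line; the `LOG_LINE` value is a hypothetical example, not output from this scan.

```shell
# Hypothetical sample of a k3s journal line; on a live node it would come from:
#   journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1
LOG_LINE='level=info msg="Running kubelet --feature-gates=RotateKubeletServerCertificate=true --anonymous-auth=false"'

# Extract just the rotation feature gate, if present.
echo "$LOG_LINE" | grep -o 'RotateKubeletServerCertificate=[a-z]*'
```

Run against the sample line above, the grep prints `RotateKubeletServerCertificate=true`.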
+### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and ensure the correct value for the --bind-address parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep containerd | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--bind-address' is present OR '--bind-address' is not present
+```
+
+**Returned Value**:
+
+```console
+root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 663 1 0 Sep11 ? 00:00:08 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd root 3021 1 0 Sep11 ? 00:00:29 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4790d392966915d995e666002c56ed4cce6dd86f305ce2ee390547a1fcbf6c82 -address /run/k3s/containerd/containerd.sock root 3035 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9d73e6a160ccde6c7c7d4ba0df3b9f696e3de2ebfc0f19a1dcbdf13aea496427 -address /run/k3s/containerd/containerd.sock root 3235 1 0 Sep11 ? 00:00:31 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 587f6221ee9f36c877231ada8d816a799dfda186332c33378d1eb16c72cdc87d -address /run/k3s/containerd/containerd.sock root 4435 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a74c9ec7d99785c2f2d4e6826aa80c22eb8b38249e8f99679ece00a818e9b7b3 -address /run/k3s/containerd/containerd.sock root 4985 1 0 Sep11 ? 00:00:53 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b0c9784dbe0fcbe8be10c857976e70ec84a208cb814e87b5ca085a02d434f8c -address /run/k3s/containerd/containerd.sock root 5056 1 0 Sep11 ? 
00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id f3231ff35f18056e74eda14907f296be15f7ea1c6ae5ab7904e27d4d18183301 -address /run/k3s/containerd/containerd.sock root 5868 1 0 Sep11 ? 00:00:27 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3e908c4d0b10df275bdef6f72fbcfa09517d11cf749236ad020364e14d77bc93 -address /run/k3s/containerd/containerd.sock root 6158 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id bec5780a5c73fa3154aa3b5ee26cdf23202db821205893e7e66ae17e6103e97b -address /run/k3s/containerd/containerd.sock root 7366 1 0 Sep11 ? 00:00:28 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id a81b78845bdcaef710314c93e5ea0d0617f37a8929472f7b570ab90c6667f57f -address /run/k3s/containerd/containerd.sock root 97274 1 0 16:13 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id c90652c935d9e79af45b9a9ac2b4fe315e2a761a0525737fdb95c680123a164c -address /run/k3s/containerd/containerd.sock root 98309 1 0 16:16 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 431a2763636488efd7d104c228d41f4f2ecd8c06a7fc375d8977ab0d238936a8 -address /run/k3s/containerd/containerd.sock root 98493 1 0 16:16 ? 00:00:00 /var/lib/rancher/k3s/data/3cdacaf539fc388d8e542a8d643948e3c7bfa4a7e91b7521102325e0ce8581b6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5b480d2fa55c8cc0105ec1902887040c017f03d5b1eb6c73c24bb9d523ad9b37 -address /run/k3s/containerd/containerd.sock
+```
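
The recorded audit for this check lists containerd processes rather than the controller-manager flags. A direct way to confirm the bind address is to grep the controller-manager invocation from the journal; the sketch below runs against a hypothetical sample line so the extraction is shown end to end.

```shell
# Hypothetical sample; on a live k3s server the line would come from:
#   journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1
LOG_LINE='msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --secure-port=10257"'

# Pull out only the bind address flag (-e keeps grep from parsing the
# pattern's leading dashes as options).
echo "$LOG_LINE" | grep -o -e '--bind-address=[0-9.]*'
```

Against the sample line, this prints `--bind-address=127.0.0.1`.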
+
+## 1.4 Scheduler
+### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
+```
+
+### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
+on the control plane node and ensure the correct value for the --bind-address parameter.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'bind-address'
+```
+
+**Expected Result**:
+
+```console
+'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
+```
+
+## 2 Etcd Node Configuration
+### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the etcd service documentation and configure TLS encryption.
+Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
+on the master node and set the below parameters.
+--cert-file=
+--key-file=
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.1
+```
+
+**Expected Result**:
+
+```console
+'cert-file' is present AND 'key-file' is present
+```
+
+**Returned Value**:
+
+```console
+cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
+```
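
The flattened returned value above corresponds to a YAML stanza in the k3s-managed etcd config file. The sketch below shows the approximate shape of that stanza, reconstructed from the values this scan returned for checks 2.1, 2.2, and 2.7; it is illustrative, not a verbatim dump of `/var/lib/rancher/k3s/server/db/etcd/config`.

```yaml
# Approximate shape of the client-transport-security section that the
# audit script greps (paths taken from this scan's returned values):
client-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt
  key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
  client-cert-auth: true
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt
```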
+
+### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
+node and set the below parameter.
+--client-cert-auth="true"
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.2
+```
+
+**Expected Result**:
+
+```console
+'--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+client-cert-auth: true
+```
+
+### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
+node and either remove the --auto-tls parameter or set it to false.
+--auto-tls=false
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.3
+```
+
+**Expected Result**:
+
+```console
+'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
+```
+
+**Returned Value**:
+
+```console
+error: process ID list syntax error Usage: ps [options] Try 'ps --help ' or 'ps --help ' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory
+```
+
+### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the etcd service documentation and configure peer TLS encryption as appropriate
+for your etcd cluster.
+Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the
+master node and set the below parameters.
+--peer-cert-file=
+--peer-key-file=
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.4
+```
+
+**Expected Result**:
+
+```console
+'cert-file' is present AND 'key-file' is present
+```
+
+**Returned Value**:
+
+```console
+cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
+```
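
As with check 2.1, the single-line returned value maps onto a YAML stanza in the etcd config. The sketch below shows the approximate shape of the peer section, reconstructed from this scan's returned values for checks 2.4, 2.5, and 2.7; it is illustrative, not a verbatim dump of the config file.

```yaml
# Approximate shape of the peer-transport-security section that the
# audit script greps (paths taken from this scan's returned values):
peer-transport-security:
  cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt
  key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
  client-cert-auth: true
  trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt
```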
+
+### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
+node and set the below parameter.
+--peer-client-cert-auth=true
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.5
+```
+
+**Expected Result**:
+
+```console
+'--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+client-cert-auth: true
+```
+
+### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
+node and either remove the --peer-auto-tls parameter or set it to false.
+--peer-auto-tls=false
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.6
+```
+
+**Expected Result**:
+
+```console
+'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
+```
+
+**Returned Value**:
+
+```console
+error: process ID list syntax error Usage: ps [options] Try 'ps --help ' or 'ps --help ' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory
+```
+
+### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+[Manual test]
+Follow the etcd documentation and create a dedicated certificate authority setup for the
+etcd service.
+Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the
+master node and set the below parameter.
+--trusted-ca-file=
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
+ case $1 in
+ "1.1.11")
+ echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+ "1.2.29")
+ echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+ "2.1")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.2")
+ echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.3")
+ echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.4")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+ "2.5")
+ echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
+ "2.6")
+ echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+ "2.7")
+ echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+ esac
+else
+# If another database is running, return whatever is required to pass the scan
+ case $1 in
+ "1.1.11")
+ echo "700";;
+ "1.2.29")
+ echo "--etcd-certfile AND --etcd-keyfile";;
+ "2.1")
+ echo "cert-file AND key-file";;
+ "2.2")
+ echo "--client-cert-auth=true";;
+ "2.3")
+ echo "false";;
+ "2.4")
+ echo "peer-cert-file AND peer-key-file";;
+ "2.5")
+ echo "--client-cert-auth=true";;
+ "2.6")
+ echo "--peer-auto-tls=false";;
+ "2.7")
+ echo "--trusted-ca-file";;
+ esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.7
+```
+
+**Expected Result**:
+
+```console
+'trusted-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt
+```
+
+## 3.1 Authentication and Authorization
+### 3.1.1 Client certificate authentication should not be used for users (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
+implemented in place of client certificates.
+
+### 3.1.2 Service account token authentication should not be used for users (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
+in place of service account tokens.
+
+### 3.1.3 Bootstrap token authentication should not be used for users (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
+in place of bootstrap tokens.
+
+## 3.2 Logging
+### 3.2.1 Ensure that a minimal audit policy is created (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Create an audit policy file for your cluster.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file'
+```
+
+**Expected Result**:
+
+```console
+'--audit-policy-file' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- 
--requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the audit policy provided for the cluster and ensure that it covers
+at least the following areas:
+- Access to Secrets managed by the cluster. Care should be taken to only
+ log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
+ order to avoid risk of logging sensitive data.
+- Modification of Pod and Deployment objects.
+- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
+For most requests, minimally logging at the Metadata level is recommended
+(the most basic level of logging).
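
A minimal policy covering the areas listed above might look like the following. This is an illustrative sketch of the `audit.k8s.io/v1` policy format, not the policy shipped with this cluster.

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log only metadata for Secrets, ConfigMaps, and TokenReviews to avoid
  # capturing sensitive payloads.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
      - group: "authentication.k8s.io"
        resources: ["tokenreviews"]
  # Record request bodies for Pod and Deployment modifications.
  - level: Request
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: ""
        resources: ["pods"]
      - group: "apps"
        resources: ["deployments"]
  # Capture use of exec/port-forward/proxy subresources.
  - level: Metadata
    resources:
      - group: ""
        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
  # Everything else at the most basic level.
  - level: Metadata
```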
+
+## 4.1 Worker Node Configuration Files
+### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
Run the command below (based on the file location on your system) on each worker node.
+For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Not Applicable - All configuration is passed in as arguments at container run time.
+
+### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
Run the command below (based on the file location on your system) on each worker node.
+For example,
+chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Not Applicable. All configuration is passed in as arguments at container run time.
+
+### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
Run the command below (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
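The remediation can be made safe to re-run by only changing the file when it exists. A sketch, demonstrated on a scratch file so it is self-contained; on a worker node, point `f` at `/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig` instead:

```shell
# Demo file standing in for the kubeproxy kubeconfig; on a real node use
# f=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
f=/tmp/kubeproxy-demo.kubeconfig
touch "$f"

# Tighten permissions only if the file exists (it is absent on some setups).
if [ -e "$f" ]; then
  chmod 600 "$f"
fi

stat -c %a "$f"   # prints 600
```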
+
+**Audit:**
+
+```bash
+stat -c %a /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'permissions' is present
+```
+
+**Returned Value**:
+
+```console
+600
+```
+
+### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
Run the command below (based on the file location on your system) on each worker node.
+For example, chown root:root /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
Run the command below (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /var/lib/rancher/k3s/server/cred/admin.kubeconfig
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
Run the command below (based on the file location on your system) on each worker node.
+For example,
+chown root:root /var/lib/rancher/k3s/server/cred/admin.kubeconfig
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/k3s/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
Run the following command to modify the permissions of the file referenced by
`--client-ca-file`:
chmod 600 &lt;filename&gt;
+
+**Audit:**
+
+```bash
+stat -c %a /var/lib/rancher/k3s/server/tls/server-ca.crt
+```
+
+**Expected Result**:
+
+```console
+'permissions' is present
+```
+
+**Returned Value**:
+
+```console
+644
+```
+
+### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
Run the following command to modify the ownership of the file referenced by `--client-ca-file`:
chown root:root &lt;filename&gt;
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/k3s/server/tls/client-ca.crt
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chmod 600 /var/lib/kubelet/config.yaml
+
+### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chown root:root /var/lib/kubelet/config.yaml
+Not Applicable.
+All configuration is passed in as arguments at container run time.
+
+## 4.2 Kubelet
+### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
+`false`.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+`--anonymous-auth=false`
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
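On K3s there is usually no standalone kubelet systemd unit; kubelet flags are commonly passed through the K3s configuration file instead, and you restart the `k3s` service rather than `kubelet.service`. A sketch, written to a scratch path here (on a node the content belongs in `/etc/rancher/k3s/config.yaml`):

```shell
# Example K3s config fragment passing the kubelet flag; /tmp is used for
# illustration -- on a node this belongs in /etc/rancher/k3s/config.yaml.
cat > /tmp/k3s-config-example.yaml <<'EOF'
kubelet-arg:
  - "anonymous-auth=false"
EOF

# Apply on a real node by restarting the K3s service:
#   systemctl restart k3s
grep 'anonymous-auth' /tmp/k3s-config-example.yaml
```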
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" | grep -v grep; else echo "--anonymous-auth=false"; fi'
+```
+
+**Expected Result**:
+
+```console
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- 
--requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
+using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_AUTHZ_ARGS variable.
+--authorization-mode=Webhook
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode" | grep -v grep; else echo "--authorization-mode=Webhook"; fi'
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' does not have 'AlwaysAllow'
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- 
--requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
+the location of the client CA file.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_AUTHZ_ARGS variable.
+--client-ca-file=
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "client-ca-file" | grep -v grep; else echo "--client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt"; fi'
+```
+
+**Expected Result**:
+
+```console
+'--client-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:00 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:00Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- 
--requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
+```
+
+### 4.2.4 Verify that the --read-only-port argument is set to 0 (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+--read-only-port=0
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'read-only-port'
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--read-only-port' is equal to '0' OR '--read-only-port' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:15 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:15Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-12-34 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=af02ecbc-1e4e-422e-8b4d-4b2aa24a9d46 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
+```
+
+### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
+value other than 0.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+--streaming-connection-idle-timeout=5m
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'streaming-connection-idle-timeout'
+```
+
+### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove the --make-iptables-util-chains argument from the
+KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'make-iptables-util-chains'
+```
+
+**Expected Result**:
+
+```console
+'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present
+```
+
+**Returned Value**:
+
+```console
+Sep 11 20:52:15 ip-172-31-12-34 k3s[2340]: time="2023-09-11T20:52:15Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-12-34 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=af02ecbc-1e4e-422e-8b4d-4b2aa24a9d46 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
+```
+
+### 4.2.7 Ensure that the --hostname-override argument is not set (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and remove the --hostname-override argument from the
+KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+Not Applicable.
+
+### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC containerd
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--event-qps' is present OR '--event-qps' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
+```
+
+### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
+of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
+to the location of the corresponding private key file.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
+--tls-cert-file=
+--tls-private-key-file=
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+Permissive - When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.
+
+### 4.2.10 Ensure that the --rotate-certificates argument is not set to false (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
If using a Kubelet config file, edit the file to set `rotateCertificates` to `true`, or
remove it altogether to use the default value.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
+variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC containerd
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--rotate-certificates' is present OR '--rotate-certificates' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
+```
+
+### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
+--feature-gates=RotateKubeletServerCertificate=true
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC containerd
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/kubelet/config.yaml
+```
+
+**Expected Result**:
+
+```console
+'RotateKubeletServerCertificate' is present OR 'RotateKubeletServerCertificate' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
+```
+
+### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
+TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+or to a subset of these values.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the --tls-cipher-suites parameter as follows, or to a subset of these values.
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC containerd
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--tls-cipher-suites' is present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
+```
+
+### 4.2.13 Ensure that a limit is set on pod PIDs (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Decide on an appropriate level for this parameter and set it,
either via the `--pod-max-pids` command line parameter or the `podPidsLimit` configuration file setting.
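As a sketch of the configuration-file route, a kubelet config fragment with a PID limit is shown below; the value 4096 is an arbitrary example, and on K3s the same setting is often passed as a `kubelet-arg` entry instead:

```shell
# Example kubelet config fragment; written to /tmp for illustration -- on a
# node this content would live in the file passed to the kubelet via --config.
cat > /tmp/kubelet-config-example.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 4096
EOF
grep 'podPidsLimit' /tmp/kubelet-config-example.yaml
```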
+
+**Audit:**
+
+```bash
+/bin/ps -fC containerd
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--pod-max-pids' is present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 527 1 0 Sep11 ? 00:01:28 /usr/bin/containerd root 2361 2340 3 Sep11 ? 00:40:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
+```
+
+## 5.1 RBAC and Service Accounts
+### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
+if they need this role or if they could use a role with fewer privileges.
+Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role:
+kubectl delete clusterrolebinding [name]
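Identifying the bindings can be scripted with `jq`. The filter below is shown against a sample JSON document so it is self-contained; on a live cluster, replace the `echo` with `kubectl get clusterrolebindings -o json` (the binding and subject names here are made up):

```shell
# Hypothetical clusterrolebinding list standing in for live cluster output.
sample='{"items":[
  {"metadata":{"name":"demo-binding"},
   "roleRef":{"kind":"ClusterRole","name":"cluster-admin"},
   "subjects":[{"kind":"User","name":"alice"}]},
  {"metadata":{"name":"view-binding"},
   "roleRef":{"kind":"ClusterRole","name":"view"},
   "subjects":[{"kind":"User","name":"bob"}]}
]}'

# Print each binding that grants cluster-admin, with its subjects.
echo "$sample" | jq -r '.items[]
  | select(.roleRef.name == "cluster-admin")
  | .metadata.name as $b
  | .subjects[]?
  | "\($b): \(.kind)/\(.name)"'
# prints: demo-binding: User/alice
```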
+
+### 5.1.2 Minimize access to secrets (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove get, list and watch access to Secret objects in the cluster.
+
+### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible replace any use of wildcards in clusterroles and roles with specific
+objects or actions.
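Wildcard rules can be located the same way. A self-contained sketch against sample JSON; on a live cluster, substitute `kubectl get clusterroles,roles --all-namespaces -o json` for the `echo` (the role names are made up):

```shell
# Hypothetical role list; one role uses wildcards, one does not.
sample='{"items":[
  {"metadata":{"name":"demo-wildcard-role"},
   "rules":[{"apiGroups":["*"],"resources":["*"],"verbs":["*"]}]},
  {"metadata":{"name":"demo-scoped-role"},
   "rules":[{"apiGroups":[""],"resources":["pods"],"verbs":["get","list"]}]}
]}'

# Print roles whose rules use "*" in apiGroups, resources, or verbs.
echo "$sample" | jq -r '.items[]
  | select(.rules[]?
      | (.apiGroups? // []) + (.resources? // []) + (.verbs? // [])
      | index("*"))
  | .metadata.name'
# prints: demo-wildcard-role
```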
+
+### 5.1.4 Minimize access to create pods (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove create access to pod objects in the cluster.
+
+### 5.1.5 Ensure that default service accounts are not actively used. (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Create explicit service accounts wherever a Kubernetes workload requires specific access
+to the Kubernetes API server.
Modify the configuration of each default service account to include this value:
`automountServiceAccountToken: false`
+
+**Audit Script:** `check_for_default_sa.sh`
+
+```bash
+#!/bin/bash
+
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l)
+if [[ ${count_sa} -gt 0 ]]; then
+ echo "false"
+ exit
+fi
+
+for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name")
+do
+ for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"')
+ do
+ read kind name <<<$(IFS=","; echo $result)
+ resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l)
+ if [[ ${resource_count} -gt 0 ]]; then
+ echo "false"
+ exit
+ fi
+ done
+done
+
+
+echo "true"
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_default_sa.sh
+```
+
+**Expected Result**:
+
+```console
+'true' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+true
+```
+
+### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Modify the definition of pods and service accounts which do not need to mount service
+account tokens to disable it.
+
+### 5.1.7 Avoid use of system:masters group (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Remove the system:masters group from all users in the cluster.
+
+### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove the impersonate, bind and escalate rights from subjects.
+
+### 5.1.9 Minimize access to create persistent volumes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove create access to PersistentVolume objects in the cluster.
+
+### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the proxy sub-resource of node objects.
+
+### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the approval sub-resource of certificatesigningrequest objects.
+
+### 5.1.12 Minimize access to webhook configuration objects (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.
+
+### 5.1.13 Minimize access to the service account token creation (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the token sub-resource of serviceaccount objects.
+
+## 5.2 Pod Security Standards
+### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that either Pod Security Admission or an external policy control system is in place
+for every namespace which contains user workloads.
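+
+For Pod Security Admission, enforcement is configured per namespace with labels. A sketch (the namespace name is illustrative, and the version pin is optional):
+
+```bash
+# Enforce the "restricted" Pod Security Standard on an example namespace
+kubectl label namespace my-app \
+  pod-security.kubernetes.io/enforce=restricted \
+  pod-security.kubernetes.io/enforce-version=v1.25
+```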
+
+### 5.2.2 Minimize the admission of privileged containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of privileged containers.
+
+### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostPID` containers.
+
+### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostIPC` containers.
+
+### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostNetwork` containers.
+
+### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+### 5.2.7 Minimize the admission of root containers (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs` with a range of UIDs that does not include 0 is set.
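+
+At the workload level, the corresponding setting is `runAsNonRoot` in the pod `securityContext`. A sketch (pod name, UID, and image are illustrative):
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example-nonroot
+spec:
+  securityContext:
+    runAsNonRoot: true
+    runAsUser: 1000        # any non-zero UID
+  containers:
+  - name: app
+    image: registry.example.com/app:latest   # example image
+EOF
+```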
+
+### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a PSP which forbids the admission of containers which do not drop all capabilities.
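+
+At the container level, dropping all capabilities looks like the following sketch (pod name and image are illustrative):
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example-nocaps
+spec:
+  containers:
+  - name: app
+    image: registry.example.com/app:latest   # example image
+    securityContext:
+      capabilities:
+        drop: ["ALL"]      # drop every Linux capability
+EOF
+```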
+
+### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
+
+### 5.2.12 Minimize the admission of HostPath volumes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `hostPath` volumes.
+
+### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers which use `hostPort` sections.
+
+## 5.3 Network Policies and CNI
+### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If the CNI plugin in use does not support network policies, consideration should be given to
+making use of a different plugin, or finding an alternate mechanism for restricting traffic
+in the Kubernetes cluster.
+
+### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create NetworkPolicy objects as you need them.
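+
+A common starting point is a default-deny policy per namespace, which can then be relaxed with more specific policies. A sketch (the namespace name is illustrative):
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-all
+  namespace: my-app       # example namespace
+spec:
+  podSelector: {}         # selects every pod in the namespace
+  policyTypes:
+  - Ingress
+  - Egress
+EOF
+```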
+
+## 5.4 Secrets Management
+### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If possible, rewrite application code to read Secrets from mounted secret files, rather than
+from environment variables.
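+
+A sketch of mounting a Secret as files rather than exposing it through environment variables (pod, volume, image, and Secret names are illustrative):
+
+```bash
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example-secret-file
+spec:
+  containers:
+  - name: app
+    image: registry.example.com/app:latest   # example image
+    volumeMounts:
+    - name: credentials
+      mountPath: /etc/credentials
+      readOnly: true
+  volumes:
+  - name: credentials
+    secret:
+      secretName: app-credentials   # example Secret name
+EOF
+```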
+
+### 5.4.2 Consider external secret storage (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Refer to the Secrets management options offered by your cloud provider or a third-party
+secrets management solution.
+
+## 5.5 Extensible Admission Control
+### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and set up image provenance.
+
+## 5.7 General Policies
+### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create namespaces for objects in your deployment as you need
+them.
+
+### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
+An example is as below:
+
+```yaml
+securityContext:
+  seccompProfile:
+    type: RuntimeDefault
+```
+
+### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
+suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
+Containers.
+
+### 5.7.4 The default namespace should not be used (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
+resources and that all new resources are created in a specific namespace.
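+
+One way to check whether the default namespace is in use (requires cluster access):
+
+```bash
+# Anything beyond the built-in "kubernetes" Service suggests that
+# workloads are being created in the default namespace.
+kubectl get all -n default
+```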
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.7-k8s-v1.25.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.7-k8s-v1.25.md
deleted file mode 100644
index 16614a7e520..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.7-k8s-v1.25.md
+++ /dev/null
@@ -1,3148 +0,0 @@
----
-title: K3s Self-Assessment Guide - CIS Benchmark v1.7 - K8s v1.25
----
-
-This document is a companion to the [K3s Hardening Guide](../../../../pages-for-subheaders/k3s-hardening-guide.md), which provides prescriptive guidance on how to harden K3s clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
-
-This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
-
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
-|-----------------|-----------------------|--------------------|
-| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 |
-
-This document is for Rancher operators, security teams, auditors and decision makers.
-
-For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
-
-## Testing Methodology
-
-Each control in the CIS Kubernetes Benchmark was evaluated against a K3s cluster that was configured according to the accompanying hardening guide.
-
-Where control audits differ from the original CIS benchmark, the audit commands specific to K3s are provided for testing.
-
-These are the possible results for each control:
-
-- **Pass** - The K3s cluster passes the audit outlined in the benchmark.
-- **Not Applicable** - The control is not applicable to K3s because of how it is designed to operate. The remediation section explains why.
-- **Warn** - The control is manual in the CIS benchmark and it depends on the cluster's use-case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure K3s doesn't prevent their implementation, but no further configuration or auditing of the cluster has been performed.
-
-This guide assumes that K3s is running as a systemd unit. Your installation may vary. Adjust the "audit" commands to fit your scenario.
-
-:::note
-
-This guide only covers `automated` (previously called `scored`) tests.
-
-:::
-
-### Controls
-
-## 1.1 Control Plane Node Configuration Files
-### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the
-control plane node.
-For example, chmod 644 /etc/kubernetes/manifests/kube-apiserver.yaml
-
-### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml
-
-### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644 /etc/kubernetes/manifests/kube-controller-manager.yaml
-
-### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
-
-### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644 /etc/kubernetes/manifests/kube-scheduler.yaml
-
-### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml
-
-### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 /etc/kubernetes/manifests/etcd.yaml
-
-### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root /etc/kubernetes/manifests/etcd.yaml
-
-### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644
-
-### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root
-
-### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above). For example,
-chmod 700 /var/lib/etcd
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 1.1.11
-```
-
-**Expected Result**:
-
-```console
-'700' is equal to '700'
-```
-
-**Returned Value**:
-
-```console
-700
-```
-
-### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above).
-For example, chown etcd:etcd /var/lib/etcd
-
-### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 /var/lib/rancher/k3s/server/cred/admin.kubeconfig
-
-### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/admin.conf
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 scheduler
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root scheduler
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 controllermanager
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/controller.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/controller.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root controllermanager
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/k3s/server/tls
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown -R root:root /etc/kubernetes/pki/
-
-**Audit:**
-
-```bash
-find /var/lib/rancher/k3s/server/tls | xargs stat -c %U:%G
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root root:root
-```
-
-### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 644 /etc/kubernetes/pki/*.crt
-
-**Audit:**
-
-```bash
-stat -c %n %a /var/lib/rancher/k3s/server/tls/*.crt
-```
-
-### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 600 /etc/kubernetes/pki/*.key
-
-**Audit:**
-
-```bash
-stat -c %n %a /var/lib/rancher/k3s/server/tls/*.key
-```
-
-## 1.2 API Server
-### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---anonymous-auth=false
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth'
-```
-
-### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and configure alternate mechanisms for authentication. Then,
-edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the --token-auth-file= parameter.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--token-auth-file' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the `DenyServiceExternalIPs`
-from enabled admission plugins.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep containerd | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' is present OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 519 1 0 22:09 ? 00:00:00 /usr/bin/containerd root 801 1 0 22:09 ? 00:00:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock root 3864 1 0 22:31 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id d00174abbc275f6bb85c7f0be1d3154b9c91982a10b9dba6b5cb280f4d4c531d -address /run/k3s/containerd/containerd.sock root 4105 1 0 22:31 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7c2b546b4d2380bcb51278661f34cff94fad2ba06978e13f8f1b92dafcc89d43 -address /run/k3s/containerd/containerd.sock root 4206 1 0 22:31 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 68d8a55ff4663985be004608dbf78b0362f5522e18490c81d4c8dc9963de1556 -address /run/k3s/containerd/containerd.sock root 5374 1 0 22:31 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id ca0ae9e0b37dfd7b1ce05f72e1bc5a1be8f5cb08f2b4543081536de3bdbc925d -address /run/k3s/containerd/containerd.sock root 5443 1 0 22:31 ? 00:00:01 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3ea3c1cdbbd5adb8efd5c67a46aadd0fca9918dc0ad1f7cafe38b83171e3dc1b -address /run/k3s/containerd/containerd.sock root 7130 1 0 22:32 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4d838297d35a31003106ac5989c3547433985bb2964b47baad12cee6e375645e -address /run/k3s/containerd/containerd.sock root 7639 1 0 22:32 ? 
00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 341cb9bcd8486aa2f1acb8e1ae51baebd630ac6ed266643266c34d677f61c7d0 -address /run/k3s/containerd/containerd.sock root 10308 1 0 23:17 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id c534fbee8e0d06fd9b29bf8fc70a138975c6b18db25f1faf2615677dfdb4199e -address /run/k3s/containerd/containerd.sock root 11370 1 0 23:18 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4ff4b8776dac7a35b83616d341dbe4d5a689ac7fb9b8eee8db5978e3968380ea -address /run/k3s/containerd/containerd.sock root 13736 13723 2 23:21 ? 00:00:10 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd root 16022 1 0 23:29 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9027256349086e458119478e5e00384b1b76fbf5e6dbee23699f596a88d9f2bc -address /run/k3s/containerd/containerd.sock root 16159 1 0 23:29 ? 00:00:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 929bf369fc5881654f4c1925624151ddb7cea51073267b8d213d966ba45406f3 -address /run/k3s/containerd/containerd.sock
-```
-
-### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the --kubelet-https parameter.
-
-### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the
-apiserver and kubelets. Then, edit API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
-kubelet client certificate and key parameters as below.
---kubelet-client-certificate=
---kubelet-client-key=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between
-the apiserver and kubelets. Then, edit the API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
---kubelet-certificate-authority parameter to the path of the certificate authority's cert file,
-for example `--kubelet-certificate-authority=`.
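-
-On K3s there is no static pod manifest for the kube-apiserver; as a sketch (assuming a default K3s install), the flag can instead be passed through the K3s configuration file:
-
-```yaml
-# /etc/rancher/k3s/config.yaml -- hypothetical example; K3s already sets
-# this flag by default, as the returned value below shows.
-kube-apiserver-arg:
-  - "kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt"
-```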
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-certificate-authority' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow,
-for example `--authorization-mode=RBAC`.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes Node,
-for example `--authorization-mode=Node,RBAC`.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'Node'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
-for example `--authorization-mode=Node,RBAC`.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'RBAC'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and set the desired limits in a configuration file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-and set the following parameters:
-`--enable-admission-plugins=...,EventRateLimit,...`
-`--admission-control-config-file=`
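-
-A minimal sketch of such a configuration pair (hypothetical file names and limit values; consult the Kubernetes EventRateLimit documentation for the full schema):
-
-```yaml
-# admission-control-config.yaml (passed via --admission-control-config-file)
-apiVersion: apiserver.config.k8s.io/v1
-kind: AdmissionConfiguration
-plugins:
-  - name: EventRateLimit
-    path: eventconfig.yaml
-```
-
-```yaml
-# eventconfig.yaml -- limits the server to 50 events/s with a burst of 100
-apiVersion: eventratelimit.admission.k8s.io/v1alpha1
-kind: Configuration
-limits:
-  - type: Server
-    qps: 50
-    burst: 100
-```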
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'EventRateLimit'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
-value that does not include AlwaysAdmit.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-AlwaysPullImages, for example `--enable-admission-plugins=...,AlwaysPullImages,...`.
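-
-On K3s, this plugin could be appended to the default admission plugin list through the configuration file (a sketch, assuming a default K3s install; the list shown is the K3s default from the returned value below plus AlwaysPullImages):
-
-```yaml
-# /etc/rancher/k3s/config.yaml -- hypothetical example
-kube-apiserver-arg:
-  - "enable-admission-plugins=NodeRestriction,ServiceAccount,AlwaysPullImages"
-```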
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'AlwaysPullImages'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-SecurityContextDeny, unless PodSecurityPolicy is already in place,
-for example `--enable-admission-plugins=...,SecurityContextDeny,...`.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create ServiceAccount objects as per your environment.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
-value that does not include ServiceAccount.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'ServiceAccount'
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --disable-admission-plugins parameter to
-ensure it does not include NamespaceLifecycle.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.16 Ensure that the admission control plugin NodeRestriction is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure the NodeRestriction plug-in on kubelets.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to a
-value that includes NodeRestriction.
---enable-admission-plugins=...,NodeRestriction,...
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'NodeRestriction'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.17 Ensure that the --secure-port argument is not set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and either remove the --secure-port parameter or
-set it to a different (non-zero) desired port.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'secure-port'
-```
-
-**Expected Result**:
-
-```console
-'--secure-port' is greater than 0 OR '--secure-port' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.18 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'profiling'
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.19 Ensure that the --audit-log-path argument is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-path parameter to a suitable path and
-file where you would like audit logs to be written, for example,
---audit-log-path=/var/log/apiserver/audit.log
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-path'
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-path' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.20 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxage parameter to 30
-or to an appropriate number of days, for example,
---audit-log-maxage=30
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxage'
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxage' is greater or equal to 30
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.21 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
-value. For example,
---audit-log-maxbackup=10
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxbackup'
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxbackup' is greater or equal to 10
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.22 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
-For example, to set it to 100 MB, --audit-log-maxsize=100
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxsize'
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxsize' is greater or equal to 100
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.23 Ensure that the --request-timeout argument is set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-and set the below parameter as appropriate, if needed.
-For example, --request-timeout=300s
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'request-timeout'
-```
-
-**Expected Result**:
-
-```console
-'--request-timeout' is not present OR '--request-timeout' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.24 Ensure that the --service-account-lookup argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---service-account-lookup=true
-Alternatively, you can delete the --service-account-lookup parameter from this file so
-that the default takes effect.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'service-account-lookup'
-```
-
-**Expected Result**:
-
-```console
-'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.25 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --service-account-key-file parameter
-to the public key file for service accounts. For example,
---service-account-key-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'service-account-key-file'
-```
-
-**Expected Result**:
-
-```console
-'--service-account-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.26 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate and key file parameters.
---etcd-certfile=
---etcd-keyfile=
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-  # If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 1.2.29
-```
-
-**Expected Result**:
-
-```console
-'--etcd-certfile' is present AND '--etcd-keyfile' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.27 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the TLS certificate and private key file parameters.
---tls-cert-file=
---tls-private-key-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep -A1 'Running kube-apiserver' | tail -n2
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
-```
-
-### 1.2.28 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the client certificate authority file.
---client-ca-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'client-ca-file'
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.29 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate authority file parameter.
---etcd-cafile=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-cafile'
-```
-
-**Expected Result**:
-
-```console
-'--etcd-cafile' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.30 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --encryption-provider-config parameter to the path of that file.
-For example, --encryption-provider-config=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'encryption-provider-config'
-```
-
-**Expected Result**:
-
-```console
-'--encryption-provider-config' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 1.2.31 Ensure that encryption providers are appropriately configured (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-In this file, choose aescbc, kms or secretbox as the encryption provider.
-
-**Audit:**
-
-```bash
-grep aescbc /path/to/encryption-config.json
-```
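-
-Because this check is manual, the encryption configuration must be inspected by hand. A minimal sketch of an `EncryptionConfiguration` using the `aescbc` provider is shown below; the resource list, key name, and key value are illustrative assumptions, not values taken from the audited cluster. Note that k3s generates its own encryption config at `/var/lib/rancher/k3s/server/cred/encryption-config.json` when secrets encryption is enabled, so this file is only hand-written on clusters that manage the API server manifest directly.
-
-```yaml
-# Sketch of an EncryptionConfiguration with the aescbc provider.
-# Key name and secret are placeholders for illustration only.
-apiVersion: apiserver.config.k8s.io/v1
-kind: EncryptionConfiguration
-resources:
-  - resources:
-      - secrets
-    providers:
-      - aescbc:
-          keys:
-            - name: key1                            # illustrative key name
-              secret: <base64-encoded 32-byte key>  # placeholder, do not use literally
-      - identity: {}
-```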
-
-### 1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
-TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
-TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
-TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'tls-cipher-suites'
-```
-
-**Expected Result**:
-
-```console
-'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-## 1.3 Controller Manager
-### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
-for example, --terminated-pod-gc-threshold=10
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'terminated-pod-gc-threshold'
-```
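-
-Since k3s embeds the controller manager and has no `/etc/kubernetes/manifests/kube-controller-manager.yaml`, this flag is typically passed through the k3s server configuration instead. A sketch, assuming the standard k3s config file location and an example threshold value:
-
-```yaml
-# /etc/rancher/k3s/config.yaml (sketch; the threshold value is an example)
-kube-controller-manager-arg:
-  - "terminated-pod-gc-threshold=10"
-```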
-
-### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'profiling'
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
-```
-
-### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node to set the below parameter.
---use-service-account-credentials=true
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'use-service-account-credentials'
-```
-
-**Expected Result**:
-
-```console
-'--use-service-account-credentials' is not equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
-```
-
-### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --service-account-private-key-file parameter
-to the private key file for service accounts.
---service-account-private-key-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'service-account-private-key-file'
-```
-
-**Expected Result**:
-
-```console
-'--service-account-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
-```
-
-### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
---root-ca-file=
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'root-ca-file'
-```
-
-**Expected Result**:
-
-```console
-'--root-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
-```
-
-### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
---feature-gates=RotateKubeletServerCertificate=true
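-
-K3s does not run the controller manager from a static pod manifest, so there is no kube-controller-manager.yaml to edit. Extra flags are usually passed through to the embedded controller manager via the K3s configuration file instead. A sketch, assuming the default config location:
-
-```yaml
-# /etc/rancher/k3s/config.yaml (sketch; restart the k3s service after editing)
-kube-controller-manager-arg:
-  - "feature-gates=RotateKubeletServerCertificate=true"
-```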
-
-### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'bind-address'
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
-```
-
-## 1.4 Scheduler
-### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
-```
-
-### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'bind-address'
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
-```
-
-## 2 Etcd Node Configuration
-### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the etcd service documentation and configure TLS encryption.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
-on the master node and set the below parameters.
---cert-file=
---key-file=
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.1
-```
-
-**Expected Result**:
-
-```console
-'cert-file' is present AND 'key-file' is present
-```
-
-**Returned Value**:
-
-```console
-cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
-```
-
-### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
-node and set the below parameter.
---client-cert-auth="true"
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.2
-```
-
-**Expected Result**:
-
-```console
-'--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-client-cert-auth: true
-```
-
-### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
-node and either remove the --auto-tls parameter or set it to false.
---auto-tls=false
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.3
-```
-
-**Expected Result**:
-
-```console
-'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-error: process ID list syntax error Usage: ps [options] Try 'ps --help ' or 'ps --help ' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory
-```
-
-### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the etcd service documentation and configure peer TLS encryption as appropriate
-for your etcd cluster.
-Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the
-master node and set the below parameters.
---peer-cert-file=
---peer-key-file=
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.4
-```
-
-**Expected Result**:
-
-```console
-'cert-file' is present AND 'key-file' is present
-```
-
-**Returned Value**:
-
-```console
-cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
-```
-
-### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
-node and set the below parameter.
---peer-client-cert-auth=true
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.5
-```
-
-**Expected Result**:
-
-```console
-'--client-cert-auth' is present OR 'client-cert-auth' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-client-cert-auth: true
-```
-
-### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
-node and either remove the --peer-auto-tls parameter or set it to false.
---peer-auto-tls=false
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.6
-```
-
-**Expected Result**:
-
-```console
-'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-error: process ID list syntax error Usage: ps [options] Try 'ps --help ' or 'ps --help ' for additional help text. For more details see ps(1). cat: /proc//environ: No such file or directory
-```
-
-### 2.7 Ensure that a unique Certificate Authority is used for etcd (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-[Manual test]
-Follow the etcd documentation and create a dedicated certificate authority setup for the
-etcd service.
-Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the
-master node and set the below parameter.
---trusted-ca-file=
-
-**Audit Script:** `check_for_k3s_etcd.sh`
-
-```bash
-#!/bin/bash
-
-# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
-# before it checks the requirement
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-
-if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd cluster initializing' | grep -v grep | wc -l)" -gt 0 ]]; then
- case $1 in
- "1.1.11")
- echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
- "1.2.29")
- echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
- "2.1")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.2")
- echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.3")
- echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.4")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
- "2.5")
- echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth');;
- "2.6")
- echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
- "2.7")
- echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
- esac
-else
-# If another database is running, return whatever is required to pass the scan
- case $1 in
- "1.1.11")
- echo "700";;
- "1.2.29")
- echo "--etcd-certfile AND --etcd-keyfile";;
- "2.1")
- echo "cert-file AND key-file";;
- "2.2")
- echo "--client-cert-auth=true";;
- "2.3")
- echo "false";;
- "2.4")
- echo "peer-cert-file AND peer-key-file";;
- "2.5")
- echo "--client-cert-auth=true";;
- "2.6")
- echo "--peer-auto-tls=false";;
- "2.7")
- echo "--trusted-ca-file";;
- esac
-fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_k3s_etcd.sh 2.7
-```
-
-**Expected Result**:
-
-```console
-'trusted-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt
-```
-
-## 3.1 Authentication and Authorization
-### 3.1.1 Client certificate authentication should not be used for users (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
-implemented in place of client certificates.
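-
-On k3s, OIDC can be enabled by passing the upstream kube-apiserver OIDC flags through the K3s configuration file. A minimal sketch; the issuer URL, client ID, and claim names below are placeholders, not values taken from this benchmark:
-
-```yaml
-# /etc/rancher/k3s/config.yaml (sketch; values are placeholders)
-kube-apiserver-arg:
-  - "oidc-issuer-url=https://accounts.example.com"
-  - "oidc-client-id=kubernetes"
-  - "oidc-username-claim=email"
-  - "oidc-groups-claim=groups"
-```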
-
-## 3.2 Logging
-### 3.2.1 Ensure that a minimal audit policy is created (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Create an audit policy file for your cluster.
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file'
-```
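-
-Because k3s has no static apiserver manifest, the policy is typically wired up through the K3s configuration file. A sketch, assuming the policy file and log paths shown below (both are assumptions, not k3s defaults):
-
-```yaml
-# /etc/rancher/k3s/config.yaml (sketch; paths are assumptions)
-kube-apiserver-arg:
-  - "audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml"
-  - "audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log"
-  - "audit-log-maxage=30"
-```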
-
-### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the audit policy provided for the cluster and ensure that it covers
-at least the following areas,
-- Access to Secrets managed by the cluster. Care should be taken to only
- log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
- order to avoid risk of logging sensitive data.
-- Modification of Pod and Deployment objects.
-- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
- For most requests, minimally logging at the Metadata level is recommended
- (the most basic level of logging).
-
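-The areas above can be expressed as rules in an `audit.k8s.io/v1` policy. A minimal sketch, not a complete production policy:
-
-```yaml
-# Sketch of an audit policy covering the areas listed above
-apiVersion: audit.k8s.io/v1
-kind: Policy
-rules:
-  # Secrets, ConfigMaps, and TokenReviews at Metadata level only, so
-  # request/response bodies that may contain sensitive data are not logged.
-  - level: Metadata
-    resources:
-      - group: ""
-        resources: ["secrets", "configmaps"]
-      - group: "authentication.k8s.io"
-        resources: ["tokenreviews"]
-  # Record use of exec, port-forward, and proxy subresources.
-  - level: Metadata
-    resources:
-      - group: ""
-        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
-  # Catch-all: log everything else at the most basic level.
-  - level: Metadata
-```
-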
-## 4.1 Worker Node Configuration Files
-### 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chmod 644 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-
-### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-
-### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
-
-**Audit:**
-
-```bash
-stat -c %a /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'permissions' is present OR '/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig' is not present
-```
-
-**Returned Value**:
-
-```console
-644
-```
-
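-The file-permission checks in this section accept "644 or more restrictive", meaning the file grants no permission bit beyond `rw-r--r--`. That comparison can be sketched in bash (a hypothetical helper, not part of the benchmark tooling):
-
-```bash
-# Succeed if an octal mode grants no permission bit beyond 644
-# (owner rw, group r, other r), i.e. it is "644 or more restrictive".
-is_restrictive() {
-  mode=$1
-  # Mask out the bits 644 already allows; anything left over is "extra".
-  extra=$(( 0$mode & ~0644 & 0777 ))
-  [ "$extra" -eq 0 ]
-}
-
-is_restrictive 600 && echo "600 ok"
-is_restrictive 664 || echo "664 too open"
-```
-
-Feeding it the output of `stat -c %a` on a file reproduces the pass/fail logic of these checks.
-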
-### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the command below (based on the file location on your system) on each worker node.
-For example, chown root:root /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present OR '/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig' is not present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the command below (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /var/lib/rancher/k3s/server/cred/admin.kubeconfig
-
-**Audit:**
-
-```bash
-stat -c %a /var/lib/rancher/k3s/agent/kubelet.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'644' is equal to '644'
-```
-
-**Returned Value**:
-
-```console
-644
-```
-
-### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the command below (based on the file location on your system) on each worker node.
-For example,
-chown root:root /var/lib/rancher/k3s/server/cred/admin.kubeconfig
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/k3s/agent/kubelet.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the file permissions of the
---client-ca-file. For example,
-chmod 644 /var/lib/rancher/k3s/server/tls/server-ca.crt
-
-**Audit:**
-
-```bash
-stat -c %a /var/lib/rancher/k3s/server/tls/server-ca.crt
-```
-
-**Expected Result**:
-
-```console
-'644' is equal to '644' OR '640' is present OR '600' is present OR '444' is present OR '440' is present OR '400' is present OR '000' is present
-```
-
-**Returned Value**:
-
-```console
-644
-```
-
-### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the ownership of the --client-ca-file. For example,
-chown root:root /var/lib/rancher/k3s/server/tls/client-ca.crt
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/k3s/server/tls/client-ca.crt
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chmod 644 /var/lib/kubelet/config.yaml
-
-### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chown root:root /var/lib/kubelet/config.yaml
-
-## 4.2 Kubelet
-### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
-`false`.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
-`--anonymous-auth=false`
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" | grep -v grep; else echo "--anonymous-auth=false"; fi'
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
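-The audits above grep the last `Running kube-apiserver` journal line for a flag name. Pulling a flag's actual value out of such a line can be sketched as follows (a hypothetical helper; the benchmark itself only greps for the flag's presence):
-
-```bash
-# Print the value of one --flag=value pair from a logged command line.
-flag_value() {
-  printf '%s\n' "$2" | grep -o -- "$1=[^ \"]*" | head -n1 | cut -d= -f2
-}
-
-line='msg="Running kube-apiserver --anonymous-auth=false --authorization-mode=Node,RBAC"'
-flag_value --anonymous-auth "$line"      # prints: false
-flag_value --authorization-mode "$line"  # prints: Node,RBAC
-```
-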
-### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
-using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---authorization-mode=Webhook
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode" | grep -v grep; else echo "--authorization-mode=Webhook"; fi'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
-the location of the client CA file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---client-ca-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "client-ca-file" | grep -v grep; else echo "--client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt"; fi'
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:42 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:42Z" level=info msg="Running kube-apiserver --admission-control-config-file=/etc/rancher/k3s/config/rancher-psact.yaml --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log --audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,ServiceAccount --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
-```
-
-### 4.2.4 Ensure that the --read-only-port argument is set to 0 (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---read-only-port=0
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'read-only-port'
-```
-
-**Expected Result**:
-
-```console
-'--read-only-port' is equal to '0' OR '--read-only-port' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:44 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:44Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-31-124 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c42d922-ed1e-4e15-8414-d399d179d897 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
-```
-
-### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
-value other than 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---streaming-connection-idle-timeout=5m
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'streaming-connection-idle-timeout'
-```
-
-### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---protect-kernel-defaults=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'protect-kernel-defaults'
-```
-
-**Expected Result**:
-
-```console
-'--protect-kernel-defaults' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:44 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:44Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-31-124 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c42d922-ed1e-4e15-8414-d399d179d897 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
-```
-
-### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove the --make-iptables-util-chains argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'make-iptables-util-chains'
-```
-
-**Expected Result**:
-
-```console
-'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:44 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:44Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-31-124 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c42d922-ed1e-4e15-8414-d399d179d897 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
-```
-
-### 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and remove the --hostname-override argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-### 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --event-qps parameter in the KUBELET_SYSTEM_PODS_ARGS variable to an appropriate level.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC containerd
-```
-
-### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
-of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
-to the location of the corresponding private key file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
---tls-cert-file=
---tls-private-key-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-Feb 27 23:21:44 ip-172-31-31-124 k3s[13723]: time="2023-02-27T23:21:44Z" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-31-124 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c42d922-ed1e-4e15-8414-d399d179d897 --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
-```
-
-### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
-remove it altogether to use the default value.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
-variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC containerd
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'{.rotateCertificates}' is present OR '{.rotateCertificates}' is not present
-```
-
-### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
---feature-gates=RotateKubeletServerCertificate=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-or to a subset of these values.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --tls-cipher-suites parameter as follows, or to a subset of these values.
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
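-Whether a given `--tls-cipher-suites` value is a subset of the list above can be checked mechanically (a sketch; the allow-list is copied from the remediation text):
-
-```bash
-# Succeed if every suite in a comma-separated list is in the allow-list.
-strong_only() {
-  allowed=" TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_GCM_SHA256 "
-  for s in $(printf '%s\n' "$1" | tr ',' ' '); do
-    case "$allowed" in *" $s "*) ;; *) return 1 ;; esac
-  done
-}
-
-strong_only "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384" && echo "subset ok"
-strong_only "TLS_RSA_WITH_RC4_128_SHA" || echo "weak suite found"
-```
-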
-**Audit:**
-
-```bash
-/bin/ps -fC containerd
-```
-
-## 5.1 RBAC and Service Accounts
-### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
-if they need this role or if they could use a role with fewer privileges.
-Where possible, first bind users to a lower privileged role and then remove the
-clusterrolebinding to the cluster-admin role:
-kubectl delete clusterrolebinding [name]
-
-### 5.1.2 Minimize access to secrets (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove get, list and watch access to Secret objects in the cluster.
-
-### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, replace any use of wildcards in clusterroles and roles with specific
-objects or actions.
-
-### 5.1.4 Minimize access to create pods (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove create access to pod objects in the cluster.
-
-### 5.1.5 Ensure that default service accounts are not actively used. (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Create explicit service accounts wherever a Kubernetes workload requires specific access
-to the Kubernetes API server.
-Modify the configuration of each default service account to include this value
-automountServiceAccountToken: false
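-
-In manifest form, this field sits on the ServiceAccount object itself (a sketch; the namespace name is hypothetical):
-
-```yaml
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: default
-  namespace: example-ns   # hypothetical namespace
-automountServiceAccountToken: false
-```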
-
-### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Modify the definition of pods and service accounts which do not need to mount service
-account tokens to disable it.
-
-### 5.1.7 Avoid use of system:masters group (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Remove the system:masters group from all users in the cluster.
-
-### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove the impersonate, bind and escalate rights from subjects.
-
-## 5.2 Pod Security Standards
-### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that either Pod Security Admission or an external policy control system is in place
-for every namespace which contains user workloads.
-
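-With the built-in Pod Security Admission controller, such a policy is applied per namespace through labels (a sketch; the namespace name and chosen level are placeholders):
-
-```yaml
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: example-ns    # hypothetical namespace
-  labels:
-    pod-security.kubernetes.io/enforce: restricted
-```
-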
-### 5.2.2 Minimize the admission of privileged containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of privileged containers.
-
-### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostPID` containers.
-
-### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostIPC` containers.
-
-### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostNetwork` containers.
-
-### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
-
-### 5.2.7 Minimize the admission of root containers (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
-or `MustRunAs` with the range of UIDs not including 0, is set.
-
-### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with the `NET_RAW` capability.
-
-### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that `allowedCapabilities` is not present in policies for the cluster unless
-it is set to an empty array.
-
-### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the use of capabilites in applications running on your cluster. Where a namespace
-contains applicaions which do not require any Linux capabities to operate consider adding
-a PSP which forbids the admission of containers which do not drop all capabilities.
-
-### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
-
-### 5.2.12 Minimize the admission of HostPath volumes (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `hostPath` volumes.
-
-### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers which use `hostPort` sections.
-
-## 5.3 Network Policies and CNI
-### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If the CNI plugin in use does not support network policies, consideration should be given to
-making use of a different plugin, or finding an alternate mechanism for restricting traffic
-in the Kubernetes cluster.
-
-### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create NetworkPolicy objects as you need them.
-
-## 5.4 Secrets Management
-### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If possible, rewrite application code to read Secrets from mounted secret files, rather than
-from environment variables.
-
-### 5.4.2 Consider external secret storage (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Refer to the Secrets management options offered by your cloud provider or a third-party
-secrets management solution.
-
-## 5.5 Extensible Admission Control
-### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and setup image provenance.
-
-## 5.7 General Policies
-### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create namespaces for objects in your deployment as you need
-them.
-
-### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
-An example is as below:
-securityContext:
-seccompProfile:
-type: RuntimeDefault
-
-### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
-suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
-Containers.
-
-### 5.7.4 The default namespace should not be used (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
-resources and that all new resources are created in a specific namespace.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md
new file mode 100644
index 00000000000..903e42374f3
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-hardening-guide.md
@@ -0,0 +1,515 @@
+---
+title: RKE 加固指南
+---
+
+
+
+
+
+本文档提供了针对生产环境的 RKE 集群进行加固的具体指导,以便在使用 Rancher 部署之前进行配置。它概述了满足互联网安全中心(Center for Internet Security,CIS)Kubernetes benchmark controls 所需的配置和控制。
+
+:::note
+这份加固指南描述了如何确保你集群中的节点安全。我们建议你在安装 Kubernetes 之前遵循本指南。
+:::
+
+此加固指南适用于 RKE 集群,并与以下版本的 CIS Kubernetes Benchmark、Kubernetes 和 Rancher 相关联:
+
+| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 |
+|-----------------|-----------------------|------------------------------|
+| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 |
+| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 |
+| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 至 v1.26 |
+
+:::note
+- 在 Benchmark v1.24 及更高版本中,检查 id `4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)` 可能会失败,因为 `/etc/kubernetes/ssl/kube-ca.pem` 默认设置为 644。
+- 在 Benchmark v1.7 中,不再需要 `--protect-kernel-defaults` (`4.2.6`) 参数,并已被 CIS 删除。
+:::
+
+有关如何根据官方 CIS benchmark 评估加固后的 RKE 集群的更多细节,请参考对应 Kubernetes 和 CIS benchmark 版本的 RKE 自我评估指南。
+
+## 主机级别要求
+
+### 配置 Kernel 运行时参数
+
+建议对集群中的所有节点类型使用以下 `sysctl` 配置。在 `/etc/sysctl.d/90-kubelet.conf` 中设置以下参数:
+
+```ini
+vm.overcommit_memory=1
+vm.panic_on_oom=0
+kernel.panic=10
+kernel.panic_on_oops=1
+```
+
+运行 `sysctl -p /etc/sysctl.d/90-kubelet.conf` 以启用设置。
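下面是一个自包含的验证示意(假设在已应用设置的节点上运行;若 `sysctl` 不可用则打印 `n/a`),用于逐项对比期望值与当前值:

```shell
# 示意:将上文推荐的每个内核参数的期望值与当前生效值逐项对比。
# 假设:在运行 `sysctl -p` 之后的节点上执行;键值与上面的配置文件一致。
for kv in vm.overcommit_memory=1 vm.panic_on_oom=0 kernel.panic=10 kernel.panic_on_oops=1; do
  key=${kv%%=*}
  want=${kv#*=}
  have=$(sysctl -n "$key" 2>/dev/null || echo "n/a")
  echo "$key desired=$want current=$have"
done
```

如果某一行的 `current` 与 `desired` 不一致,请检查配置文件是否已正确加载。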
+
+### 配置 `etcd` 用户和组
+
+在安装 RKE 之前,需要设置 **etcd** 服务的用户帐户和组。
+
+#### 创建 `etcd` 用户和组
+
+要创建 **etcd** 用户和组,请运行以下控制台命令。
+下面的命令示例中使用 `52034` 作为 **uid** 和 **gid**。
+任何有效且未使用的 **uid** 或 **gid** 都可以代替 `52034`。
+
+```bash
+groupadd --gid 52034 etcd
+useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd --shell /usr/sbin/nologin
+```
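在创建账户之前,可以先确认所选的 uid/gid 尚未被占用。下面是一个示意脚本(假设系统提供 `getent`;`52034` 沿用上文的示例 id,可替换为任何空闲 id):

```shell
#!/bin/bash
# 示意:检查候选 uid/gid 是否空闲(假设:系统提供 getent;52034 仅为示例 id)。
uid=52034
gid=52034
if getent passwd "$uid" >/dev/null 2>&1; then
  echo "uid $uid 已被占用"
else
  echo "uid $uid 可用"
fi
if getent group "$gid" >/dev/null 2>&1; then
  echo "gid $gid 已被占用"
else
  echo "gid $gid 可用"
fi
```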
+
+在通过集群配置文件 `config.yml` 部署 RKE 时,请更新 `etcd` 用户的 `uid` 和 `gid`:
+
+```yaml
+services:
+ etcd:
+ gid: 52034
+ uid: 52034
+```
+
+## Kubernetes 运行时要求
+
+### 配置 `default` Service Account
+
+#### 将 `default` service account 的 `automountServiceAccountToken` 设置为 `false`
+
+Kubernetes 提供了一个 default service account;当未给 pod 分配特定的 service account 时,集群工作负载将使用该账户。
+如果需要从 pod 访问 Kubernetes API,则应为该 pod 创建特定的 service account,并向该 service account 授予权限。
+应配置 default service account,使其不提供 service account 令牌,并且不应具有任何明确的权限分配。
+
+对于标准 RKE 安装上的每个命名空间(包括 `default` 和 `kube-system`),`default` service account 必须包含以下值:
+
+```yaml
+automountServiceAccountToken: false
+```
+
+将以下配置保存到名为 `account_update.yaml` 的文件中。
+
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: default
+automountServiceAccountToken: false
+```
+
+创建一个名为 `account_update.sh` 的 bash 脚本文件。
+确保执行 `chmod +x account_update.sh` 命令,以赋予脚本执行权限。
+
+```bash
+#!/bin/bash -e
+
+for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
+ kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
+done
+```
+
+执行此脚本将 `account_update.yaml` 配置应用到所有命名空间中的 `default` service account。
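为了说明上面循环的遍历方式,下面用一个桩函数代替 kubectl(假设:实际使用时删除桩函数,由真实的 kubectl 返回命名空间列表):

```shell
#!/bin/bash
# 桩函数:模拟 kubectl 返回三个命名空间名(仅用于演示,实际环境中请删除)。
kubectl() { echo "default kube-system kube-public"; }

for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
  # 实际脚本在此处执行:kubectl patch serviceaccount default -n ${namespace} ...
  echo "将修补命名空间 ${namespace} 中的 default service account"
done
```

jsonpath 表达式会输出以空格分隔的命名空间名称,`for` 循环据此逐个处理。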
+
+### 配置网络策略
+
+#### 确保所有命名空间都定义了网络策略
+
+在同一个 Kubernetes 集群上运行不同的应用程序会带来风险,即某个受感染的应用程序可能会攻击相邻的应用程序。为确保容器只与其预期通信的容器进行通信,网络分段至关重要。网络策略规定了哪些 Pod 可以互相通信,以及与其他网络终端通信的方式。
+
+网络策略是命名空间范围的。当在特定命名空间引入网络策略时,所有未被策略允许的流量将被拒绝。然而,如果在命名空间中没有网络策略,那么所有流量将被允许进入和离开该命名空间中的 Pod。要强制执行网络策略,必须启用容器网络接口(container network interface, CNI)插件。本指南使用 [Canal](https://github.com/projectcalico/canal) 来提供策略执行。有关 CNI 提供程序的其他信息可以在[这里](https://www.suse.com/c/rancher_blog/comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/)找到。
+
+一旦在集群上启用了 CNI 提供程序,就可以应用默认的网络策略。下面提供了一个 **permissive** 的示例供参考。如果你希望允许匹配某个命名空间中所有 Pod 的所有入站和出站流量(即使添加了策略导致某些 Pod 被视为"隔离"),你可以创建一个明确允许该命名空间中所有流量的策略。请将以下配置保存为 `default-allow-all.yaml`。有关网络策略的其他[文档](https://kubernetes.io/docs/concepts/services-networking/network-policies/)可以在 Kubernetes 站点上找到。
+
+:::caution
+此网络策略只是一个示例,不建议用于生产用途。
+:::
+
+```yaml
+---
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: default-allow-all
+spec:
+ podSelector: {}
+ ingress:
+ - {}
+ egress:
+ - {}
+ policyTypes:
+ - Ingress
+ - Egress
+```
+
+创建一个名为 `apply_networkPolicy_to_all_ns.sh` 的 bash 脚本文件。
+
+确保运行 `chmod +x apply_networkPolicy_to_all_ns.sh` 命令,以赋予脚本执行权限。
+
+```bash
+#!/bin/bash -e
+
+for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
+ kubectl apply -f default-allow-all.yaml -n ${namespace}
+done
+```
+
+执行此脚本以将 `default-allow-all.yaml` 配置和 **permissive** 的 `NetworkPolicy` 应用于所有命名空间。
+
+## 已知限制
+
+- 当注册自定义节点仅提供公共 IP 时,Rancher **exec shell** 和 **查看 pod 日志** 在加固设置中**不起作用**。此功能需要在注册自定义节点时提供私有 IP。
+- 当根据 Rancher [提供](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/create-pod-security-policies.md)的 Pod 安全策略 (Pod Security Policies, PSP) 将 `default_pod_security_policy_template_id:` 设置为 `restricted` 或 `restricted-noroot` 时,Rancher 会在 `default` service accounts 上创建 `RoleBindings` 和 `ClusterRoleBindings`。CIS 检查 5.1.5 要求除了默认角色之外,`default` service accounts 不应绑定其他角色或集群角色。此外,`default` service accounts 应配置为不提供服务账户令牌,也不具有任何明确的权限分配。
+
+## 加固的 RKE `cluster.yml` 配置参考
+
+参考的 `cluster.yml` 文件是由 RKE CLI 使用的,它提供了实现 RKE 加固安装所需的配置。
+RKE [文档](https://rancher.com/docs/rke/latest/en/installation/)提供了有关配置项的更多详细信息。这里参考的 `cluster.yml` 不包括必需的 `nodes` 指令,因为它取决于你的环境。在 RKE 中有关节点配置的文档可以在[这里](https://rancher.com/docs/rke/latest/en/config-options/nodes/)找到。
+
+示例 `cluster.yml` 配置文件中包含了一个 Admission Configuration 策略,在 `services.kube-api.admission_configuration` 字段中指定。这个[示例](../../psa-restricted-exemptions.md)策略包含了命名空间的豁免规则,这对于在 Rancher 中正确运行导入的 RKE 集群非常必要,类似于 Rancher 预定义的 [`rancher-restricted`](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) 策略。
+
+如果你希望使用 RKE 的默认 `restricted` 策略,则将 `services.kube-api.admission_configuration` 字段留空,并将 `services.pod_security_configuration` 设置为 `restricted`。你可以在 [RKE 文档](https://rke.docs.rancher.com/config-options/services/pod-security-admission)中找到更多信息。
+
+
+
+
+:::note
+如果你打算将一个 RKE 集群导入到 Rancher 中,请参考此[文档](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md)以了解如何配置 PSA 以豁免 Rancher 系统命名空间。
+:::
+
+```yaml
+# 如果你打算在离线环境部署 Kubernetes,
+# 请查阅文档以了解如何配置自定义的 RKE 镜像。
+nodes: []
+kubernetes_version: # 定义 RKE 版本
+services:
+ etcd:
+ uid: 52034
+ gid: 52034
+ kube-api:
+ secrets_encryption_config:
+ enabled: true
+ audit_log:
+ enabled: true
+ event_rate_limit:
+ enabled: true
+ # 如果你在 `admission_configuration` 中设置了自定义策略,
+ # 请将 `pod_security_configuration` 字段留空。
+ # 否则,将其设置为 `restricted` 以使用 RKE 预定义的受限策略,
+ # 并删除 `admission_configuration` 字段中的所有内容。
+ #
+ # pod_security_configuration: restricted
+ #
+ admission_configuration:
+ apiVersion: apiserver.config.k8s.io/v1
+ kind: AdmissionConfiguration
+ plugins:
+ - name: PodSecurity
+ configuration:
+ apiVersion: pod-security.admission.config.k8s.io/v1
+ kind: PodSecurityConfiguration
+ defaults:
+ enforce: "restricted"
+ enforce-version: "latest"
+ audit: "restricted"
+ audit-version: "latest"
+ warn: "restricted"
+ warn-version: "latest"
+ exemptions:
+ usernames: []
+ runtimeClasses: []
+ namespaces: [calico-apiserver,
+ calico-system,
+ cattle-alerting,
+ cattle-csp-adapter-system,
+ cattle-elemental-system,
+ cattle-epinio-system,
+ cattle-externalip-system,
+ cattle-fleet-local-system,
+ cattle-fleet-system,
+ cattle-gatekeeper-system,
+ cattle-global-data,
+ cattle-global-nt,
+ cattle-impersonation-system,
+ cattle-istio,
+ cattle-istio-system,
+ cattle-logging,
+ cattle-logging-system,
+ cattle-monitoring-system,
+ cattle-neuvector-system,
+ cattle-prometheus,
+ cattle-provisioning-capi-system,
+ cattle-resources-system,
+ cattle-sriov-system,
+ cattle-system,
+ cattle-ui-plugin-system,
+ cattle-windows-gmsa-system,
+ cert-manager,
+ cis-operator-system,
+ fleet-default,
+ ingress-nginx,
+ istio-system,
+ kube-node-lease,
+ kube-public,
+ kube-system,
+ longhorn-system,
+ rancher-alerting-drivers,
+ security-scan,
+ tigera-operator]
+ kube-controller:
+ extra_args:
+ feature-gates: RotateKubeletServerCertificate=true
+ kubelet:
+ extra_args:
+ feature-gates: RotateKubeletServerCertificate=true
+ generate_serving_certificate: true
+addons: |
+ apiVersion: networking.k8s.io/v1
+ kind: NetworkPolicy
+ metadata:
+ name: default-allow-all
+ spec:
+ podSelector: {}
+ ingress:
+ - {}
+ egress:
+ - {}
+ policyTypes:
+ - Ingress
+ - Egress
+ ---
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: default
+ automountServiceAccountToken: false
+```
+
+
+
+
+```yaml
+# 如果你打算在离线环境部署 Kubernetes,
+# 请查阅文档以了解如何配置自定义的 RKE 镜像。
+nodes: []
+kubernetes_version: # 定义 RKE 版本
+services:
+ etcd:
+ uid: 52034
+ gid: 52034
+ kube-api:
+ secrets_encryption_config:
+ enabled: true
+ audit_log:
+ enabled: true
+ event_rate_limit:
+ enabled: true
+ pod_security_policy: true
+ kube-controller:
+ extra_args:
+ feature-gates: RotateKubeletServerCertificate=true
+ kubelet:
+ extra_args:
+ feature-gates: RotateKubeletServerCertificate=true
+ protect-kernel-defaults: true
+ generate_serving_certificate: true
+addons: |
+ # Upstream Kubernetes restricted PSP policy
+ # https://github.com/kubernetes/website/blob/564baf15c102412522e9c8fc6ef2b5ff5b6e766c/content/en/examples/policy/restricted-psp.yaml
+ apiVersion: policy/v1beta1
+ kind: PodSecurityPolicy
+ metadata:
+ name: restricted-noroot
+ spec:
+ privileged: false
+ # Required to prevent escalations to root.
+ allowPrivilegeEscalation: false
+ requiredDropCapabilities:
+ - ALL
+ # Allow core volume types.
+ volumes:
+ - 'configMap'
+ - 'emptyDir'
+ - 'projected'
+ - 'secret'
+ - 'downwardAPI'
+ # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use.
+ - 'csi'
+ - 'persistentVolumeClaim'
+ - 'ephemeral'
+ hostNetwork: false
+ hostIPC: false
+ hostPID: false
+ runAsUser:
+ # Require the container to run without root privileges.
+ rule: 'MustRunAsNonRoot'
+ seLinux:
+ # This policy assumes the nodes are using AppArmor rather than SELinux.
+ rule: 'RunAsAny'
+ supplementalGroups:
+ rule: 'MustRunAs'
+ ranges:
+ # Forbid adding the root group.
+ - min: 1
+ max: 65535
+ fsGroup:
+ rule: 'MustRunAs'
+ ranges:
+ # Forbid adding the root group.
+ - min: 1
+ max: 65535
+ readOnlyRootFilesystem: false
+ ---
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRole
+ metadata:
+ name: psp:restricted-noroot
+ rules:
+ - apiGroups:
+ - extensions
+ resourceNames:
+ - restricted-noroot
+ resources:
+ - podsecuritypolicies
+ verbs:
+ - use
+ ---
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+ name: psp:restricted-noroot
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: psp:restricted-noroot
+ subjects:
+ - apiGroup: rbac.authorization.k8s.io
+ kind: Group
+ name: system:serviceaccounts
+ - apiGroup: rbac.authorization.k8s.io
+ kind: Group
+ name: system:authenticated
+ ---
+ apiVersion: networking.k8s.io/v1
+ kind: NetworkPolicy
+ metadata:
+ name: default-allow-all
+ spec:
+ podSelector: {}
+ ingress:
+ - {}
+ egress:
+ - {}
+ policyTypes:
+ - Ingress
+ - Egress
+ ---
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: default
+ automountServiceAccountToken: false
+```
+
+
+
+
+## 加固后的 RKE 集群模板配置参考
+
+参考的 RKE 集群模板提供了实现 Kubernetes 加固安装所需的最低配置。RKE 模板用于提供 Kubernetes 并定义 Rancher 设置。有关安装 RKE 及其模板详情的其他信息,请参考 Rancher [文档](../../../../getting-started/installation-and-upgrade/installation-and-upgrade.md)。
+
+
+
+
+```yaml
+#
+# 集群配置
+#
+default_pod_security_admission_configuration_template_name: rancher-restricted
+enable_network_policy: true
+local_cluster_auth_endpoint:
+ enabled: true
+name: # 定义集群名称
+
+#
+# Rancher 配置
+#
+rancher_kubernetes_engine_config:
+ addon_job_timeout: 45
+ authentication:
+ strategy: x509|webhook
+ kubernetes_version: # 定义 RKE 版本
+ services:
+ etcd:
+ uid: 52034
+ gid: 52034
+ kube-api:
+ audit_log:
+ enabled: true
+ event_rate_limit:
+ enabled: true
+ pod_security_policy: false
+ secrets_encryption_config:
+ enabled: true
+ kube-controller:
+ extra_args:
+ feature-gates: RotateKubeletServerCertificate=true
+ tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+ kubelet:
+ extra_args:
+ feature-gates: RotateKubeletServerCertificate=true
+ tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+ generate_serving_certificate: true
+ scheduler:
+ extra_args:
+ tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+```
+
+
+
+
+```yaml
+#
+# 集群配置
+#
+default_pod_security_policy_template_id: restricted-noroot
+enable_network_policy: true
+local_cluster_auth_endpoint:
+ enabled: true
+name: # 定义集群名称
+
+#
+# Rancher 配置
+#
+rancher_kubernetes_engine_config:
+ addon_job_timeout: 45
+ authentication:
+ strategy: x509|webhook
+ kubernetes_version: # 定义 RKE 版本
+ services:
+ etcd:
+ uid: 52034
+ gid: 52034
+ kube-api:
+ audit_log:
+ enabled: true
+ event_rate_limit:
+ enabled: true
+ pod_security_policy: true
+ secrets_encryption_config:
+ enabled: true
+ kube-controller:
+ extra_args:
+ feature-gates: RotateKubeletServerCertificate=true
+ tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+ kubelet:
+ extra_args:
+ feature-gates: RotateKubeletServerCertificate=true
+ protect-kernel-defaults: true
+ tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+ generate_serving_certificate: true
+ scheduler:
+ extra_args:
+ tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+```
+
+
+
+
+## 结论
+
+如果你按照本指南操作,由 Rancher 提供的 RKE 自定义集群将配置为通过 CIS Kubernetes Benchmark 测试。你可以查看我们的 RKE 自我评估指南,了解我们是如何验证每个 benchmark 检查的,并且你可以在你的集群上执行相同的操作。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md
index 34bb63f89f0..7dfb1204916 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md
@@ -1,30 +1,34 @@
---
-title: RKE Self-Assessment Guide - CIS Benchmark v1.23 - K8s v1.23
+title: RKE 自我评估指南 - CIS Benchmark v1.23 - K8s v1.23
---
-This document is a companion to the [RKE Hardening Guide](../../../../pages-for-subheaders/rke1-hardening-guide.md), which provides prescriptive guidance on how to harden RKE clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
+
+
+
-This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
+本文档是 [RKE 加固指南](rke1-hardening-guide.md)的配套文档,该指南提供了关于如何加固正在生产环境中运行并由 Rancher 管理的 RKE 集群的指导方针。本 benchmark 指南可帮助你根据 CIS Kubernetes Benchmark 中的每个 control 来评估加固集群的安全性。
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
+本指南对应以下版本的 Rancher、CIS Benchmarks 和 Kubernetes:
+
+| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 |
|-----------------|-----------------------|--------------------|
| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 |
-This guide walks through the various controls and provide updated example commands to audit compliance in Rancher created clusters. Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks will return a result of `Not Applicable`.
+本指南将介绍各种 controls,并提供更新的示例命令来审计 Rancher 创建的集群中的合规性。由于 Rancher 和 RKE 将 Kubernetes 服务安装为 Docker 容器,因此 CIS Kubernetes Benchmark 中的许多 control 验证检查不适用。这些检查将返回 `Not Applicable` 的结果。
-This document is for Rancher operators, security teams, auditors and decision makers.
+本文档适用于 Rancher 运维人员、安全团队、审计员和决策者。
-For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.23. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
+有关每个 control 的更多信息,包括详细描述和未通过测试的补救措施,请参考 CIS Kubernetes Benchmark v1.23 的相应部分。你可以在[互联网安全中心 (CIS)](https://www.cisecurity.org/benchmark/kubernetes/)创建免费账户后下载 benchmark。
-## Testing Methodology
+## 测试方法
-Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files.
+Rancher 和 RKE 通过 Docker 容器安装 Kubernetes 服务。配置是通过初始化时传递给容器的参数定义的,而不是通过配置文件。
-Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in the testing and evaluation of test results.
+在 control 审计与原始 CIS benchmark 不同时,提供了针对 Rancher 的特定审计命令以进行测试。在执行测试时,你将需要访问所有 RKE 节点主机上的命令行。这些命令还使用了 [kubectl](https://kubernetes.io/docs/tasks/tools/)(带有有效的配置文件)和 [jq](https://stedolan.github.io/jq/) 工具,在测试和评估测试结果时这些工具是必需的。
:::note
-This guide only covers `automated` (previously called `scored`) tests.
+本指南仅涵盖 `automated`(之前称为 `scored`)测试。
:::
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.24.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.24.md
deleted file mode 100644
index 25108b848b5..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.24.md
+++ /dev/null
@@ -1,3084 +0,0 @@
----
-title: RKE Self-Assessment Guide - CIS Benchmark v1.23 - K8s v1.24
----
-
-This document is a companion to the [RKE Hardening Guide](../../../../pages-for-subheaders/rke1-hardening-guide.md), which provides prescriptive guidance on how to harden RKE clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
-
-This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
-
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
-|-----------------|-----------------------|--------------------|
-| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.24 |
-
-This guide walks through the various controls and provide updated example commands to audit compliance in Rancher created clusters. Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks will return a result of `Not Applicable`.
-
-This document is for Rancher operators, security teams, auditors and decision makers.
-
-For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.23. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
-
-## Testing Methodology
-
-Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files.
-
-Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in the testing and evaluation of test results.
-
-:::note
-
-This guide only covers `automated` (previously called `scored`) tests.
-
-:::
-
-### Controls
-
-## 1.1 Control Plane Node Configuration Files
-### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644
-
-**Audit:**
-
-```bash
-ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644 permissions=600
-```
-
-### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root
-
-**Audit:**
-
-```bash
-ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root root:root
-```
-
-### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above). For example,
-chmod 700 /var/lib/etcd
-
-**Audit:**
-
-```bash
-stat -c %a /node/var/lib/etcd
-```
-
-**Expected Result**:
-
-```console
-'700' is equal to '700'
-```
-
-**Returned Value**:
-
-```console
-700
-```
-
-### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above).
-For example, chown etcd:etcd /var/lib/etcd
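The two remediation steps above can be combined into one short sketch. It is non-destructive here (it only prints the command it would run), and `/var/lib/etcd` is assumed as the common default when etcd is not running:

```bash
# Step 1: read the etcd data directory from the running process's --data-dir.
data_dir=$(ps -ef | grep '[e]tcd' \
  | sed -n 's/.*--data-dir[= ]\([^ ]*\).*/\1/p' | head -n 1 || true)
data_dir=${data_dir:-/var/lib/etcd}   # assumed default when etcd is not found

# Step 2: show the ownership fix (plus the 700 mode from check 1.1.11).
cmd="chown etcd:etcd $data_dir && chmod 700 $data_dir"
echo "would run: $cmd"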
-
-### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes.
-
-### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes.
-
-### 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for the scheduler.
-All configuration is passed as arguments at container runtime.
-
-### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for the scheduler.
-All configuration is passed as arguments at container runtime.
-
-### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for the controller-manager.
-All configuration is passed as arguments at container runtime.
-
-### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for the controller-manager.
-All configuration is passed as arguments at container runtime.
-
-### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the master node.
-For example,
-chown -R root:root /etc/kubernetes/pki/
-
-**Audit Script:** `check_files_owner_in_dir.sh`
-
-```bash
-#!/usr/bin/env bash
-
-# This script is used to ensure the owner is set to root:root for
-# the given directory and all the files in it
-#
-# inputs:
-# $1 = /full/path/to/directory
-#
-# outputs:
-# true/false
-
-INPUT_DIR=$1
-
-if [[ "${INPUT_DIR}" == "" ]]; then
- echo "false"
- exit
-fi
-
-if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then
- echo "false"
- exit
-fi
-
-statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*)
-while read -r statInfoLine; do
- f=$(echo ${statInfoLine} | cut -d' ' -f1)
- p=$(echo ${statInfoLine} | cut -d' ' -f2)
-
- if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then
- if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then
- echo "false"
- exit
- fi
- else
- if [[ "$p" != "root:root" ]]; then
- echo "false"
- exit
- fi
- fi
-done <<< "${statInfoLines}"
-
-
-echo "true"
-exit
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-true
-```
-
-### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the master node.
-For example,
-chmod -R 644 /etc/kubernetes/pki/*.crt
-
-**Audit Script:** `check_files_permissions.sh`
-
-```bash
-#!/usr/bin/env bash
-
-# This script is used to ensure the file permissions are set to 644 or
-# more restrictive for all files in a given directory or a wildcard
-# selection of files
-#
-# inputs:
-# $1 = /full/path/to/directory or /path/to/fileswithpattern
-# ex: !(*key).pem
-#
-# $2 (optional) = permission (ex: 600)
-#
-# outputs:
-# true/false
-
-# Turn on "extended glob" for use of '!' in wildcard
-shopt -s extglob
-
-# Turn off history to avoid surprises when using '!'
-set +H
-
-USER_INPUT=$1
-
-if [[ "${USER_INPUT}" == "" ]]; then
- echo "false"
- exit
-fi
-
-
-if [[ -d ${USER_INPUT} ]]; then
- PATTERN="${USER_INPUT}/*"
-else
- PATTERN="${USER_INPUT}"
-fi
-
-PERMISSION=""
-if [[ "$2" != "" ]]; then
- PERMISSION=$2
-fi
-
-FILES_PERMISSIONS=$(stat -c %n\ %a ${PATTERN})
-
-while read -r fileInfo; do
- p=$(echo ${fileInfo} | cut -d' ' -f2)
-
- if [[ "${PERMISSION}" != "" ]]; then
- if [[ "$p" != "${PERMISSION}" ]]; then
- echo "false"
- exit
- fi
- else
- if [[ "$p" != "644" && "$p" != "640" && "$p" != "600" ]]; then
- echo "false"
- exit
- fi
- fi
-done <<< "${FILES_PERMISSIONS}"
-
-
-echo "true"
-exit
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_files_permissions.sh '/node/etc/kubernetes/ssl/!(*key).pem'
-```
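The quoted `!(*key).pem` argument only works because the script enables bash's extended globbing (`shopt -s extglob`). A small self-contained demonstration, using hypothetical file names, of what the pattern selects:

```bash
shopt -s extglob                      # same option the audit script sets

# Hypothetical PKI directory containing one certificate and one private key.
dir=$(mktemp -d)
touch "$dir/kube-apiserver.pem" "$dir/kube-apiserver-key.pem"

# !(*key).pem expands to every .pem file whose name does not end in "key",
# i.e. the certificates but not the keys.
matched=$(cd "$dir" && printf '%s\n' !(*key).pem)
echo "$matched"                       # prints only kube-apiserver.pem
rm -rf "$dir"
```

This is why the pattern is passed in single quotes on the command line: it must reach the script unexpanded so the script's own `extglob` setting can interpret it.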
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-true
-```
-
-### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 600 /etc/kubernetes/ssl/*key.pem
-
-**Audit Script:** `check_files_permissions.sh`
-
-```bash
-#!/usr/bin/env bash
-
-# This script is used to ensure the file permissions are set to 644 or
-# more restrictive for all files in a given directory or a wildcard
-# selection of files
-#
-# inputs:
-# $1 = /full/path/to/directory or /path/to/fileswithpattern
-# ex: !(*key).pem
-#
-# $2 (optional) = permission (ex: 600)
-#
-# outputs:
-# true/false
-
-# Turn on "extended glob" for use of '!' in wildcard
-shopt -s extglob
-
-# Turn off history to avoid surprises when using '!'
-set +H
-
-USER_INPUT=$1
-
-if [[ "${USER_INPUT}" == "" ]]; then
- echo "false"
- exit
-fi
-
-
-if [[ -d ${USER_INPUT} ]]; then
- PATTERN="${USER_INPUT}/*"
-else
- PATTERN="${USER_INPUT}"
-fi
-
-PERMISSION=""
-if [[ "$2" != "" ]]; then
- PERMISSION=$2
-fi
-
-FILES_PERMISSIONS=$(stat -c %n\ %a ${PATTERN})
-
-while read -r fileInfo; do
- p=$(echo ${fileInfo} | cut -d' ' -f2)
-
- if [[ "${PERMISSION}" != "" ]]; then
- if [[ "$p" != "${PERMISSION}" ]]; then
- echo "false"
- exit
- fi
- else
- if [[ "$p" != "644" && "$p" != "640" && "$p" != "600" ]]; then
- echo "false"
- exit
- fi
- fi
-done <<< "${FILES_PERMISSIONS}"
-
-
-echo "true"
-exit
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_files_permissions.sh '/node/etc/kubernetes/ssl/*key.pem'
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-true
-```
-
-## 1.2 API Server
-### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---anonymous-auth=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and configure alternate mechanisms for authentication. Then,
-edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the `--token-auth-file=<filename>` parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--token-auth-file' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the `DenyServiceExternalIPs`
-from enabled admission plugins.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the --kubelet-https parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-https' is present OR '--kubelet-https' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the
-apiserver and kubelets. Then, edit API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
-kubelet client certificate and key parameters as below.
-`--kubelet-client-certificate=<path/to/client-certificate-file>`
-`--kubelet-client-key=<path/to/client-key-file>`
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between
-the apiserver and kubelets. Then, edit the API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
-`--kubelet-certificate-authority=<path/to/ca-cert-file>`
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-certificate-authority' is present
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
-One such example could be as below.
---authorization-mode=RBAC
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes Node.
---authorization-mode=Node,RBAC
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'Node'
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
-for example `--authorization-mode=Node,RBAC`.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'RBAC'
-```
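-
-One way to inspect a single flag by hand (a convenience sketch, not part of the generated report) is to split the audited command line into one token per line and filter for the flag; against the command line reported for this check it yields the `Node,RBAC` value:
-
-```console
-$ ps -ef | grep kube-apiserver | grep -v grep | tr ' ' '\n' | grep -- '^--authorization-mode'
---authorization-mode=Node,RBAC
-```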
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set the desired limits in a configuration file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-and set the below parameters.
---enable-admission-plugins=...,EventRateLimit,...
---admission-control-config-file=
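-
-The file referenced by `--admission-control-config-file` follows the Kubernetes `AdmissionConfiguration` format; a minimal sketch is shown below (the plugin configuration path `eventconfig.yaml` is a placeholder for illustration, not a value taken from this report):
-
-```yaml
-apiVersion: apiserver.config.k8s.io/v1
-kind: AdmissionConfiguration
-plugins:
-  - name: EventRateLimit
-    path: eventconfig.yaml
-```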
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'EventRateLimit'
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
-value that does not include AlwaysAdmit.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
-```
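-
-Verifying that a plugin is absent follows the same pattern as the presence checks (a sketch; a count of `0` means `AlwaysAdmit` does not appear in the enabled-plugins list):
-
-```console
-$ ps -ef | grep kube-apiserver | grep -v grep | tr ' ' '\n' | grep -- '^--enable-admission-plugins' | grep -c 'AlwaysAdmit'
-0
-```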
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-AlwaysPullImages.
---enable-admission-plugins=...,AlwaysPullImages,...
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-SecurityContextDeny, unless PodSecurityPolicy is already in place.
---enable-admission-plugins=...,SecurityContextDeny,...
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create ServiceAccount objects as per your environment.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
-value that does not include ServiceAccount.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --disable-admission-plugins parameter to
-ensure it does not include NamespaceLifecycle.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.16 Ensure that the admission control plugin NodeRestriction is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to a
-value that includes NodeRestriction.
---enable-admission-plugins=...,NodeRestriction,...
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'NodeRestriction'
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.17 Ensure that the --secure-port argument is not set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and either remove the --secure-port parameter or
-set it to a different (non-zero) desired port.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--secure-port' is greater than 0 OR '--secure-port' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.18 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.19 Ensure that the --audit-log-path argument is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-path parameter to a suitable path and
-file where you would like audit logs to be written, for example,
---audit-log-path=/var/log/apiserver/audit.log
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-path' is present
-```
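-
-As with the other checks, the audited value can be isolated from the process list (a sketch; the path shown is the one reported in this check's returned value):
-
-```console
-$ ps -ef | grep kube-apiserver | grep -v grep | tr ' ' '\n' | grep -- '^--audit-log-path'
---audit-log-path=/var/log/kube-audit/audit-log.json
-```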
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.20 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxage parameter to 30
-or as an appropriate number of days, for example,
---audit-log-maxage=30
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxage' is greater or equal to 30
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.21 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
-value. For example,
---audit-log-maxbackup=10
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxbackup' is greater or equal to 10
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.22 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
-For example, to set it as 100 MB, --audit-log-maxsize=100
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxsize' is greater or equal to 100
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.24 Ensure that the --service-account-lookup argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---service-account-lookup=true
-Alternatively, you can delete the --service-account-lookup parameter from this file so
-that the default takes effect.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.25 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --service-account-key-file parameter
-to the public key file for service accounts. For example,
---service-account-key-file=&lt;filename&gt;
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.26 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate and key file parameters.
---etcd-certfile=&lt;path/to/client-certificate-file&gt;
---etcd-keyfile=&lt;path/to/client-key-file&gt;
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-certfile' is present AND '--etcd-keyfile' is present
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.27 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the TLS certificate and private key file parameters.
---tls-cert-file=&lt;path/to/tls-certificate-file&gt;
---tls-private-key-file=&lt;path/to/tls-key-file&gt;
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.28 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the client certificate authority file.
---client-ca-file=&lt;path/to/client-ca-file&gt;
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.29 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate authority file parameter.
---etcd-cafile=&lt;path/to/ca-file&gt;
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-cafile' is present
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.30 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --encryption-provider-config parameter to the path of that file.
-For example, --encryption-provider-config=&lt;path/to/EncryptionConfig/file&gt;
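-
-The EncryptionConfig file referenced above can be sketched as follows. This is an illustrative minimal example, not the file audited in this check; the key name and secret value are placeholders:
-
-```yaml
-apiVersion: apiserver.config.k8s.io/v1
-kind: EncryptionConfiguration
-resources:
-  - resources:
-      - secrets
-    providers:
-      # aescbc encrypts newly written Secrets with the first listed key
-      - aescbc:
-          keys:
-            - name: key1
-              secret: <base64-encoded-32-byte-key>  # e.g. head -c 32 /dev/urandom | base64
-      # identity allows reading Secrets stored before encryption was enabled
-      - identity: {}
-```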
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--encryption-provider-config' is present
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 1.2.31 Ensure that encryption providers are appropriately configured (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-In this file, choose aescbc, kms or secretbox as the encryption provider.
-
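For reference, the file this check targets might look like the following minimal sketch, which writes a throwaway EncryptionConfiguration using the aescbc provider and then greps for it the same way the audit script does. The key name and base64 secret here are placeholders, not values from this cluster:

```bash
#!/usr/bin/env bash
# Sketch only: write a minimal EncryptionConfiguration that uses the aescbc
# provider, then look for the provider the same way the audit script does.
# The key name and secret are placeholders, not values from a real cluster.
ENCRYPTION_CONFIG_FILE="$(mktemp)"

cat > "${ENCRYPTION_CONFIG_FILE}" <<'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: cGxhY2Vob2xkZXItMzItYnl0ZS1iYXNlNjQta2V5ISE=
      - identity: {}
EOF

# Mirror the audit script: report true if the provider name appears.
if grep -q "aescbc" "${ENCRYPTION_CONFIG_FILE}"; then
  RESULT=true
else
  RESULT=false
fi
echo "${RESULT}"
rm -f "${ENCRYPTION_CONFIG_FILE}"
```

On a hardened RKE node the real file sits at the path passed via `--encryption-provider-config`, which is what the audit script inspects.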
-**Audit Script:** `check_encryption_provider_config.sh`
-
-```bash
-#!/usr/bin/env bash
-
-# This script is used to check that the encryption provider config is set to aescbc
-#
-# outputs:
-# true/false
-
-# TODO: Figure out the file location from the kube-apiserver commandline args
-ENCRYPTION_CONFIG_FILE="/node/etc/kubernetes/ssl/encryption.yaml"
-
-if [[ ! -f "${ENCRYPTION_CONFIG_FILE}" ]]; then
- echo "false"
- exit
-fi
-
-for provider in "$@"
-do
- if grep "$provider" "${ENCRYPTION_CONFIG_FILE}"; then
- echo "true"
- exit
- fi
-done
-
-echo "false"
-exit
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_encryption_provider_config.sh aescbc
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-- aescbc: true
-```
-
-### 1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
-TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
-TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
-TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
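The "contains valid elements from" logic above can be sketched in shell: extract the `--tls-cipher-suites` value from the process line and confirm every configured suite appears in an allowed list. The command line and allowed list below are shortened stand-ins, not output from this cluster:

```bash
#!/usr/bin/env bash
# Sketch: emulate the benchmark's "contains valid elements from" check.
# CMDLINE and ALLOWED are shortened stand-ins, not real cluster data.
ALLOWED="TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
CMDLINE="kube-apiserver --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --profiling=false"

# Pull the flag value out of the command line.
suites="$(grep -o -- '--tls-cipher-suites=[^ ]*' <<< "${CMDLINE}" | cut -d= -f2)"

RESULT=true
IFS=',' read -ra configured <<< "${suites}"
for suite in "${configured[@]}"; do
  case ",${ALLOWED}," in
    *",${suite},"*) ;;      # suite is in the allowed list
    *) RESULT=false ;;      # any suite outside the list fails the check
  esac
done
echo "${RESULT}"
```

The same extraction should work unchanged on the full `ps -ef` line returned by the audit.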
-
-## 1.3 Controller Manager
-### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
-for example, --terminated-pod-gc-threshold=10
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--terminated-pod-gc-threshold' is present
-```
-
-**Returned Value**:
-
-```console
-root 5056 5035 2 18:52 ? 00:00:05 kube-controller-manager --cloud-provider= --terminated-pod-gc-threshold=1000 --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --node-monitor-grace-period=40s --allow-untagged-cloud=true --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --configure-cloud-routes=false --leader-elect=true --profiling=false --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --pod-eviction-timeout=5m0s --v=2 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --enable-hostpath-provisioner=false --use-service-account-credentials=true
-```
-
-### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 5056 5035 2 18:52 ? 00:00:05 kube-controller-manager --cloud-provider= --terminated-pod-gc-threshold=1000 --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --node-monitor-grace-period=40s --allow-untagged-cloud=true --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --configure-cloud-routes=false --leader-elect=true --profiling=false --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --pod-eviction-timeout=5m0s --v=2 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --enable-hostpath-provisioner=false --use-service-account-credentials=true
-```
-
-### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node to set the below parameter.
---use-service-account-credentials=true
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--use-service-account-credentials' is not equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 5056 5035 2 18:52 ? 00:00:05 kube-controller-manager --cloud-provider= --terminated-pod-gc-threshold=1000 --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --node-monitor-grace-period=40s --allow-untagged-cloud=true --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --configure-cloud-routes=false --leader-elect=true --profiling=false --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --pod-eviction-timeout=5m0s --v=2 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --enable-hostpath-provisioner=false --use-service-account-credentials=true
-```
-
-### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --service-account-private-key-file parameter
-to the private key file for service accounts.
---service-account-private-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 5056 5035 2 18:52 ? 00:00:05 kube-controller-manager --cloud-provider= --terminated-pod-gc-threshold=1000 --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --node-monitor-grace-period=40s --allow-untagged-cloud=true --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --configure-cloud-routes=false --leader-elect=true --profiling=false --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --pod-eviction-timeout=5m0s --v=2 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --enable-hostpath-provisioner=false --use-service-account-credentials=true
-```
-
-### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
---root-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--root-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 5056 5035 2 18:52 ? 00:00:05 kube-controller-manager --cloud-provider= --terminated-pod-gc-threshold=1000 --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --node-monitor-grace-period=40s --allow-untagged-cloud=true --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --configure-cloud-routes=false --leader-elect=true --profiling=false --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --pod-eviction-timeout=5m0s --v=2 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --enable-hostpath-provisioner=false --use-service-account-credentials=true
-```
-
-### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
---feature-gates=RotateKubeletServerCertificate=true
-
-Clusters provisioned by RKE handle certificate rotation directly through RKE.
-
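Were this check scored, verifying a feature gate would reduce to parsing the `--feature-gates` value and looking for the exact `RotateKubeletServerCertificate=true` entry. The command line below is a shortened stand-in, not output from this cluster:

```bash
#!/usr/bin/env bash
# Sketch: how the feature-gate form of this check could be evaluated.
# CMDLINE is a shortened stand-in for the controller manager process line.
CMDLINE="kube-controller-manager --feature-gates=RotateKubeletServerCertificate=true --profiling=false"

# Keep everything after the first '=' so gate=value pairs stay intact.
gates="$(grep -o -- '--feature-gates=[^ ]*' <<< "${CMDLINE}" | cut -d= -f2-)"

case ",${gates}," in
  *",RotateKubeletServerCertificate=true,"*) RESULT=true ;;
  *) RESULT=false ;;
esac
echo "${RESULT}"
```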
-### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is present OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5056 5035 2 18:52 ? 00:00:05 kube-controller-manager --cloud-provider= --terminated-pod-gc-threshold=1000 --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --node-monitor-grace-period=40s --allow-untagged-cloud=true --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --configure-cloud-routes=false --leader-elect=true --profiling=false --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --pod-eviction-timeout=5m0s --v=2 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --enable-hostpath-provisioner=false --use-service-account-credentials=true
-```
-
-## 1.4 Scheduler
-### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-scheduler | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 5209 5190 0 18:53 ? 00:00:01 kube-scheduler --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml
-```
-
-### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-scheduler | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is present OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5209 5190 0 18:53 ? 00:00:01 kube-scheduler --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml
-```
-
-## 2 Etcd Node Configuration
-### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the etcd service documentation and configure TLS encryption.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
-on the master node and set the below parameters.
---cert-file=
---key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--cert-file' is present AND '--key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4720 4700 3 18:52 ? 00:00:08 /usr/local/bin/etcd --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105-key.pem --advertise-client-urls=https://172.31.26.105:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --peer-client-cert-auth=true --initial-cluster-token=etcd-cluster-1 --listen-peer-urls=https://172.31.26.105:2380 --initial-cluster=etcd-ip-172-31-26-105=https://172.31.26.105:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105-key.pem --heartbeat-interval=500 --data-dir=/var/lib/rancher/etcd/ --listen-client-urls=https://172.31.26.105:2379 --name=etcd-ip-172-31-26-105 --initial-cluster-state=new --client-cert-auth=true --election-timeout=5000 --initial-advertise-peer-urls=https://172.31.26.105:2380 root 4882 4861 15 18:52 ? 
00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User --requestheader-group-headers=X-Remote-Group 
--authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false root 18033 17938 2 18:57 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.23-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and set the below parameter.
---client-cert-auth="true"
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--client-cert-auth' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-root 4720 4700 3 18:52 ? 00:00:08 /usr/local/bin/etcd --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105-key.pem --advertise-client-urls=https://172.31.26.105:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --peer-client-cert-auth=true --initial-cluster-token=etcd-cluster-1 --listen-peer-urls=https://172.31.26.105:2380 --initial-cluster=etcd-ip-172-31-26-105=https://172.31.26.105:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105-key.pem --heartbeat-interval=500 --data-dir=/var/lib/rancher/etcd/ --listen-client-urls=https://172.31.26.105:2379 --name=etcd-ip-172-31-26-105 --initial-cluster-state=new --client-cert-auth=true --election-timeout=5000 --initial-advertise-peer-urls=https://172.31.26.105:2380 root 4882 4861 15 18:52 ? 
00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User --requestheader-group-headers=X-Remote-Group 
--authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false root 18033 17938 4 18:57 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.23-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and either remove the --auto-tls parameter or set it to false.
---auto-tls=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-26-105 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-26-105.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-26-105-key.pem ETCDCTL_ENDPOINTS=https://172.31.26.105:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/root
-```
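Unlike the flag checks above, this one inspects the etcd process environment. One way to do that by hand is to read the NUL-separated `/proc/<pid>/environ` of the etcd process; the sketch below uses a temporary stand-in file built from a few of the variables shown above rather than a live process:

```bash
#!/usr/bin/env bash
# Sketch: the 2.3 check passes when ETCD_AUTO_TLS is not set to true in the
# etcd process environment. The stand-in file below imitates the
# NUL-separated format of /proc/<pid>/environ; it is not a live process.
ENVIRON_FILE="$(mktemp)"
printf 'ETCDCTL_API=3\0ETCD_UNSUPPORTED_ARCH=x86_64\0HOME=/root\0' > "${ENVIRON_FILE}"

# /proc/<pid>/environ is NUL-separated; translate to newlines to grep it.
if tr '\0' '\n' < "${ENVIRON_FILE}" | grep -q '^ETCD_AUTO_TLS=true$'; then
  RESULT=fail
else
  RESULT=pass
fi
echo "${RESULT}"
rm -f "${ENVIRON_FILE}"
```

Against a real node, `/proc/$(pgrep -f 'bin/etcd' | head -1)/environ` would replace the stand-in file.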
-
-### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the etcd service documentation and configure peer TLS encryption as appropriate
-for your etcd cluster.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
-master node and set the below parameters.
---peer-cert-file=
---peer-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--peer-cert-file' is present AND '--peer-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4720 4700 3 18:52 ? 00:00:08 /usr/local/bin/etcd --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105-key.pem --advertise-client-urls=https://172.31.26.105:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --peer-client-cert-auth=true --initial-cluster-token=etcd-cluster-1 --listen-peer-urls=https://172.31.26.105:2380 --initial-cluster=etcd-ip-172-31-26-105=https://172.31.26.105:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105-key.pem --heartbeat-interval=500 --data-dir=/var/lib/rancher/etcd/ --listen-client-urls=https://172.31.26.105:2379 --name=etcd-ip-172-31-26-105 --initial-cluster-state=new --client-cert-auth=true --election-timeout=5000 --initial-advertise-peer-urls=https://172.31.26.105:2380 root 4882 4861 15 18:52 ? 
00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User --requestheader-group-headers=X-Remote-Group 
--authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false root 18033 17938 3 18:57 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.23-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and set the below parameter.
---peer-client-cert-auth=true
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--peer-client-cert-auth' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-root 4720 4700 3 18:52 ? 00:00:08 /usr/local/bin/etcd --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105-key.pem --advertise-client-urls=https://172.31.26.105:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --peer-client-cert-auth=true --initial-cluster-token=etcd-cluster-1 --listen-peer-urls=https://172.31.26.105:2380 --initial-cluster=etcd-ip-172-31-26-105=https://172.31.26.105:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105-key.pem --heartbeat-interval=500 --data-dir=/var/lib/rancher/etcd/ --listen-client-urls=https://172.31.26.105:2379 --name=etcd-ip-172-31-26-105 --initial-cluster-state=new --client-cert-auth=true --election-timeout=5000 --initial-advertise-peer-urls=https://172.31.26.105:2380 root 4882 4861 15 18:52 ? 
00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User --requestheader-group-headers=X-Remote-Group 
--authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false root 18033 17938 2 18:57 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.23-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and either remove the --peer-auto-tls parameter or set it to false.
---peer-auto-tls=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-26-105 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-26-105.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-26-105-key.pem ETCDCTL_ENDPOINTS=https://172.31.26.105:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/root
-```
-
-### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-[Manual test]
-Follow the etcd documentation and create a dedicated certificate authority setup for the
-etcd service.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
-master node and set the below parameter.
---trusted-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--trusted-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4720 4700 3 18:52 ? 00:00:08 /usr/local/bin/etcd --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105-key.pem --advertise-client-urls=https://172.31.26.105:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --peer-client-cert-auth=true --initial-cluster-token=etcd-cluster-1 --listen-peer-urls=https://172.31.26.105:2380 --initial-cluster=etcd-ip-172-31-26-105=https://172.31.26.105:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-26-105-key.pem --heartbeat-interval=500 --data-dir=/var/lib/rancher/etcd/ --listen-client-urls=https://172.31.26.105:2379 --name=etcd-ip-172-31-26-105 --initial-cluster-state=new --client-cert-auth=true --election-timeout=5000 --initial-advertise-peer-urls=https://172.31.26.105:2380 root 4882 4861 15 18:52 ? 
00:00:42 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User --requestheader-group-headers=X-Remote-Group 
--authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false root 18033 17938 2 18:57 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.23-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-## 3.1 Authentication and Authorization
-### 3.1.1 Client certificate authentication should not be used for users (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
-implemented in place of client certificates.
-
-## 3.2 Logging
-### 3.2.1 Ensure that a minimal audit policy is created (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Create an audit policy file for your cluster.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-policy-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4882 4861 15 18:52 ? 00:00:43 kube-apiserver --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --api-audiences=unknown --audit-log-maxsize=100 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-servers=https://172.31.26.105:2379 --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --anonymous-auth=false --secure-port=6443 --runtime-config=policy/v1beta1/podsecuritypolicy=true --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-node-port-range=30000-32767 --service-account-issuer=rke --service-account-lookup=true --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --authentication-token-webhook-cache-ttl=5s --advertise-address=172.31.26.105 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-log-maxbackup=10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --requestheader-username-headers=X-Remote-User 
--requestheader-group-headers=X-Remote-Group --authorization-mode=Node,RBAC --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --cloud-provider= --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --bind-address=0.0.0.0 --audit-log-maxage=30 --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allow-privileged=true --profiling=false
-```
-
-### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the audit policy provided for the cluster and ensure that it covers
-at least the following areas,
-- Access to Secrets managed by the cluster. Care should be taken to only
- log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
- order to avoid risk of logging sensitive data.
-- Modification of Pod and Deployment objects.
-- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
- For most requests, minimally logging at the Metadata level is recommended
- (the most basic level of logging).
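As a hedged illustration of the areas listed above, a minimal audit-policy sketch might look like the following. This is an assumption-laden example, not the policy RKE ships: rule order and coverage are illustrative only.

```yaml
# Hypothetical minimal audit policy covering the three areas above.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log Secrets, ConfigMaps, and TokenReviews at Metadata only,
  # so request/response bodies with sensitive data are never recorded.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
      - group: "authentication.k8s.io"
        resources: ["tokenreviews"]
  # Record use of exec, port-forward, and proxy subresources.
  - level: Metadata
    resources:
      - group: ""
        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
  # Baseline: everything else at Metadata, the most basic level.
  - level: Metadata
```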
-
-## 4.1 Worker Node Configuration Files
-### 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service.
-All configuration is passed in as arguments at container run time.
-
-### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service.
-All configuration is passed in as arguments at container run time.
-
-### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive OR '/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml' is not present
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
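The "644 or more restrictive" comparison that checks like 4.1.3 rely on can be sketched as a bitwise test: every permission bit set on the file must also be set in the 644 ceiling. A minimal shell sketch, with a hypothetical helper name:

```shell
# check_max_mode ACTUAL MAX: succeed when the octal mode ACTUAL is at
# least as restrictive as MAX (no bit set in ACTUAL outside MAX).
check_max_mode() {
  actual=$1 max=$2
  # AND the actual mode with the complement of the ceiling; any
  # remaining bit means the file is more permissive than allowed.
  [ $(( 0$actual & ~0$max & 0777 )) -eq 0 ]
}

check_max_mode 600 644 && echo "600 ok"                 # prints: 600 ok
check_max_mode 664 644 || echo "664 too permissive"     # prints: 664 too permissive
```

This is why `permissions=600` satisfies an "expected 644 or more restrictive" result: 600 clears bits that 644 allows, never the reverse.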
-
-### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command (based on the file location on your system) on each worker node.
-For example, chown root:root /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present OR '/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml' is not present
-```
-
-### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /etc/kubernetes/ssl/kubecfg-kube-node.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
-
-### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /etc/kubernetes/ssl/kubecfg-kube-node.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
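The ownership audits in 4.1.4 and 4.1.6 follow the same pattern: report the file's owner and group, or pass vacuously when the file is absent. A small shell helper (the name is hypothetical) mirroring that logic:

```shell
# check_owner FILE: print owner:group of FILE, or "not present" when
# the file is absent (the benchmark treats a missing file as passing).
# Uses GNU stat's -c format, as in the audits above.
check_owner() {
  if [ -e "$1" ]; then
    stat -c '%U:%G' "$1"
  else
    echo "not present"
  fi
}

check_owner /etc/kubernetes/ssl/kubecfg-kube-node.yaml
check_owner /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
```

On a hardened RKE node both calls should print `root:root`; elsewhere they report the files as not present.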
-
-### 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the file permissions of the --client-ca-file:
-chmod 644
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
-
-### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the ownership of the --client-ca-file.
-chown root:root
-
-**Audit:**
-
-```bash
-stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chmod 644 /var/lib/kubelet/config.yaml
-
-Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet.
-All configuration is passed in as arguments at container run time.
-
-### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chown root:root /var/lib/kubelet/config.yaml
-
-Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet.
-All configuration is passed in as arguments at container run time.
-
-## 4.2 Kubelet
-### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
-`false`.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
-`--anonymous-auth=false`
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 5796 5377 2 18:53 ? 00:00:06 kubelet --anonymous-auth=false --cloud-provider= --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --root-dir=/var/lib/kubelet --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --fail-swap-on=false --hostname-override=ip-172-31-26-105 --authentication-token-webhook=true --cgroups-per-qos=True --cluster-dns=10.43.0.10 --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --address=0.0.0.0 --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105.pem --streaming-connection-idle-timeout=30m --v=2 --protect-kernel-defaults=true --container-runtime=remote --event-qps=0 --feature-gates=RotateKubeletServerCertificate=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --cluster-domain=cluster.local --pod-infra-container-image=rancher/mirrored-pause:3.6 --read-only-port=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --node-ip=172.31.26.105 --resolv-conf=/etc/resolv.conf --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
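Every kubelet check in this section reduces to reading one `--flag=value` pair out of the process command line returned by the audit. A rough sketch of that extraction (the `cmdline` string below is a shortened stand-in, not live `ps` output):

```shell
# Shortened stand-in for real `/bin/ps -fC kubelet` output.
cmdline='kubelet --anonymous-auth=false --read-only-port=0 --authorization-mode=Webhook'

# flag_value NAME: print the value of --NAME=VALUE from $cmdline.
flag_value() {
  printf '%s\n' "$cmdline" | tr ' ' '\n' | sed -n "s/^--$1=//p"
}

flag_value anonymous-auth       # prints: false
flag_value authorization-mode   # prints: Webhook
```

kube-bench then compares the extracted value against the expected result, e.g. `'--anonymous-auth' is equal to 'false'`.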
-
-### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
-using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---authorization-mode=Webhook
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 5796 5377 2 18:53 ? 00:00:06 kubelet --anonymous-auth=false --cloud-provider= --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --root-dir=/var/lib/kubelet --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --fail-swap-on=false --hostname-override=ip-172-31-26-105 --authentication-token-webhook=true --cgroups-per-qos=True --cluster-dns=10.43.0.10 --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --address=0.0.0.0 --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105.pem --streaming-connection-idle-timeout=30m --v=2 --protect-kernel-defaults=true --container-runtime=remote --event-qps=0 --feature-gates=RotateKubeletServerCertificate=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --cluster-domain=cluster.local --pod-infra-container-image=rancher/mirrored-pause:3.6 --read-only-port=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --node-ip=172.31.26.105 --resolv-conf=/etc/resolv.conf --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
-the location of the client CA file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---client-ca-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 5796 5377 2 18:53 ? 00:00:06 kubelet --anonymous-auth=false --cloud-provider= --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --root-dir=/var/lib/kubelet --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --fail-swap-on=false --hostname-override=ip-172-31-26-105 --authentication-token-webhook=true --cgroups-per-qos=True --cluster-dns=10.43.0.10 --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --address=0.0.0.0 --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105.pem --streaming-connection-idle-timeout=30m --v=2 --protect-kernel-defaults=true --container-runtime=remote --event-qps=0 --feature-gates=RotateKubeletServerCertificate=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --cluster-domain=cluster.local --pod-infra-container-image=rancher/mirrored-pause:3.6 --read-only-port=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --node-ip=172.31.26.105 --resolv-conf=/etc/resolv.conf --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.4 Ensure that the --read-only-port argument is set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---read-only-port=0
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--read-only-port' is equal to '0' OR '--read-only-port' is not present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 5796 5377 2 18:53 ? 00:00:06 kubelet --anonymous-auth=false --cloud-provider= --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --root-dir=/var/lib/kubelet --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --fail-swap-on=false --hostname-override=ip-172-31-26-105 --authentication-token-webhook=true --cgroups-per-qos=True --cluster-dns=10.43.0.10 --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --address=0.0.0.0 --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105.pem --streaming-connection-idle-timeout=30m --v=2 --protect-kernel-defaults=true --container-runtime=remote --event-qps=0 --feature-gates=RotateKubeletServerCertificate=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --cluster-domain=cluster.local --pod-infra-container-image=rancher/mirrored-pause:3.6 --read-only-port=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --node-ip=172.31.26.105 --resolv-conf=/etc/resolv.conf --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
-value other than 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---streaming-connection-idle-timeout=5m
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 5796 5377 2 18:53 ? 00:00:06 kubelet --anonymous-auth=false --cloud-provider= --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --root-dir=/var/lib/kubelet --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --fail-swap-on=false --hostname-override=ip-172-31-26-105 --authentication-token-webhook=true --cgroups-per-qos=True --cluster-dns=10.43.0.10 --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --address=0.0.0.0 --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105.pem --streaming-connection-idle-timeout=30m --v=2 --protect-kernel-defaults=true --container-runtime=remote --event-qps=0 --feature-gates=RotateKubeletServerCertificate=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --cluster-domain=cluster.local --pod-infra-container-image=rancher/mirrored-pause:3.6 --read-only-port=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --node-ip=172.31.26.105 --resolv-conf=/etc/resolv.conf --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---protect-kernel-defaults=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--protect-kernel-defaults' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 5796 5377 2 18:53 ? 00:00:06 kubelet --anonymous-auth=false --cloud-provider= --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --root-dir=/var/lib/kubelet --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --fail-swap-on=false --hostname-override=ip-172-31-26-105 --authentication-token-webhook=true --cgroups-per-qos=True --cluster-dns=10.43.0.10 --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --address=0.0.0.0 --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105.pem --streaming-connection-idle-timeout=30m --v=2 --protect-kernel-defaults=true --container-runtime=remote --event-qps=0 --feature-gates=RotateKubeletServerCertificate=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --cluster-domain=cluster.local --pod-infra-container-image=rancher/mirrored-pause:3.6 --read-only-port=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --node-ip=172.31.26.105 --resolv-conf=/etc/resolv.conf --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove the --make-iptables-util-chains argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
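-
-For clusters that use a Kubelet config file, the setting would look like this sketch:
-
-```yaml
-apiVersion: kubelet.config.k8s.io/v1beta1
-kind: KubeletConfiguration
-# Equivalent of --make-iptables-util-chains=true (also the Kubelet default)
-makeIPTablesUtilChains: true
-```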
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 5796 5377 2 18:53 ? 00:00:06 kubelet --anonymous-auth=false --cloud-provider= --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --root-dir=/var/lib/kubelet --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --fail-swap-on=false --hostname-override=ip-172-31-26-105 --authentication-token-webhook=true --cgroups-per-qos=True --cluster-dns=10.43.0.10 --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --address=0.0.0.0 --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105.pem --streaming-connection-idle-timeout=30m --v=2 --protect-kernel-defaults=true --container-runtime=remote --event-qps=0 --feature-gates=RotateKubeletServerCertificate=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --cluster-domain=cluster.local --pod-infra-container-image=rancher/mirrored-pause:3.6 --read-only-port=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --node-ip=172.31.26.105 --resolv-conf=/etc/resolv.conf --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and remove the --hostname-override argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-Clusters provisioned by RKE set `--hostname-override` to avoid hostname configuration errors.
-
-### 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---event-qps=0
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
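-
-A config-file sketch of this setting (the value `0` matches the expected result below; tune it to your event-capture needs):
-
-```yaml
-apiVersion: kubelet.config.k8s.io/v1beta1
-kind: KubeletConfiguration
-# Equivalent of --event-qps=0 (no rate limit on event recording)
-eventRecordQPS: 0
-```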
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--event-qps' is equal to '0'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 5796 5377 2 18:53 ? 00:00:06 kubelet --anonymous-auth=false --cloud-provider= --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --root-dir=/var/lib/kubelet --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --fail-swap-on=false --hostname-override=ip-172-31-26-105 --authentication-token-webhook=true --cgroups-per-qos=True --cluster-dns=10.43.0.10 --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --address=0.0.0.0 --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105.pem --streaming-connection-idle-timeout=30m --v=2 --protect-kernel-defaults=true --container-runtime=remote --event-qps=0 --feature-gates=RotateKubeletServerCertificate=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --cluster-domain=cluster.local --pod-infra-container-image=rancher/mirrored-pause:3.6 --read-only-port=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --node-ip=172.31.26.105 --resolv-conf=/etc/resolv.conf --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
-of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
-to the location of the corresponding private key file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
---tls-cert-file=
---tls-private-key-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
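-
-A config-file sketch; the certificate paths below are placeholders, not the paths RKE actually provisions:
-
-```yaml
-apiVersion: kubelet.config.k8s.io/v1beta1
-kind: KubeletConfiguration
-# Placeholder paths - substitute the node's real certificate and key
-tlsCertFile: /etc/kubernetes/ssl/kubelet.pem
-tlsPrivateKeyFile: /etc/kubernetes/ssl/kubelet-key.pem
-```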
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 5796 5377 2 18:53 ? 00:00:06 kubelet --anonymous-auth=false --cloud-provider= --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --root-dir=/var/lib/kubelet --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --fail-swap-on=false --hostname-override=ip-172-31-26-105 --authentication-token-webhook=true --cgroups-per-qos=True --cluster-dns=10.43.0.10 --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --address=0.0.0.0 --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105.pem --streaming-connection-idle-timeout=30m --v=2 --protect-kernel-defaults=true --container-runtime=remote --event-qps=0 --feature-gates=RotateKubeletServerCertificate=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --cluster-domain=cluster.local --pod-infra-container-image=rancher/mirrored-pause:3.6 --read-only-port=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --node-ip=172.31.26.105 --resolv-conf=/etc/resolv.conf --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `rotateCertificates` to `true`, or
-remove it altogether to use the default value.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
-variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
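-
-A config-file sketch that states the default explicitly instead of relying on it:
-
-```yaml
-apiVersion: kubelet.config.k8s.io/v1beta1
-kind: KubeletConfiguration
-# true is the default; setting it explicitly documents the intent
-rotateCertificates: true
-```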
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'{.rotateCertificates}' is present OR '{.rotateCertificates}' is not present
-```
-
-### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
---feature-gates=RotateKubeletServerCertificate=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-Clusters provisioned by RKE handle certificate rotation directly through RKE.
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-or to a subset of these values.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --tls-cipher-suites parameter as follows, or to a subset of these values.
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
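-
-A config-file sketch restricting the Kubelet to the strong cipher subset that the audit output below actually observes:
-
-```yaml
-apiVersion: kubelet.config.k8s.io/v1beta1
-kind: KubeletConfiguration
-tlsCipherSuites:
-  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
-  - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
-  - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
-  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
-  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
-  - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
-```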
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 5796 5377 2 18:53 ? 00:00:06 kubelet --anonymous-auth=false --cloud-provider= --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --root-dir=/var/lib/kubelet --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --fail-swap-on=false --hostname-override=ip-172-31-26-105 --authentication-token-webhook=true --cgroups-per-qos=True --cluster-dns=10.43.0.10 --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --address=0.0.0.0 --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-26-105.pem --streaming-connection-idle-timeout=30m --v=2 --protect-kernel-defaults=true --container-runtime=remote --event-qps=0 --feature-gates=RotateKubeletServerCertificate=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --cluster-domain=cluster.local --pod-infra-container-image=rancher/mirrored-pause:3.6 --read-only-port=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --node-ip=172.31.26.105 --resolv-conf=/etc/resolv.conf --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-## 5.1 RBAC and Service Accounts
-### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
-if they need this role or if they could use a role with fewer privileges.
-Where possible, first bind users to a lower privileged role and then remove the
-clusterrolebinding to the cluster-admin role:
-kubectl delete clusterrolebinding [name]
-
-### 5.1.2 Minimize access to secrets (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove get, list and watch access to Secret objects in the cluster.
-
-### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible replace any use of wildcards in clusterroles and roles with specific
-objects or actions.
-
-### 5.1.4 Minimize access to create pods (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove create access to pod objects in the cluster.
-
-### 5.1.5 Ensure that default service accounts are not actively used. (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Create explicit service accounts wherever a Kubernetes workload requires specific access
-to the Kubernetes API server.
-Modify the configuration of each default service account to include this value:
-`automountServiceAccountToken: false`
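-
-A sketch of a hardened default service account (apply per namespace; the namespace shown is a placeholder):
-
-```yaml
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: default
-  namespace: my-namespace  # placeholder - repeat for each namespace
-automountServiceAccountToken: false
-```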
-
-**Audit Script:** `check_for_default_sa.sh`
-
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l)
-if [[ ${count_sa} -gt 0 ]]; then
- echo "false"
- exit
-fi
-
-for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name")
-do
- for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[].kind=="ServiceAccount" and .subjects[].name=="default") or (.subjects[].kind=="Group" and .subjects[].name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"')
- do
- read kind name <<<$(IFS=","; echo $result)
- resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[] != "podsecuritypolicies")' | wc -l)
- if [[ ${resource_count} -gt 0 ]]; then
- echo "false"
- exit
- fi
- done
-done
-
-
-echo "true"
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_default_sa.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-Error from server (Forbidden): serviceaccounts is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "serviceaccounts" in API group "" at the cluster scope Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-fleet-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-impersonation-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource 
"clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cis-operator-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "default" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "ingress-nginx" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-node-lease" Error from server (Forbidden): 
clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-public" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "local" true
-```
-
-### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Modify the definition of pods and service accounts which do not need to mount service
-account tokens to disable it.
-
-### 5.1.7 Avoid use of system:masters group (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Remove the system:masters group from all users in the cluster.
-
-### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove the impersonate, bind and escalate rights from subjects.
-
-## 5.2 Pod Security Standards
-### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that either Pod Security Admission or an external policy control system is in place
-for every namespace which contains user workloads.
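-
-One built-in option is Pod Security Admission; this sketch enforces the `restricted` profile on a workload namespace (the namespace name is a placeholder):
-
-```yaml
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: my-workloads  # placeholder
-  labels:
-    pod-security.kubernetes.io/enforce: restricted
-    pod-security.kubernetes.io/warn: restricted
-```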
-
-### 5.2.2 Minimize the admission of privileged containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of privileged containers.
-
-### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostPID` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=1
-```
-
-### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostIPC` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=1
-```
-
-### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostNetwork` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=1
-```
-
-### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=1
-```
-
-### 5.2.7 Minimize the admission of root containers (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
-or `MustRunAs` with a UID range that does not include 0 is set.
-
-### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with the `NET_RAW` capability.
-
-### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that `allowedCapabilities` is not present in policies for the cluster unless
-it is set to an empty array.
-
-### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the use of capabilities in applications running on your cluster. Where a namespace
-contains applications which do not require any Linux capabilities to operate, consider adding
-a PSP which forbids the admission of containers which do not drop all capabilities.
-
-### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
-
-### 5.2.12 Minimize the admission of HostPath volumes (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `hostPath` volumes.
-
-### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers which use `hostPort` sections.
-
-## 5.3 Network Policies and CNI
-### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If the CNI plugin in use does not support network policies, consideration should be given to
-making use of a different plugin, or finding an alternate mechanism for restricting traffic
-in the Kubernetes cluster.
-
-### 5.3.2 Ensure that all Namespaces have Network Policies defined (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create NetworkPolicy objects as you need them.
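-
-A common starting point is a default-deny ingress policy per namespace; this sketch uses a placeholder namespace name:
-
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: default-deny-ingress
-  namespace: my-namespace  # placeholder
-spec:
-  podSelector: {}          # selects every pod in the namespace
-  policyTypes:
-    - Ingress              # no ingress rules listed, so all ingress is denied
-```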
-
-**Audit Script:** `check_for_network_policies.sh`
-
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-for namespace in $(kubectl get namespaces --all-namespaces -o json | jq -r '.items[].metadata.name'); do
- policy_count=$(kubectl get networkpolicy -n ${namespace} -o json | jq '.items | length')
- if [[ ${policy_count} -eq 0 ]]; then
- echo "false"
- exit
- fi
-done
-
-echo "true"
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_network_policies.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is present
-```
-
-**Returned Value**:
-
-```console
-true
-```
-
-## 5.4 Secrets Management
-### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If possible, rewrite application code to read Secrets from mounted secret files, rather than
-from environment variables.
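-
-A sketch of mounting a Secret as files instead of environment variables (all names are placeholders):
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: app  # placeholder
-spec:
-  containers:
-    - name: app
-      image: registry.example.com/app:latest  # placeholder
-      volumeMounts:
-        - name: app-secret
-          mountPath: /etc/app/secret
-          readOnly: true
-  volumes:
-    - name: app-secret
-      secret:
-        secretName: app-secret  # placeholder
-```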
-
-### 5.4.2 Consider external secret storage (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Refer to the Secrets management options offered by your cloud provider or a third-party
-secrets management solution.
-
-## 5.5 Extensible Admission Control
-### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and set up image provenance.
-
-## 5.7 General Policies
-### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create namespaces for objects in your deployment as you need
-them.
-
-### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
-For example:
-securityContext:
-  seccompProfile:
-    type: RuntimeDefault
-
-### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
-suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
-Containers.
-
-### 5.7.4 The default namespace should not be used (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
-resources and that all new resources are created in a specific namespace.
-
-**Audit Script:** `check_for_default_ns.sh`
-
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-count=$(kubectl get all -n default -o json | jq .items[] | jq -r 'select((.metadata.name!="kubernetes"))' | jq .metadata.name | wc -l)
-if [[ ${count} -gt 0 ]]; then
- echo "false"
- exit
-fi
-
-echo "true"
-
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_default_ns.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-Error from server (Forbidden): replicationcontrollers is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "replicationcontrollers" in API group "" in the namespace "default" Error from server (Forbidden): services is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "services" in API group "" in the namespace "default" Error from server (Forbidden): daemonsets.apps is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "daemonsets" in API group "apps" in the namespace "default" Error from server (Forbidden): deployments.apps is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "deployments" in API group "apps" in the namespace "default" Error from server (Forbidden): replicasets.apps is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "replicasets" in API group "apps" in the namespace "default" Error from server (Forbidden): statefulsets.apps is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "statefulsets" in API group "apps" in the namespace "default" Error from server (Forbidden): horizontalpodautoscalers.autoscaling is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "default" Error from server (Forbidden): cronjobs.batch is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "cronjobs" in API group "batch" in the namespace "default" Error from server (Forbidden): jobs.batch is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "jobs" in API group "batch" in the namespace "default" true
-```
-
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.25.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.25.md
deleted file mode 100644
index 0cec041061f..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.25.md
+++ /dev/null
@@ -1,3085 +0,0 @@
----
-title: RKE Self-Assessment Guide - CIS Benchmark v1.23 - K8s v1.25
----
-
-This document is a companion to the [RKE Hardening Guide](../../../../pages-for-subheaders/rke1-hardening-guide.md), which provides prescriptive guidance on how to harden RKE clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
-
-
-This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
-
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
-|-----------------|-----------------------|--------------------|
-| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.25 |
-
-This guide walks through the various controls and provides updated example commands to audit compliance in Rancher-created clusters. Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks will return a result of `Not Applicable`.
-
-This document is intended for Rancher operators, security teams, auditors, and decision makers.
-
-For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.23. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
-
-## Testing Methodology
-
-Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files.
-
-Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required for testing and evaluating the results.
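-
-As a quick sanity check before an audit run (a hedged sketch, not part of the official benchmark tooling), you can confirm both prerequisite tools are on the `PATH`:
-
-```bash
-# Print one status line per required audit tool.
-for tool in kubectl jq; do
-  if command -v "$tool" >/dev/null 2>&1; then
-    echo "$tool: found at $(command -v "$tool")"
-  else
-    echo "$tool: MISSING - install it before running the audit"
-  fi
-done
-```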
-
-:::note
-
-This guide only covers `automated` (previously called `scored`) tests.
-
-:::
-
-### Controls
-
-## 1.1 Control Plane Node Configuration Files
-### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver.
-All configuration is passed in as arguments at container runtime.
-
-### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver.
-All configuration is passed in as arguments at container runtime.
-
-### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager.
-All configuration is passed in as arguments at container runtime.
-
-### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager.
-All configuration is passed in as arguments at container runtime.
-
-### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for scheduler.
-All configuration is passed in as arguments at container runtime.
-
-### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for scheduler.
-All configuration is passed in as arguments at container runtime.
-
-### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for etcd.
-All configuration is passed in as arguments at container runtime.
-
-### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for etcd.
-All configuration is passed in as arguments at container runtime.
-
-### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644
-
-**Audit:**
-
-```bash
-ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
-find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644 permissions=600
-```
-
-### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root
-
-**Audit:**
-
-```bash
-ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
-find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root root:root
-```
-
-### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above). For example,
-chmod 700 /var/lib/etcd
-
-**Audit:**
-
-```bash
-stat -c %a /node/var/lib/etcd
-```
-
-**Expected Result**:
-
-```console
-'700' is equal to '700'
-```
-
-**Returned Value**:
-
-```console
-700
-```
-
-### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above).
-For example, chown etcd:etcd /var/lib/etcd
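-
-The lookup described above can be sketched as follows (a hedged example: the fallback path `/var/lib/etcd` is an assumption based on the common default, and the `chown` must be run as root on the etcd node):
-
-```bash
-# Extract the etcd data directory from the running process; fall back to the default.
-DATA_DIR=$(ps -ef | grep -- '--data-dir' | grep -v grep | sed 's/.*--data-dir[= ]\([^ ]*\).*/\1/' | head -n 1)
-DATA_DIR=${DATA_DIR:-/var/lib/etcd}
-echo "etcd data dir: ${DATA_DIR}"
-# chown etcd:etcd "${DATA_DIR}"
-```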
-
-### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE do not store the default Kubernetes kubeconfig credentials file on the nodes.
-
-### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE do not store the default Kubernetes kubeconfig credentials file on the nodes.
-
-### 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for scheduler.
-All configuration is passed in as arguments at container runtime.
-
-### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for scheduler.
-All configuration is passed in as arguments at container runtime.
-
-### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager.
-All configuration is passed in as arguments at container runtime.
-
-### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager.
-All configuration is passed in as arguments at container runtime.
-
-### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown -R root:root /etc/kubernetes/pki/
-
-**Audit Script:** `check_files_owner_in_dir.sh`
-
-```bash
-#!/usr/bin/env bash
-
-# This script is used to ensure the owner is set to root:root for
-# the given directory and all the files in it
-#
-# inputs:
-# $1 = /full/path/to/directory
-#
-# outputs:
-# true/false
-
-INPUT_DIR=$1
-
-if [[ "${INPUT_DIR}" == "" ]]; then
- echo "false"
- exit
-fi
-
-if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then
- echo "false"
- exit
-fi
-
-statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*)
-while read -r statInfoLine; do
- f=$(echo ${statInfoLine} | cut -d' ' -f1)
- p=$(echo ${statInfoLine} | cut -d' ' -f2)
-
- if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then
- if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then
- echo "false"
- exit
- fi
- else
- if [[ "$p" != "root:root" ]]; then
- echo "false"
- exit
- fi
- fi
-done <<< "${statInfoLines}"
-
-
-echo "true"
-exit
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-true
-```
-
-### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 644 /etc/kubernetes/pki/*.crt
-
-**Audit Script:** `check_files_permissions.sh`
-
-```bash
-#!/usr/bin/env bash
-
-# This script is used to ensure the file permissions are set to 644 or
-# more restrictive for all files in a given directory or a wildcard
-# selection of files
-#
-# inputs:
-# $1 = /full/path/to/directory or /path/to/fileswithpattern
-# ex: !(*key).pem
-#
-# $2 (optional) = permission (ex: 600)
-#
-# outputs:
-# true/false
-
-# Turn on "extended glob" for use of '!' in wildcard
-shopt -s extglob
-
-# Turn off history expansion to avoid surprises when using '!'
-set +H
-
-USER_INPUT=$1
-
-if [[ "${USER_INPUT}" == "" ]]; then
- echo "false"
- exit
-fi
-
-
-if [[ -d ${USER_INPUT} ]]; then
- PATTERN="${USER_INPUT}/*"
-else
- PATTERN="${USER_INPUT}"
-fi
-
-PERMISSION=""
-if [[ "$2" != "" ]]; then
- PERMISSION=$2
-fi
-
-FILES_PERMISSIONS=$(stat -c %n\ %a ${PATTERN})
-
-while read -r fileInfo; do
- p=$(echo ${fileInfo} | cut -d' ' -f2)
-
- if [[ "${PERMISSION}" != "" ]]; then
- if [[ "$p" != "${PERMISSION}" ]]; then
- echo "false"
- exit
- fi
- else
- if [[ "$p" != "644" && "$p" != "640" && "$p" != "600" ]]; then
- echo "false"
- exit
- fi
- fi
-done <<< "${FILES_PERMISSIONS}"
-
-
-echo "true"
-exit
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_files_permissions.sh '/node/etc/kubernetes/ssl/!(*key).pem'
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-true
-```
-
-### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 600 /etc/kubernetes/ssl/*key.pem
-
-**Audit Script:** `check_files_permissions.sh`
-
-```bash
-#!/usr/bin/env bash
-
-# This script is used to ensure the file permissions are set to 644 or
-# more restrictive for all files in a given directory or a wildcard
-# selection of files
-#
-# inputs:
-# $1 = /full/path/to/directory or /path/to/fileswithpattern
-# ex: !(*key).pem
-#
-# $2 (optional) = permission (ex: 600)
-#
-# outputs:
-# true/false
-
-# Turn on "extended glob" for use of '!' in wildcard
-shopt -s extglob
-
-# Turn off history expansion to avoid surprises when using '!'
-set +H
-
-USER_INPUT=$1
-
-if [[ "${USER_INPUT}" == "" ]]; then
- echo "false"
- exit
-fi
-
-
-if [[ -d ${USER_INPUT} ]]; then
- PATTERN="${USER_INPUT}/*"
-else
- PATTERN="${USER_INPUT}"
-fi
-
-PERMISSION=""
-if [[ "$2" != "" ]]; then
- PERMISSION=$2
-fi
-
-FILES_PERMISSIONS=$(stat -c %n\ %a ${PATTERN})
-
-while read -r fileInfo; do
- p=$(echo ${fileInfo} | cut -d' ' -f2)
-
- if [[ "${PERMISSION}" != "" ]]; then
- if [[ "$p" != "${PERMISSION}" ]]; then
- echo "false"
- exit
- fi
- else
- if [[ "$p" != "644" && "$p" != "640" && "$p" != "600" ]]; then
- echo "false"
- exit
- fi
- fi
-done <<< "${FILES_PERMISSIONS}"
-
-
-echo "true"
-exit
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_files_permissions.sh '/node/etc/kubernetes/ssl/*key.pem'
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-true
-```
-
-## 1.2 API Server
-### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---anonymous-auth=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and configure alternate mechanisms for authentication. Then,
-edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the --token-auth-file= parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--token-auth-file' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the `DenyServiceExternalIPs`
-from enabled admission plugins.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the --kubelet-https parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-https' is present OR '--kubelet-https' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the
-apiserver and kubelets. Then, edit API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
-kubelet client certificate and key parameters as below.
---kubelet-client-certificate=
---kubelet-client-key=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between
-the apiserver and kubelets. Then, edit the API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
---kubelet-certificate-authority=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-certificate-authority' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value other than AlwaysAllow,
-for example `--authorization-mode=RBAC`.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes Node,
-for example `--authorization-mode=Node,RBAC`.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'Node'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
-for example `--authorization-mode=Node,RBAC`.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'RBAC'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
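-Checks 1.2.7, 1.2.8, and 1.2.9 all inspect the same `--authorization-mode` flag. As a rough sketch (not part of the benchmark itself), the three expectations can be evaluated against a captured command line; the `cmdline` value below is a shortened, hypothetical sample:
-
-```shell
-# Hypothetical shortened sample of the audited command line; in practice,
-# capture it with `ps -ef | grep kube-apiserver | grep -v grep`.
-cmdline='kube-apiserver --secure-port=6443 --authorization-mode=Node,RBAC --profiling=false'
-
-# Extract the value of --authorization-mode.
-modes=$(printf '%s\n' "$cmdline" | grep -o -- '--authorization-mode=[^ ]*' | cut -d= -f2)
-
-# 1.2.7: AlwaysAllow absent; 1.2.8: Node present; 1.2.9: RBAC present.
-result_127=pass; case ",$modes," in *,AlwaysAllow,*) result_127=fail ;; esac
-result_128=fail; case ",$modes," in *,Node,*)        result_128=pass ;; esac
-result_129=fail; case ",$modes," in *,RBAC,*)        result_129=pass ;; esac
-echo "1.2.7=$result_127 1.2.8=$result_128 1.2.9=$result_129"
-# prints: 1.2.7=pass 1.2.8=pass 1.2.9=pass
-```
-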
-### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set the desired limits in a configuration file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-and set the following parameters:
---enable-admission-plugins=...,EventRateLimit,...
---admission-control-config-file=/path/to/configuration/file
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'EventRateLimit'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
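-The EventRateLimit plugin only takes effect when the API server is also given an admission control configuration file that points at the rate-limit settings. A minimal sketch of that pair of files, assuming illustrative file names and limit values that are not taken from the audited cluster:
-
-```shell
-# Sketch only: file names and limit values here are illustrative.
-cat > admission-sketch.yaml <<'EOF'
-apiVersion: apiserver.config.k8s.io/v1
-kind: AdmissionConfiguration
-plugins:
-  - name: EventRateLimit
-    path: eventratelimit-sketch.yaml
-EOF
-
-cat > eventratelimit-sketch.yaml <<'EOF'
-apiVersion: eventratelimit.admission.k8s.io/v1alpha1
-kind: Configuration
-limits:
-  - type: Server
-    qps: 5000
-    burst: 20000
-EOF
-
-grep -c 'EventRateLimit' admission-sketch.yaml   # prints: 1
-```
-
-The first file is what `--admission-control-config-file` would reference; the second holds the actual limits.
-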
-### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
-value that does not include AlwaysAdmit.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
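-The pass condition here is a simple membership test on the `--enable-admission-plugins` list. A sketch, using a shortened hypothetical value rather than the full audited flag:
-
-```shell
-# Hypothetical shortened value of --enable-admission-plugins; in practice,
-# extract it from the audited command line as in the checks above.
-plugins='NamespaceLifecycle,ServiceAccount,NodeRestriction,EventRateLimit'
-
-# The check fails only if AlwaysAdmit appears as a list element.
-result_1211=pass
-case ",$plugins," in *,AlwaysAdmit,*) result_1211=fail ;; esac
-echo "1.2.11=$result_1211"   # prints: 1.2.11=pass
-```
-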
-### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-AlwaysPullImages.
---enable-admission-plugins=...,AlwaysPullImages,...
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-SecurityContextDeny, unless PodSecurityPolicy is already in place.
---enable-admission-plugins=...,SecurityContextDeny,...
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create ServiceAccount objects as per your environment.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
-value that does not include ServiceAccount.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --disable-admission-plugins parameter to
-ensure it does not include NamespaceLifecycle.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.16 Ensure that the admission control plugin NodeRestriction is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to a
-value that includes NodeRestriction.
---enable-admission-plugins=...,NodeRestriction,...
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'NodeRestriction'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.17 Ensure that the --secure-port argument is not set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and either remove the --secure-port parameter or
-set it to a different (non-zero) desired port.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--secure-port' is greater than 0 OR '--secure-port' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
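-This check is numeric rather than a list membership test: any positive port passes, and so does omitting the flag entirely. A sketch against a shortened, hypothetical command line:
-
-```shell
-# Hypothetical shortened command line; the real audit greps the running process.
-cmdline='kube-apiserver --secure-port=6443 --profiling=false'
-port=$(printf '%s\n' "$cmdline" | grep -o -- '--secure-port=[0-9]*' | cut -d= -f2)
-
-# An absent flag also passes, since kube-apiserver then serves on its
-# default secure port rather than on 0.
-if [ -z "$port" ] || [ "$port" -gt 0 ]; then result_1217=pass; else result_1217=fail; fi
-echo "1.2.17=$result_1217"   # prints: 1.2.17=pass
-```
-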
-### 1.2.18 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.19 Ensure that the --audit-log-path argument is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-path parameter to a suitable path and
-file where you would like audit logs to be written, for example,
---audit-log-path=/var/log/apiserver/audit.log
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-path' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.20 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxage parameter to 30
-or as an appropriate number of days, for example,
---audit-log-maxage=30
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxage' is greater or equal to 30
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.21 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
-value. For example,
---audit-log-maxbackup=10
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxbackup' is greater or equal to 10
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.22 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
-For example, to set it as 100 MB, --audit-log-maxsize=100
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxsize' is greater or equal to 100
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.24 Ensure that the --service-account-lookup argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---service-account-lookup=true
-Alternatively, you can delete the --service-account-lookup parameter from this file so
-that the default takes effect.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.25 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --service-account-key-file parameter
-to the public key file for service accounts. For example,
---service-account-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.26 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate and key file parameters.
---etcd-certfile=
---etcd-keyfile=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-certfile' is present AND '--etcd-keyfile' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.27 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the TLS certificate and private key file parameters.
---tls-cert-file=
---tls-private-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.28 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the client certificate authority file.
---client-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.29 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate authority file parameter.
---etcd-cafile=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-cafile' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
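
Each of the `ps`-based audits in this guide reduces to checking whether a flag appears in the captured command line. As a minimal sketch, the same check can be reproduced against a shortened stand-in command line (the `cmdline` value below is illustrative, not taken from a live node):

```bash
# Stand-in for a captured kube-apiserver command line (illustrative values only)
cmdline='kube-apiserver --anonymous-auth=false --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem'

# Presence check, mirroring the expected result "'--etcd-cafile' is present"
if printf '%s\n' "$cmdline" | grep -q -- '--etcd-cafile='; then
  echo "'--etcd-cafile' is present"
fi

# Extract the configured value by splitting the command line on spaces
printf '%s\n' "$cmdline" | tr ' ' '\n' | sed -n 's/^--etcd-cafile=//p'
```

On a real control plane node, the stand-in assignment would be replaced by the audit command shown above.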
-
-### 1.2.30 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --encryption-provider-config parameter to the path of that file.
-For example, --encryption-provider-config=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--encryption-provider-config' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.31 Ensure that encryption providers are appropriately configured (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-In this file, choose aescbc, kms or secretbox as the encryption provider.
-
-**Audit Script:** `check_encryption_provider_config.sh`
-
-```bash
-#!/usr/bin/env bash
-
-# This script is used to check that the encryption provider config is set to aescbc
-#
-# outputs:
-# true/false
-
-# TODO: Figure out the file location from the kube-apiserver commandline args
-ENCRYPTION_CONFIG_FILE="/node/etc/kubernetes/ssl/encryption.yaml"
-
-if [[ ! -f "${ENCRYPTION_CONFIG_FILE}" ]]; then
- echo "false"
- exit
-fi
-
-for provider in "$@"
-do
- if grep "$provider" "${ENCRYPTION_CONFIG_FILE}"; then
- echo "true"
- exit
- fi
-done
-
-echo "false"
-exit
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_encryption_provider_config.sh aescbc
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-- aescbc: true
-```
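
The core of the audit script above can be exercised locally against a stand-in EncryptionConfiguration written to a temporary file (the contents below are illustrative; a real cluster's file lives at the path hard-coded in the script):

```bash
# Write a stand-in EncryptionConfiguration to a temp file (illustrative contents)
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: c2VjcmV0IGlzIHNlY3VyZQ==
EOF

# Same check as check_encryption_provider_config.sh: true if the provider appears
if grep -q 'aescbc' "$cfg"; then echo true; else echo false; fi
rm -f "$cfg"
```

The script accepts several provider names as arguments; this sketch hard-codes `aescbc` for brevity.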
-
-### 1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
-TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
-TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
-TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
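
The membership test behind this expected result can be sketched as follows, using shortened stand-in lists rather than the full CIS set:

```bash
# Shortened stand-in lists (illustrative, not the full CIS-allowed set)
allowed='TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305'
configured='TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384'

# Every configured suite must be a member of the allowed list
ok=true
for suite in $(printf '%s' "$configured" | tr ',' ' '); do
  case ",$allowed," in
    *",$suite,"*) ;;    # suite is in the allowed list
    *) ok=false ;;      # suite is not allowed; the check fails
  esac
done
echo "$ok"
```

In the real audit, `configured` comes from the `--tls-cipher-suites` value on the captured command line.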
-
-## 1.3 Controller Manager
-### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
-for example, --terminated-pod-gc-threshold=10
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--terminated-pod-gc-threshold' is present
-```
-
-**Returned Value**:
-
-```console
-root 5506 5484 2 22:01 ? 00:00:05 kube-controller-manager --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --v=2 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --allow-untagged-cloud=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --cluster-cidr=10.42.0.0/16 --node-monitor-grace-period=40s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --profiling=false --configure-cloud-routes=false --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-```
-
-### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 5506 5484 2 22:01 ? 00:00:05 kube-controller-manager --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --v=2 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --allow-untagged-cloud=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --cluster-cidr=10.42.0.0/16 --node-monitor-grace-period=40s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --profiling=false --configure-cloud-routes=false --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-```
-
-### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node to set the below parameter.
---use-service-account-credentials=true
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--use-service-account-credentials' is not equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 5506 5484 2 22:01 ? 00:00:05 kube-controller-manager --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --v=2 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --allow-untagged-cloud=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --cluster-cidr=10.42.0.0/16 --node-monitor-grace-period=40s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --profiling=false --configure-cloud-routes=false --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-```
-
-### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --service-account-private-key-file parameter
-to the private key file for service accounts.
---service-account-private-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 5506 5484 2 22:01 ? 00:00:05 kube-controller-manager --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --v=2 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --allow-untagged-cloud=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --cluster-cidr=10.42.0.0/16 --node-monitor-grace-period=40s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --profiling=false --configure-cloud-routes=false --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-```
-
-### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
---root-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--root-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 5506 5484 2 22:01 ? 00:00:05 kube-controller-manager --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --v=2 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --allow-untagged-cloud=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --cluster-cidr=10.42.0.0/16 --node-monitor-grace-period=40s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --profiling=false --configure-cloud-routes=false --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-```
-
-### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
---feature-gates=RotateKubeletServerCertificate=true
-
-Cluster provisioned by RKE handles certificate rotation directly through RKE.
-
-### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is present OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5506 5484 2 22:01 ? 00:00:05 kube-controller-manager --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --v=2 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --allow-untagged-cloud=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --cluster-cidr=10.42.0.0/16 --node-monitor-grace-period=40s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --profiling=false --configure-cloud-routes=false --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-```
-
-## 1.4 Scheduler
-### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-scheduler | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 5671 5649 0 22:01 ? 00:00:01 kube-scheduler --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml
-```
-
-### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-scheduler | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is present OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5671 5649 0 22:01 ? 00:00:01 kube-scheduler --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml
-```
-
-## 2 Etcd Node Configuration
-### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the etcd service documentation and configure TLS encryption.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
-on the master node and set the below parameters.
---cert-file=
---key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--cert-file' is present AND '--key-file' is present
-```
-
-**Returned Value**:
-
-```console
-etcd 5188 5167 3 22:01 ? 00:00:08 /usr/local/bin/etcd --client-cert-auth=true --data-dir=/var/lib/rancher/etcd/ --initial-advertise-peer-urls=https://172.31.31.51:2380 --listen-peer-urls=https://172.31.31.51:2380 --initial-cluster=etcd-ip-172-31-31-51=https://172.31.31.51:2380 --initial-cluster-state=new --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --peer-client-cert-auth=true --listen-client-urls=https://172.31.31.51:2379 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --advertise-client-urls=https://172.31.31.51:2379 --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-31-51 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --election-timeout=5000 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --heartbeat-interval=500 root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem root 19036 18926 5 22:05 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.23-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
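
An "A AND B" expected result maps to two independent presence checks on the same command line. A minimal sketch against a stand-in etcd command line (the paths below are illustrative):

```bash
# Stand-in etcd command line (illustrative paths only)
cmdline='etcd --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem'

# Both flags must be present for the check to pass
if printf '%s\n' "$cmdline" | grep -q -- '--cert-file=' &&
   printf '%s\n' "$cmdline" | grep -q -- '--key-file='; then
  echo pass
else
  echo fail
fi
```

On a real etcd node, `cmdline` would instead come from the audit command shown above.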
-
-### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and set the below parameter.
---client-cert-auth="true"
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--client-cert-auth' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-etcd 5188 5167 3 22:01 ? 00:00:08 /usr/local/bin/etcd --client-cert-auth=true --data-dir=/var/lib/rancher/etcd/ --initial-advertise-peer-urls=https://172.31.31.51:2380 --listen-peer-urls=https://172.31.31.51:2380 --initial-cluster=etcd-ip-172-31-31-51=https://172.31.31.51:2380 --initial-cluster-state=new --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --peer-client-cert-auth=true --listen-client-urls=https://172.31.31.51:2379 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --advertise-client-urls=https://172.31.31.51:2379 --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-31-51 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --election-timeout=5000 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --heartbeat-interval=500 root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem root 19036 18926 4 22:05 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.23-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
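-
-The automated audits in this section all work the same way: capture the process command line with `ps` and inspect the value of a flag. A minimal bash sketch of that flag extraction (the `flag_value` helper and the shortened command line are illustrative, not part of kube-bench):
-
```bash
# Extract the value of a --flag=value pair from a process command line,
# roughly what the automated audit does after running ps.
# Note: this naive match requires the full flag name; it does not guard
# against one flag name being a suffix of another.
flag_value() {
  cmdline=$1   # full command line
  flag=$2      # flag name without the leading dashes
  printf '%s\n' "$cmdline" | grep -o -- "--$flag=[^ ]*" | cut -d= -f2
}

# Example against a shortened etcd command line:
flag_value "/usr/local/bin/etcd --client-cert-auth=true --auto-tls=false" client-cert-auth
# prints: true
```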
-
-### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and either remove the --auto-tls parameter or set it to false.
---auto-tls=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-31-51 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem ETCDCTL_ENDPOINTS=https://172.31.31.51:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/
-```
-
-### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the etcd service documentation and configure peer TLS encryption as appropriate
-for your etcd cluster.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
-master node and set the below parameters.
---peer-cert-file=
---peer-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--peer-cert-file' is present AND '--peer-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-etcd 5188 5167 3 22:01 ? 00:00:08 /usr/local/bin/etcd --client-cert-auth=true --data-dir=/var/lib/rancher/etcd/ --initial-advertise-peer-urls=https://172.31.31.51:2380 --listen-peer-urls=https://172.31.31.51:2380 --initial-cluster=etcd-ip-172-31-31-51=https://172.31.31.51:2380 --initial-cluster-state=new --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --peer-client-cert-auth=true --listen-client-urls=https://172.31.31.51:2379 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --advertise-client-urls=https://172.31.31.51:2379 --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-31-51 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --election-timeout=5000 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --heartbeat-interval=500 root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem root 19036 18926 2 22:05 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.23-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and set the below parameter.
---peer-client-cert-auth=true
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--peer-client-cert-auth' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-etcd 5188 5167 3 22:01 ? 00:00:08 /usr/local/bin/etcd --client-cert-auth=true --data-dir=/var/lib/rancher/etcd/ --initial-advertise-peer-urls=https://172.31.31.51:2380 --listen-peer-urls=https://172.31.31.51:2380 --initial-cluster=etcd-ip-172-31-31-51=https://172.31.31.51:2380 --initial-cluster-state=new --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --peer-client-cert-auth=true --listen-client-urls=https://172.31.31.51:2379 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --advertise-client-urls=https://172.31.31.51:2379 --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-31-51 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --election-timeout=5000 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --heartbeat-interval=500 root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem root 19036 18926 3 22:05 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.23-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and either remove the --peer-auto-tls parameter or set it to false.
---peer-auto-tls=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-31-51 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem ETCDCTL_ENDPOINTS=https://172.31.31.51:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/
-```
-
-### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-[Manual test]
-Follow the etcd documentation and create a dedicated certificate authority setup for the
-etcd service.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
-master node and set the below parameter.
---trusted-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--trusted-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-etcd 5188 5167 3 22:01 ? 00:00:08 /usr/local/bin/etcd --client-cert-auth=true --data-dir=/var/lib/rancher/etcd/ --initial-advertise-peer-urls=https://172.31.31.51:2380 --listen-peer-urls=https://172.31.31.51:2380 --initial-cluster=etcd-ip-172-31-31-51=https://172.31.31.51:2380 --initial-cluster-state=new --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --peer-client-cert-auth=true --listen-client-urls=https://172.31.31.51:2379 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --advertise-client-urls=https://172.31.31.51:2379 --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-31-51 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --election-timeout=5000 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --heartbeat-interval=500 root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem root 19036 18926 2 22:05 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.23-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
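-
-Check 2.7 only verifies that --trusted-ca-file is set; the intent is that this CA is dedicated to etcd rather than shared with the rest of the cluster. As a rough illustration of what a dedicated CA looks like (file names and lifetime are arbitrary examples; in practice RKE provisions and manages these certificates itself):
-
```bash
# Generate a self-signed CA used only for etcd (illustrative sketch; do not
# run this by hand against a live RKE cluster).
openssl genrsa -out etcd-ca-key.pem 2048
openssl req -x509 -new -nodes -key etcd-ca-key.pem \
  -sha256 -days 3650 -subj "/CN=etcd-ca" -out etcd-ca.pem

# The resulting etcd-ca.pem is the kind of file --trusted-ca-file points at.
```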
-
-## 3.1 Authentication and Authorization
-### 3.1.1 Client certificate authentication should not be used for users (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
-implemented in place of client certificates.
-
-## 3.2 Logging
-### 3.2.1 Ensure that a minimal audit policy is created (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Create an audit policy file for your cluster.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-policy-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:34 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the audit policy provided for the cluster and ensure that it covers
-at least the following areas,
-- Access to Secrets managed by the cluster. Care should be taken to only
- log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
- order to avoid risk of logging sensitive data.
-- Modification of Pod and Deployment objects.
-- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
- For most requests, minimally logging at the Metadata level is recommended
- (the most basic level of logging).
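-
-A minimal policy covering the areas listed above could look like the following sketch, written out as a heredoc. The path and rule set are illustrative examples, not the audit policy RKE ships at /etc/kubernetes/audit-policy.yaml:
-
```bash
# Write an example audit policy: Secrets-like resources at Metadata level,
# Pod/Deployment modifications with request bodies, and the
# exec/port-forward/proxy subresources. Path and rules are illustrative.
cat > /tmp/audit-policy-example.yaml <<'EOF'
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Secrets, ConfigMaps and TokenReviews: Metadata only, so request
  # bodies containing sensitive data are never logged.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
      - group: "authentication.k8s.io"
        resources: ["tokenreviews"]
  # Modifications of Pods and Deployments.
  - level: Request
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: ""
        resources: ["pods"]
      - group: "apps"
        resources: ["deployments"]
  # Use of exec, port-forward and proxy subresources.
  - level: Metadata
    resources:
      - group: ""
        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
  # Everything else at the most basic level.
  - level: Metadata
EOF
```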
-
-## 4.1 Worker Node Configuration Files
-### 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service.
-All configuration is passed in as arguments at container run time.
-
-### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service.
-All configuration is passed in as arguments at container run time.
-
-### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive OR '/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml' is not present
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
-
-### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chown root:root /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present OR '/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml' is not present
-```
-
-### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /etc/kubernetes/ssl/kubecfg-kube-node.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
-
-### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /etc/kubernetes/ssl/kubecfg-kube-node.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the file permissions of the
---client-ca-file: `chmod 644 <filename>`
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the ownership of the --client-ca-file:
-`chown root:root <filename>`
-
-**Audit:**
-
-```bash
-stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
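-
-The file checks in 4.1.3 through 4.1.8 reduce to two conditions: the file mode has no bits beyond 644, and the owner is root:root. A bash sketch of both conditions (the helper names are made up here, and GNU stat is assumed):
-
```bash
# Pass when the file's mode is 644 or more restrictive. The mask 0133
# covers owner-execute plus group/other write and execute, so any overlap
# with it means the file is more permissive than 644.
check_mode_644() {
  mode=$(stat -c %a "$1")
  [ $(( 0$mode & 0133 )) -eq 0 ]
}

# Pass when the file is owned by user root and group root.
check_root_owned() {
  [ "$(stat -c %U:%G "$1")" = "root:root" ]
}
```

For example, `check_mode_644 /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml && check_root_owned /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml` reproduces the pass condition of 4.1.3/4.1.4 on a worker node.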
-
-### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chmod 644 /var/lib/kubelet/config.yaml
-
-Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet.
-All configuration is passed in as arguments at container run time.
-
-### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chown root:root /var/lib/kubelet/config.yaml
-
-Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet.
-All configuration is passed in as arguments at container run time.
-
-## 4.2 Kubelet
-### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
-`false`.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
-`--anonymous-auth=false`
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
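-
-RKE passes these settings as kubelet flags rather than through a config file. For clusters that do use a kubelet config file, the equivalents of checks 4.2.1, 4.2.2 and 4.2.3 look roughly like this sketch (the path and CA file location are illustrative):
-
```bash
# Write a KubeletConfiguration fragment equivalent to the hardened flags:
# anonymous auth off, webhook authorization, client CA set.
# The path is an example, not a file RKE creates.
cat > /tmp/kubelet-config-example.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false                                 # 4.2.1 --anonymous-auth=false
  x509:
    clientCAFile: /etc/kubernetes/ssl/kube-ca.pem  # 4.2.3 --client-ca-file
authorization:
  mode: Webhook                                    # 4.2.2 not AlwaysAllow
EOF
```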
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
-using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---authorization-mode=Webhook
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
-the location of the client CA file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---client-ca-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.4 Ensure that the --read-only-port argument is set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---read-only-port=0
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
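
Several of these checks follow the same pattern: the flag passes when it is absent or carries a safe value. A minimal sketch of that logic (the `get_flag` helper is hypothetical, not part of the benchmark tooling):

```shell
# Hypothetical helper: print the value of flag --$2 from command line $1 (empty if absent).
get_flag() {
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n "s/^--$2=//p"
}

# This check passes when --read-only-port is absent or explicitly 0.
port=$(get_flag "kubelet --read-only-port=0 --v=2" "read-only-port")
if [ -z "$port" ] || [ "$port" = "0" ]; then echo "pass"; else echo "fail"; fi
```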
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--read-only-port' is equal to '0' OR '--read-only-port' is not present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
-value other than 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---streaming-connection-idle-timeout=5m
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---protect-kernel-defaults=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--protect-kernel-defaults' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove the --make-iptables-util-chains argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and remove the --hostname-override argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-Clusters provisioned by RKE set the --hostname-override argument to avoid hostname configuration errors.
-
-### 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in the KUBELET_SYSTEM_PODS_ARGS variable.
---event-qps=0
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--event-qps' is equal to '0'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
-of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
-to the location of the corresponding private key file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
---tls-cert-file=
---tls-private-key-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
-remove it altogether to use the default value.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove the --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
-variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'{.rotateCertificates}' is present OR '{.rotateCertificates}' is not present
-```
-
-### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
---feature-gates=RotateKubeletServerCertificate=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-Clusters provisioned by RKE handle certificate rotation directly through RKE.
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-or to a subset of these values.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --tls-cipher-suites parameter as follows, or to a subset of these values.
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
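
The subset requirement ("or to a subset of these values") can be sketched as a membership test over comma-separated cipher lists (the `check_ciphers` helper is hypothetical, not part of the benchmark tooling):

```shell
# Hypothetical helper: prints "pass" if every cipher in $1 appears in allowed list $2.
check_ciphers() {
  old_ifs=$IFS; IFS=','
  result=pass
  for c in $1; do
    case ",$2," in
      *",$c,"*) ;;          # cipher is in the allowed list
      *) result=fail ;;     # anything outside the list fails the check
    esac
  done
  IFS=$old_ifs
  echo "$result"
}

check_ciphers "TLS_RSA_WITH_AES_128_GCM_SHA256" \
  "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256"
```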
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-## 5.1 RBAC and Service Accounts
-### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
-if they need this role or if they could use a role with fewer privileges.
-Where possible, first bind users to a lower privileged role and then remove the
-clusterrolebinding to the cluster-admin role:
-kubectl delete clusterrolebinding [name]
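
A sketch of the identification step, run here against inline sample data rather than live `kubectl get clusterrolebindings -o json` output (assumes `jq` is installed; the binding names are illustrative):

```shell
# Sample data standing in for `kubectl get clusterrolebindings -o json`.
bindings='{"items":[
  {"metadata":{"name":"cluster-admin"},"roleRef":{"name":"cluster-admin"}},
  {"metadata":{"name":"view-binding"},"roleRef":{"name":"view"}}]}'

# Print only the bindings that grant cluster-admin.
printf '%s' "$bindings" | jq -r '.items[] | select(.roleRef.name=="cluster-admin") | .metadata.name'
```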
-
-### 5.1.2 Minimize access to secrets (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove get, list and watch access to Secret objects in the cluster.
-
-### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, replace any use of wildcards in clusterroles and roles with specific
-objects or actions.
-
-### 5.1.4 Minimize access to create pods (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove create access to pod objects in the cluster.
-
-### 5.1.5 Ensure that default service accounts are not actively used. (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Create explicit service accounts wherever a Kubernetes workload requires specific access
-to the Kubernetes API server.
-Modify the configuration of each default service account to include this value:
-automountServiceAccountToken: false
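
The core of the audit script's first query is a `jq` filter that treats a missing `automountServiceAccountToken` as `true`. It can be exercised on inline sample data (sketch only; assumes `jq` is installed, and the namespaces are illustrative):

```shell
# Sample stand-in for `kubectl get serviceaccounts --all-namespaces -o json`.
sas='{"items":[
  {"metadata":{"name":"default","namespace":"demo"},"automountServiceAccountToken":true},
  {"metadata":{"name":"default","namespace":"ok"},"automountServiceAccountToken":false}]}'

# Count default service accounts that still auto-mount their token (null counts as true).
printf '%s' "$sas" | jq '[.items[]
  | select(.metadata.name=="default")
  | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))]
  | length'
```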
-
-**Audit Script:** `check_for_default_sa.sh`
-
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l)
-if [[ ${count_sa} -gt 0 ]]; then
- echo "false"
- exit
-fi
-
-for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name")
-do
- for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[].kind=="ServiceAccount" and .subjects[].name=="default") or (.subjects[].kind=="Group" and .subjects[].name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"')
- do
- read kind name <<<$(IFS=","; echo $result)
- resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[] != "podsecuritypolicies")' | wc -l)
- if [[ ${resource_count} -gt 0 ]]; then
- echo "false"
- exit
- fi
- done
-done
-
-
-echo "true"
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_default_sa.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-Error from server (Forbidden): serviceaccounts is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "serviceaccounts" in API group "" at the cluster scope Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-fleet-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-impersonation-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cis-operator-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "default" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "ingress-nginx" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-node-lease" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-public" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "local" true
-```
-
-### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Modify the definition of pods and service accounts which do not need to mount service
-account tokens to disable it.
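
For pods, the token mount can be disabled directly in the spec. A minimal sketch (the pod name and image are illustrative, not taken from this benchmark):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod            # illustrative name
spec:
  automountServiceAccountToken: false
  containers:
    - name: app
      image: nginx              # illustrative image
```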
-
-### 5.1.7 Avoid use of system:masters group (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Remove the system:masters group from all users in the cluster.
-
-### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove the impersonate, bind and escalate rights from subjects.
-
-## 5.2 Pod Security Standards
-### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that either Pod Security Admission or an external policy control system is in place
-for every namespace which contains user workloads.
-
-### 5.2.2 Minimize the admission of privileged containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of privileged containers.
-
-### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostPID` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostIPC` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostNetwork` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.7 Minimize the admission of root containers (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
-or `MustRunAs` with the range of UIDs not including 0, is set.
-
-### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with the `NET_RAW` capability.
-
-### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that `allowedCapabilities` is not present in policies for the cluster unless
-it is set to an empty array.
-
-### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the use of capabilities in applications running on your cluster. Where a namespace
-contains applications which do not require any Linux capabilities to operate, consider adding
-a PSP which forbids the admission of containers which do not drop all capabilities.
-
-### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
-
-### 5.2.12 Minimize the admission of HostPath volumes (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `hostPath` volumes.
-
-### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers which use `hostPort` sections.
-
-## 5.3 Network Policies and CNI
-### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If the CNI plugin in use does not support network policies, consideration should be given to
-making use of a different plugin, or finding an alternate mechanism for restricting traffic
-in the Kubernetes cluster.
-
-### 5.3.2 Ensure that all Namespaces have Network Policies defined (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create NetworkPolicy objects as you need them.
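-
-As a starting point, a default-deny NetworkPolicy in each namespace satisfies this check; the namespace name below is hypothetical:
-
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: default-deny-all
-  namespace: app-ns      # hypothetical namespace
-spec:
-  podSelector: {}        # an empty selector matches every Pod in the namespace
-  policyTypes:
-    - Ingress
-    - Egress
-```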
-
-**Audit Script:** `check_for_network_policies.sh`
-
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-for namespace in $(kubectl get namespaces --all-namespaces -o json | jq -r '.items[].metadata.name'); do
- policy_count=$(kubectl get networkpolicy -n ${namespace} -o json | jq '.items | length')
- if [[ ${policy_count} -eq 0 ]]; then
- echo "false"
- exit
- fi
-done
-
-echo "true"
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_network_policies.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is present
-```
-
-**Returned Value**:
-
-```console
-true
-```
-
-## 5.4 Secrets Management
-### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If possible, rewrite application code to read Secrets from mounted secret files, rather than
-from environment variables.
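-
-A minimal sketch of mounting a Secret as files rather than exposing it through environment variables (all names are hypothetical):
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: secret-as-file          # hypothetical Pod name
-spec:
-  containers:
-    - name: app
-      image: nginx              # hypothetical image
-      volumeMounts:
-        - name: creds
-          mountPath: /etc/creds # each Secret key appears as a file here
-          readOnly: true
-  volumes:
-    - name: creds
-      secret:
-        secretName: app-creds   # hypothetical Secret name
-```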
-
-### 5.4.2 Consider external secret storage (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Refer to the Secrets management options offered by your cloud provider or a third-party
-secrets management solution.
-
-## 5.5 Extensible Admission Control
-### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and setup image provenance.
-
-## 5.7 General Policies
-### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create namespaces for objects in your deployment as you need
-them.
-
-### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
-An example is as below:
-```yaml
-securityContext:
-  seccompProfile:
-    type: RuntimeDefault
-```
-
-### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
-suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
-Containers.
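-
-A restrictive container-level `securityContext`, sketched below with hypothetical names, also addresses several of the controls in section 5.2:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: hardened-app                  # hypothetical Pod name
-spec:
-  containers:
-    - name: app
-      image: nginx                    # hypothetical image
-      securityContext:
-        runAsNonRoot: true
-        allowPrivilegeEscalation: false
-        readOnlyRootFilesystem: true
-        capabilities:
-          drop: ["ALL"]
-```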
-
-### 5.7.4 The default namespace should not be used (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
-resources and that all new resources are created in a specific namespace.
-
-**Audit Script:** `check_for_default_ns.sh`
-
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-count=$(kubectl get all -n default -o json | jq .items[] | jq -r 'select((.metadata.name!="kubernetes"))' | jq .metadata.name | wc -l)
-if [[ ${count} -gt 0 ]]; then
- echo "false"
- exit
-fi
-
-echo "true"
-
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_default_ns.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-Error from server (Forbidden): replicationcontrollers is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "replicationcontrollers" in API group "" in the namespace "default" Error from server (Forbidden): services is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "services" in API group "" in the namespace "default" Error from server (Forbidden): daemonsets.apps is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "daemonsets" in API group "apps" in the namespace "default" Error from server (Forbidden): deployments.apps is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "deployments" in API group "apps" in the namespace "default" Error from server (Forbidden): replicasets.apps is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "replicasets" in API group "apps" in the namespace "default" Error from server (Forbidden): statefulsets.apps is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "statefulsets" in API group "apps" in the namespace "default" Error from server (Forbidden): horizontalpodautoscalers.autoscaling is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "default" Error from server (Forbidden): cronjobs.batch is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "cronjobs" in API group "batch" in the namespace "default" Error from server (Forbidden): jobs.batch is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "jobs" in API group "batch" in the namespace "default" true
-```
-
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.24-k8s-v1.24.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.24-k8s-v1.24.md
new file mode 100644
index 00000000000..2246a13f6d0
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.24-k8s-v1.24.md
@@ -0,0 +1,3046 @@
+---
+title: RKE Self-Assessment Guide - CIS Benchmark v1.24 - K8s v1.24
+---
+
+
+
+
+
+This document is a companion to the [RKE Hardening Guide](rke1-hardening-guide.md), which provides prescriptive guidance for hardening a production RKE cluster managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
+
+This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
+
+| Rancher Version | CIS Benchmark Version | Kubernetes Version |
+|-----------------|-----------------------|--------------------|
+| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 |
+
+This guide walks through the various controls and provides updated example commands to audit compliance in clusters created by Rancher. Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark do not apply. These checks will return a result of `Not Applicable`.
+
+This document is intended for Rancher operators, security teams, auditors, and decision makers.
+
+For more detail about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.24. You can download the benchmark, after creating a free account, from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
+
+## Testing Methodology
+
+Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by the arguments passed to the container at the time of initialization, not via configuration files.
+
+Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE nodes. The commands also make use of [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/), both of which are required for the tests and the evaluation of test results.
+
+:::note
+
+This guide only covers `automated` (previously called `scored`) tests.
+
+:::
+
+### Controls
+
+## 1.1 Control Plane Node Configuration Files
+### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600
+
+**Audit:**
+
+```bash
+ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600 permissions=644
+```
+
+### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root
+
+**Audit:**
+
+```bash
+ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root root:root
+```
+
+### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above). For example,
+chmod 700 /var/lib/etcd
+
+**Audit:**
+
+```bash
+stat -c %a /node/var/lib/etcd
+```
+
+**Expected Result**:
+
+```console
+'700' is equal to '700'
+```
+
+**Returned Value**:
+
+```console
+700
+```
+
+### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above).
+For example, chown etcd:etcd /var/lib/etcd
+
+### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes.
+
+### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes.
+
+### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the master node.
+For example,
+chown -R root:root /etc/kubernetes/pki/
+
+**Audit Script:** `check_files_owner_in_dir.sh`
+
+```bash
+#!/usr/bin/env bash
+
+# This script is used to ensure the owner is set to root:root for
+# the given directory and all the files in it
+#
+# inputs:
+# $1 = /full/path/to/directory
+#
+# outputs:
+# true/false
+
+INPUT_DIR=$1
+
+if [[ "${INPUT_DIR}" == "" ]]; then
+ echo "false"
+ exit
+fi
+
+if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then
+ echo "false"
+ exit
+fi
+
+statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*)
+while read -r statInfoLine; do
+ f=$(echo ${statInfoLine} | cut -d' ' -f1)
+ p=$(echo ${statInfoLine} | cut -d' ' -f2)
+
+ if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then
+ if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then
+ echo "false"
+ exit
+ fi
+ else
+ if [[ "$p" != "root:root" ]]; then
+ echo "false"
+ exit
+ fi
+ fi
+done <<< "${statInfoLines}"
+
+
+echo "true"
+exit
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl
+```
+
+**Expected Result**:
+
+```console
+'true' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+true
+```
+
+### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} +
+
+**Audit:**
+
+```bash
+find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' | xargs stat -c permissions=%a
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600
+```
+
+### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod -R 600 /etc/kubernetes/ssl/*key.pem
+
+**Audit Script:** `check_files_permissions.sh`
+
+```bash
+#!/usr/bin/env bash
+
+# This script is used to ensure the file permissions are set to 644 or
+# more restrictive for all files in a given directory or a wildcard
+# selection of files
+#
+# inputs:
+# $1 = /full/path/to/directory or /path/to/fileswithpattern
+# ex: !(*key).pem
+#
+# $2 (optional) = permission (ex: 600)
+#
+# outputs:
+# true/false
+
+# Turn on "extended glob" for use of '!' in wildcard
+shopt -s extglob
+
+# Turn off history to avoid surprises when using '!'
+set -H
+
+USER_INPUT=$1
+
+if [[ "${USER_INPUT}" == "" ]]; then
+ echo "false"
+ exit
+fi
+
+
+if [[ -d ${USER_INPUT} ]]; then
+ PATTERN="${USER_INPUT}/*"
+else
+ PATTERN="${USER_INPUT}"
+fi
+
+PERMISSION=""
+if [[ "$2" != "" ]]; then
+ PERMISSION=$2
+fi
+
+FILES_PERMISSIONS=$(stat -c %n\ %a ${PATTERN})
+
+while read -r fileInfo; do
+ p=$(echo ${fileInfo} | cut -d' ' -f2)
+
+ if [[ "${PERMISSION}" != "" ]]; then
+ if [[ "$p" != "${PERMISSION}" ]]; then
+ echo "false"
+ exit
+ fi
+ else
+ if [[ "$p" != "644" && "$p" != "640" && "$p" != "600" ]]; then
+ echo "false"
+ exit
+ fi
+ fi
+done <<< "${FILES_PERMISSIONS}"
+
+
+echo "true"
+exit
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_files_permissions.sh '/node/etc/kubernetes/ssl/*key.pem'
+```
+
+**Expected Result**:
+
+```console
+'true' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+true
+```
+
+## 1.2 API Server
+### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--anonymous-auth=false
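+
+Because RKE runs kube-apiserver as a Docker container rather than from a static pod manifest, this flag is normally set in the cluster configuration instead; a sketch assuming the standard `cluster.yml` `extra_args` mechanism:
+
+```yaml
+services:
+  kube-api:
+    extra_args:
+      anonymous-auth: "false"   # passed through as a container argument
+```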
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and configure alternate mechanisms for authentication. Then,
+edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and remove the --token-auth-file= parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--token-auth-file' is not present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and remove the `DenyServiceExternalIPs`
+from enabled admission plugins.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and remove the --kubelet-https parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--kubelet-https' is present OR '--kubelet-https' is not present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the
+apiserver and kubelets. Then, edit API server pod specification file
+/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
+kubelet client certificate and key parameters as below.
+--kubelet-client-certificate=
+--kubelet-client-key=
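+
+As an illustration of where these flags end up, here is a minimal sketch of a kube-apiserver static pod manifest fragment. The certificate paths are taken from the RKE defaults visible in the audit output below and are environment-specific, not prescriptive:
+
+```yaml
+# Illustrative only: fragment of a kube-apiserver static pod manifest.
+# Paths mirror the RKE defaults seen in the audit output; adjust for your cluster.
+spec:
+  containers:
+  - name: kube-apiserver
+    command:
+    - kube-apiserver
+    - --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem
+    - --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem
+```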
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between
+the apiserver and kubelets. Then, edit the API server pod specification file
+/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
+--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
+--kubelet-certificate-authority=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--kubelet-certificate-authority' is present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
+One such example is shown below.
+--authorization-mode=RBAC
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' does not have 'AlwaysAllow'
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to a value that includes Node.
+--authorization-mode=Node,RBAC
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' has 'Node'
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
+for example `--authorization-mode=Node,RBAC`.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' has 'RBAC'
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set the desired limits in a configuration file.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+and set the following parameters.
+--enable-admission-plugins=...,EventRateLimit,...
+--admission-control-config-file=
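+
+The two files referenced above fit together as follows. This is a hedged sketch: the admission configuration path matches the `--admission-control-config-file=/etc/kubernetes/admission.yaml` flag in the audit output below, while the limits file path and the qps/burst values are examples only, not recommendations:
+
+```yaml
+# File 1 (example): /etc/kubernetes/admission.yaml
+# AdmissionConfiguration wiring in the EventRateLimit plugin.
+apiVersion: apiserver.config.k8s.io/v1
+kind: AdmissionConfiguration
+plugins:
+- name: EventRateLimit
+  path: /etc/kubernetes/event-rate-limit.yaml   # example path
+---
+# File 2 (example): /etc/kubernetes/event-rate-limit.yaml
+# EventRateLimit configuration with a server-wide limit; values are illustrative.
+apiVersion: eventratelimit.admission.k8s.io/v1alpha1
+kind: Configuration
+limits:
+- type: Server
+  qps: 5000
+  burst: 20000
+```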
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'EventRateLimit'
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
+value that does not include AlwaysAdmit.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to include
+AlwaysPullImages.
+--enable-admission-plugins=...,AlwaysPullImages,...
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'AlwaysPullImages'
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to include
+SecurityContextDeny, unless PodSecurityPolicy is already in place.
+--enable-admission-plugins=...,SecurityContextDeny,...
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and create ServiceAccount objects as per your environment.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
+value that does not include ServiceAccount.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --disable-admission-plugins parameter to
+ensure it does not include NamespaceLifecycle.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
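The expected result above (`--disable-admission-plugins` absent, or present without `NamespaceLifecycle`) can also be checked mechanically. A minimal sketch, run against a captured command line rather than a live node; the shortened `cmdline` string below is a hypothetical stand-in for the output of the Audit command:

```shell
# Hypothetical, shortened stand-in for `ps -ef | grep kube-apiserver` output.
cmdline='kube-apiserver --profiling=false --enable-admission-plugins=NamespaceLifecycle,NodeRestriction'

# Value of --disable-admission-plugins; empty when the flag is absent.
disabled=$(printf '%s\n' "$cmdline" | tr ' ' '\n' \
  | awk -F= '$1=="--disable-admission-plugins" {print $2}')

# The check passes when the flag is absent or does not list NamespaceLifecycle.
result=pass
case ",$disabled," in
  *,NamespaceLifecycle,*) result=fail ;;
esac
echo "$result"
```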
+
+### 1.2.16 Ensure that the admission control plugin NodeRestriction is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to a
+value that includes NodeRestriction.
+--enable-admission-plugins=...,NodeRestriction,...
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'NodeRestriction'
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
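The membership test this check performs (`NodeRestriction` must appear in the comma-separated `--enable-admission-plugins` list) can be sketched as a small shell helper; the shortened `cmdline` string is a hypothetical stand-in for the captured `ps` output:

```shell
# Hypothetical, shortened stand-in for the captured kube-apiserver command line.
cmdline='kube-apiserver --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,ServiceAccount'

# Pull out the comma-separated plugin list for --enable-admission-plugins.
enabled=$(printf '%s\n' "$cmdline" | tr ' ' '\n' \
  | awk -F= '$1=="--enable-admission-plugins" {print $2}')

# Membership test on the comma list; wrapping in commas matches whole names only.
case ",$enabled," in
  *,NodeRestriction,*) result=pass ;;
  *)                   result=fail ;;
esac
echo "$result"
```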
+
+### 1.2.17 Ensure that the --secure-port argument is not set to 0 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and either remove the --secure-port parameter or
+set it to a different (non-zero) desired port.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--secure-port' is greater than 0 OR '--secure-port' is not present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
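This check accepts either an absent flag (the default port 6443 then applies) or any non-zero value. A minimal sketch of that logic, using a hypothetical shortened `cmdline` stand-in for the captured `ps` output:

```shell
# Hypothetical, shortened stand-in for the captured kube-apiserver command line.
cmdline='kube-apiserver --secure-port=6443 --profiling=false'

# Value of --secure-port; empty when the flag is absent.
port=$(printf '%s\n' "$cmdline" | tr ' ' '\n' \
  | awk -F= '$1=="--secure-port" {print $2}')

# Pass when the flag is absent (default 6443 applies) or set to a non-zero port.
if [ -z "$port" ] || [ "$port" -gt 0 ]; then result=pass; else result=fail; fi
echo "$result"
```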
+
+### 1.2.18 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.19 Ensure that the --audit-log-path argument is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-path parameter to a suitable path and
+file where you would like audit logs to be written, for example,
+--audit-log-path=/var/log/apiserver/audit.log
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-path' is present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.20 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxage parameter to 30
+or as an appropriate number of days, for example,
+--audit-log-maxage=30
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxage' is greater or equal to 30
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.21 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
+value. For example,
+--audit-log-maxbackup=10
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxbackup' is greater or equal to 10
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.22 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
+For example, to set it to 100 MB: --audit-log-maxsize=100
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxsize' is greater or equal to 100
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
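The three audit-log retention checks (1.2.20 through 1.2.22) all reduce to numeric threshold comparisons. A minimal combined sketch; the shortened `cmdline` string is a hypothetical stand-in for the captured `ps` output:

```shell
# Hypothetical, shortened stand-in for the captured kube-apiserver command line.
cmdline='kube-apiserver --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100'

# Extract the value of a given flag from the command line (empty when absent).
flag() { printf '%s\n' "$cmdline" | tr ' ' '\n' | awk -F= -v f="$1" '$1==f {print $2}'; }

# Each retention value must meet or exceed the benchmark threshold.
result=pass
[ "$(flag --audit-log-maxage)"    -ge 30  ] || result=fail
[ "$(flag --audit-log-maxbackup)" -ge 10  ] || result=fail
[ "$(flag --audit-log-maxsize)"   -ge 100 ] || result=fail
echo "$result"
```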
+
+### 1.2.24 Ensure that the --service-account-lookup argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--service-account-lookup=true
+Alternatively, you can delete the --service-account-lookup parameter from this file so
+that the default takes effect.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
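This check uses an "absent or explicitly true" rule, since the flag defaults to true. A minimal sketch of that logic; the shortened `cmdline` string is a hypothetical stand-in for the captured `ps` output:

```shell
# Hypothetical, shortened stand-in for the captured kube-apiserver command line.
cmdline='kube-apiserver --service-account-lookup=true --secure-port=6443'

# Value of --service-account-lookup; empty when the flag is absent.
value=$(printf '%s\n' "$cmdline" | tr ' ' '\n' \
  | awk -F= '$1=="--service-account-lookup" {print $2}')

# Pass when the flag is absent (the default is true) or explicitly true.
if [ -z "$value" ] || [ "$value" = "true" ]; then result=pass; else result=fail; fi
echo "$result"
```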
+
+### 1.2.25 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --service-account-key-file parameter
+to the public key file for service accounts. For example,
+--service-account-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.26 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate and key file parameters.
+--etcd-certfile=
+--etcd-keyfile=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--etcd-certfile' is present AND '--etcd-keyfile' is present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
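
The audit commands above print the entire kube-apiserver command line, which is hard to scan by eye. As a convenience, a single flag's value can be pulled out of such a command line; the helper below is a hypothetical sketch for inspecting audit output, not part of the benchmark tooling, and the `cmdline` value is a shortened stand-in for the real `ps` output.

```shell
# get_flag CMDLINE FLAG: print the value of --FLAG= from a space-separated command line.
# Hypothetical helper for eyeballing audit output; not part of the CIS audit itself.
get_flag() {
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n "s/^--$2=//p"
}

# Shortened example command line (a real one has many more flags).
cmdline="kube-apiserver --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --secure-port=6443"
get_flag "$cmdline" "etcd-certfile"   # prints /etc/kubernetes/ssl/kube-node.pem
```

A flag that is absent simply produces no output, which makes the helper easy to use in presence checks as well.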
+
+### 1.2.27 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the TLS certificate and private key file parameters.
+--tls-cert-file=
+--tls-private-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--tls-cert-file' is present AND '--tls-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.28 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the client certificate authority file.
+--client-ca-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--client-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.29 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate authority file parameter.
+--etcd-cafile=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--etcd-cafile' is present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 1.2.30 Ensure that the --encryption-provider-config argument is set as appropriate (Automated)

+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --encryption-provider-config parameter to the path of that file.
+For example, --encryption-provider-config=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--encryption-provider-config' is present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
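
For reference, an EncryptionConfiguration file for the aescbc provider typically looks like the sketch below. It follows the upstream Kubernetes format and is not the exact file used by this cluster; the placeholder secret must be replaced with a real 32-byte, base64-encoded key.

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # aescbc listed first, so it is used for writes;
      # identity allows reading any secrets not yet encrypted.
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}
```

The file is then referenced by the API server via `--encryption-provider-config`, as checked above.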
+
+### 1.2.31 Ensure that encryption providers are appropriately configured (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+In this file, choose aescbc, kms or secretbox as the encryption provider.
+
+**Audit Script:** `check_encryption_provider_config.sh`
+
+```bash
+#!/usr/bin/env bash
+
+# This script is used to check that the encryption provider config is set to aescbc
+#
+# outputs:
+# true/false
+
+# TODO: Figure out the file location from the kube-apiserver commandline args
+ENCRYPTION_CONFIG_FILE="/node/etc/kubernetes/ssl/encryption.yaml"
+
+if [[ ! -f "${ENCRYPTION_CONFIG_FILE}" ]]; then
+ echo "false"
+ exit
+fi
+
+for provider in "$@"
+do
+ if grep "$provider" "${ENCRYPTION_CONFIG_FILE}"; then
+ echo "true"
+ exit
+ fi
+done
+
+echo "false"
+exit
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_encryption_provider_config.sh aescbc
+```
+
+**Expected Result**:
+
+```console
+'true' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+- aescbc: true
+```
+
+### 1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
+TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
+TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
+TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 
--service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
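
The check above passes when every suite configured via `--tls-cipher-suites` appears in the allowed list. A hypothetical way to reproduce that set-membership comparison outside the benchmark tool (the variable values here are illustrative, not read from a live cluster):

```shell
# Verify that every configured cipher suite is in the allowed set.
# Illustrative values; on a real node, `configured` would come from --tls-cipher-suites.
configured="TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
allowed=" TLS_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 "

result=pass
for suite in $(printf '%s' "$configured" | tr ',' ' '); do
  case "$allowed" in
    *" $suite "*) ;;          # suite is in the allowed list
    *) result=fail ;;         # unlisted (possibly weak) suite found
  esac
done
echo "$result"                # prints pass
```

Note that the comparison is one-directional: the configured set only has to be a subset of the allowed set, not equal to it.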
+
+## 1.3 Controller Manager
+### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
+for example, --terminated-pod-gc-threshold=10
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--terminated-pod-gc-threshold' is present
+```
+
+**Returned Value**:
+
+```console
+root 3690 3671 1 Sep11 ? 00:20:42 kube-controller-manager --service-cluster-ip-range=10.43.0.0/16 --configure-cloud-routes=false --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --v=2 --pod-eviction-timeout=5m0s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --feature-gates=RotateKubeletServerCertificate=true --leader-elect=true --profiling=false --node-monitor-grace-period=40s --allow-untagged-cloud=true --use-service-account-credentials=true
+```
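
The benchmark only checks that the flag is present; whether the threshold value itself is "appropriate" is a judgment call. A hypothetical numeric sanity check on the value, using a shortened stand-in for the real `ps` output:

```shell
# Extract --terminated-pod-gc-threshold and confirm it is a positive integer.
# Shortened example command line; a real one has many more flags.
cmdline="kube-controller-manager --terminated-pod-gc-threshold=1000 --profiling=false"
threshold=$(printf '%s\n' "$cmdline" | tr ' ' '\n' | sed -n 's/^--terminated-pod-gc-threshold=//p')

if [ -n "$threshold" ] && [ "$threshold" -gt 0 ] 2>/dev/null; then
  echo "pass: threshold=$threshold"
else
  echo "fail: flag missing or not a positive integer"
fi
```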
+
+### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 3690 3671 1 Sep11 ? 00:20:42 kube-controller-manager --service-cluster-ip-range=10.43.0.0/16 --configure-cloud-routes=false --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --v=2 --pod-eviction-timeout=5m0s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --feature-gates=RotateKubeletServerCertificate=true --leader-elect=true --profiling=false --node-monitor-grace-period=40s --allow-untagged-cloud=true --use-service-account-credentials=true
+```
+
+### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node to set the below parameter.
+--use-service-account-credentials=true
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--use-service-account-credentials' is not equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 3690 3671 1 Sep11 ? 00:20:42 kube-controller-manager --service-cluster-ip-range=10.43.0.0/16 --configure-cloud-routes=false --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --v=2 --pod-eviction-timeout=5m0s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --feature-gates=RotateKubeletServerCertificate=true --leader-elect=true --profiling=false --node-monitor-grace-period=40s --allow-untagged-cloud=true --use-service-account-credentials=true
+```
+
+### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --service-account-private-key-file parameter
+to the private key file for service accounts.
+--service-account-private-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 3690 3671 1 Sep11 ? 00:20:42 kube-controller-manager --service-cluster-ip-range=10.43.0.0/16 --configure-cloud-routes=false --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --v=2 --pod-eviction-timeout=5m0s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --feature-gates=RotateKubeletServerCertificate=true --leader-elect=true --profiling=false --node-monitor-grace-period=40s --allow-untagged-cloud=true --use-service-account-credentials=true
+```
+
+### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
+--root-ca-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--root-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 3690 3671 1 Sep11 ? 00:20:42 kube-controller-manager --service-cluster-ip-range=10.43.0.0/16 --configure-cloud-routes=false --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --v=2 --pod-eviction-timeout=5m0s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --feature-gates=RotateKubeletServerCertificate=true --leader-elect=true --profiling=false --node-monitor-grace-period=40s --allow-untagged-cloud=true --use-service-account-credentials=true
+```
+
+### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
+--feature-gates=RotateKubeletServerCertificate=true
+
+Clusters provisioned by RKE handle certificate rotation directly through RKE.
+
+### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and ensure the correct value for the --bind-address parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--bind-address' is present OR '--bind-address' is not present
+```
+
+**Returned Value**:
+
+```console
+root 3690 3671 1 Sep11 ? 00:20:42 kube-controller-manager --service-cluster-ip-range=10.43.0.0/16 --configure-cloud-routes=false --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --v=2 --pod-eviction-timeout=5m0s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --feature-gates=RotateKubeletServerCertificate=true --leader-elect=true --profiling=false --node-monitor-grace-period=40s --allow-untagged-cloud=true --use-service-account-credentials=true
+```
+
+## 1.4 Scheduler
+### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-scheduler | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 3859 3838 0 Sep11 ? 00:03:44 kube-scheduler --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+```
+
+### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
+on the control plane node and ensure the correct value for the --bind-address parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-scheduler | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--bind-address' is present OR '--bind-address' is not present
+```
+
+**Returned Value**:
+
+```console
+root 3859 3838 0 Sep11 ? 00:03:44 kube-scheduler --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+```
+
+## 2 Etcd Node Configuration
+### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the etcd service documentation and configure TLS encryption.
+Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
+on the master node and set the below parameters.
+--cert-file=
+--key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--cert-file' is present AND '--key-file' is present
+```
+
+**Returned Value**:
+
+```console
+etcd 3369 3348 2 Sep11 ? 00:26:05 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-cluster-token=etcd-cluster-1 --initial-cluster=etcd-ip-172-31-7-100=https://172.31.7.100:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100-key.pem --client-cert-auth=true --election-timeout=5000 --name=etcd-ip-172-31-7-100 --listen-client-urls=https://0.0.0.0:2379 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-state=new --advertise-client-urls=https://172.31.7.100:2379 --heartbeat-interval=500 --initial-advertise-peer-urls=https://172.31.7.100:2380 --listen-peer-urls=https://0.0.0.0:2380 --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100-key.pem --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 root 3528 3509 7 Sep11 ? 01:24:08 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s root 1057543 1057522 5 16:15 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.24-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
+```
+
+### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
+node and set the below parameter.
+--client-cert-auth="true"
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--client-cert-auth' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+etcd 3369 3348 2 Sep11 ? 00:26:05 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-cluster-token=etcd-cluster-1 --initial-cluster=etcd-ip-172-31-7-100=https://172.31.7.100:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100-key.pem --client-cert-auth=true --election-timeout=5000 --name=etcd-ip-172-31-7-100 --listen-client-urls=https://0.0.0.0:2379 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-state=new --advertise-client-urls=https://172.31.7.100:2379 --heartbeat-interval=500 --initial-advertise-peer-urls=https://172.31.7.100:2380 --listen-peer-urls=https://0.0.0.0:2380 --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100-key.pem --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 root 3528 3509 7 Sep11 ? 01:24:08 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s root 1057543 1057522 4 16:15 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.24-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
+```
+
+### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
+node and either remove the --auto-tls parameter or set it to false.
+--auto-tls=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
+```
+
+**Returned Value**:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-7-100 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-7-100.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-7-100-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/
+```
+
+### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the etcd service documentation and configure peer TLS encryption as appropriate
+for your etcd cluster.
+Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
+master node and set the below parameters.
+--peer-cert-file=
+--peer-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--peer-cert-file' is present AND '--peer-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+etcd 3369 3348 2 Sep11 ? 00:26:05 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-cluster-token=etcd-cluster-1 --initial-cluster=etcd-ip-172-31-7-100=https://172.31.7.100:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100-key.pem --client-cert-auth=true --election-timeout=5000 --name=etcd-ip-172-31-7-100 --listen-client-urls=https://0.0.0.0:2379 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-state=new --advertise-client-urls=https://172.31.7.100:2379 --heartbeat-interval=500 --initial-advertise-peer-urls=https://172.31.7.100:2380 --listen-peer-urls=https://0.0.0.0:2380 --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100-key.pem --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s root 1057543 1057522 2 16:15 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.24-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
+```
+
+### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
+node and set the below parameter.
+--peer-client-cert-auth=true
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--peer-client-cert-auth' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+etcd 3369 3348 2 Sep11 ? 00:26:05 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-cluster-token=etcd-cluster-1 --initial-cluster=etcd-ip-172-31-7-100=https://172.31.7.100:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100-key.pem --client-cert-auth=true --election-timeout=5000 --name=etcd-ip-172-31-7-100 --listen-client-urls=https://0.0.0.0:2379 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-state=new --advertise-client-urls=https://172.31.7.100:2379 --heartbeat-interval=500 --initial-advertise-peer-urls=https://172.31.7.100:2380 --listen-peer-urls=https://0.0.0.0:2380 --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100-key.pem --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s root 1057543 1057522 2 16:15 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.24-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
+```
+
+### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
+node and either remove the --peer-auto-tls parameter or set it to false.
+--peer-auto-tls=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
+```
+
+**Returned Value**:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-7-100 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-7-100.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-7-100-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/
+```
+
+### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+[Manual test]
+Follow the etcd documentation and create a dedicated certificate authority setup for the
+etcd service.
+Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
+master node and set the below parameter.
+--trusted-ca-file=
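+
+A dedicated CA can be produced with the `openssl` CLI. The sketch below is illustrative and applies to manually managed setups; file names are hypothetical, and clusters provisioned by RKE manage their own CA material under `/etc/kubernetes/ssl`:
+
+```bash
+# Generate a CA key and a self-signed CA certificate used only for etcd.
+# File names here are illustrative, not RKE defaults.
+openssl genrsa -out etcd-ca-key.pem 2048
+openssl req -x509 -new -nodes -key etcd-ca-key.pem \
+  -subj "/CN=etcd-ca" -days 3650 -out etcd-ca.pem
+```
+
+The resulting `etcd-ca.pem` would then be referenced by `--trusted-ca-file`, and etcd client and peer certificates would be signed by this CA rather than the cluster-wide one.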
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--trusted-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+etcd 3369 3348 2 Sep11 ? 00:26:05 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-cluster-token=etcd-cluster-1 --initial-cluster=etcd-ip-172-31-7-100=https://172.31.7.100:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100-key.pem --client-cert-auth=true --election-timeout=5000 --name=etcd-ip-172-31-7-100 --listen-client-urls=https://0.0.0.0:2379 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-state=new --advertise-client-urls=https://172.31.7.100:2379 --heartbeat-interval=500 --initial-advertise-peer-urls=https://172.31.7.100:2380 --listen-peer-urls=https://0.0.0.0:2380 --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-7-100-key.pem --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 root 3528 3509 7 Sep11 ? 01:24:09 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s root 1057543 1057522 2 16:15 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.24-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
+```
+
+## 3.1 Authentication and Authorization
+### 3.1.1 Client certificate authentication should not be used for users (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
+implemented in place of client certificates.
+
+## 3.2 Logging
+### 3.2.1 Ensure that a minimal audit policy is created (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Create an audit policy file for your cluster.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-policy-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 3528 3509 7 Sep11 ? 01:24:10 kube-apiserver --service-cluster-ip-range=10.43.0.0/16 --anonymous-auth=false --profiling=false --advertise-address=172.31.7.100 --audit-log-maxsize=100 --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --etcd-prefix=/registry --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --audit-policy-file=/etc/kubernetes/audit-policy.yaml --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --storage-backend=etcd3 --service-node-port-range=30000-32767 --bind-address=0.0.0.0 --api-audiences=unknown --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --runtime-config=policy/v1beta1/podsecuritypolicy=true --allow-privileged=true --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --service-account-issuer=rke --requestheader-allowed-names=kube-apiserver-proxy-client --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-format=json --authorization-mode=Node,RBAC --etcd-servers=https://172.31.7.100:2379 --service-account-lookup=true --secure-port=6443 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --audit-log-maxage=30 --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --audit-log-maxbackup=10 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --authentication-token-webhook-cache-ttl=5s
+```
+
+### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the audit policy provided for the cluster and ensure that it covers
+at least the following areas:
+- Access to Secrets managed by the cluster. Care should be taken to only
+ log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
+ order to avoid risk of logging sensitive data.
+- Modification of Pod and Deployment objects.
+- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
+For most requests, minimally logging at the Metadata level is recommended
+(the most basic level of logging).
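+
+As an illustration only (not the policy RKE ships), a policy covering those areas might look like the following sketch; adapt the resource groups and levels to your cluster before use:
+
+```yaml
+apiVersion: audit.k8s.io/v1
+kind: Policy
+rules:
+  # Metadata only for Secrets, ConfigMaps, and TokenReviews, to avoid logging payloads.
+  - level: Metadata
+    resources:
+      - group: ""
+        resources: ["secrets", "configmaps"]
+      - group: "authentication.k8s.io"
+        resources: ["tokenreviews"]
+  # Record modification of Pod and Deployment objects at the Request level.
+  - level: Request
+    verbs: ["create", "update", "patch", "delete"]
+    resources:
+      - group: ""
+        resources: ["pods"]
+      - group: "apps"
+        resources: ["deployments"]
+  # Capture use of the interactive subresources named above.
+  - level: Metadata
+    resources:
+      - group: ""
+        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
+  # Everything else at the basic Metadata level.
+  - level: Metadata
+```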
+
+## 4.1 Worker Node Configuration Files
+### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service.
+All configuration is passed in as arguments at container run time.
+
+### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service.
+All configuration is passed in as arguments at container run time.
+
+### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the following command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the following command (based on the file location on your system) on each worker node.
+For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
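+
+Checks 4.1.3–4.1.6 all follow the same stat-and-fix pattern. As a convenience, the hypothetical helper below (not part of the benchmark tooling) normalizes each file to mode 600 and `root:root` in one pass, using the RKE default paths from the Audit commands; run it as root on each worker node.
+
+```shell
+#!/bin/sh
+# Hypothetical helper combining the mode and ownership fixes from 4.1.3-4.1.6.
+# Skips files that don't exist, mirroring the `if test -e` guard in the audits.
+audit_file() {
+  f="$1"
+  [ -e "$f" ] || { echo "$f: not present, skipping"; return 0; }
+  [ "$(stat -c %a "$f")" = "600" ] || chmod 600 "$f"
+  [ "$(stat -c %U:%G "$f")" = "root:root" ] || chown root:root "$f"
+  echo "$f -> $(stat -c 'permissions=%a %U:%G' "$f")"
+}
+audit_file /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
+audit_file /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
+```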
+
+### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the following command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the following command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the following command to modify the file permissions of the
+`--client-ca-file`. For example,
+chmod 600 /node/etc/kubernetes/ssl/kube-ca.pem
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
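+
+This is the only failing check in this section (mode 644). A minimal remediation sketch, using the same path as the Audit command, tightens the CA file and re-runs the audit; run it as root on each node:
+
+```shell
+#!/bin/sh
+# Tighten the CA file flagged by 4.1.7, then repeat the audit command;
+# after the chmod the audit should report permissions=600 and pass.
+f=/node/etc/kubernetes/ssl/kube-ca.pem
+if [ -e "$f" ]; then
+  chmod 600 "$f"
+  stat -c permissions=%a "$f"   # prints: permissions=600
+else
+  echo "$f: not present on this host"
+fi
+```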
+
+### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the following command to modify the ownership of the `--client-ca-file`.
+For example, chown root:root /node/etc/kubernetes/ssl/kube-ca.pem
+
+**Audit:**
+
+```bash
+stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chmod 600 /var/lib/kubelet/config.yaml
+
+Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet.
+All configuration is passed in as arguments at container run time.
+
+### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chown root:root /var/lib/kubelet/config.yaml
+
+Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet.
+All configuration is passed in as arguments at container run time.
+
+## 4.2 Kubelet
+### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
+`false`.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+`--anonymous-auth=false`
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4429 4031 3 Sep11 ? 00:38:25 kubelet --read-only-port=0 --event-qps=0 --root-dir=/var/lib/kubelet --pod-infra-container-image=rancher/mirrored-pause:3.7 --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --cloud-provider= --address=0.0.0.0 --protect-kernel-defaults=true --hostname-override=ip-172-31-7-100 --resolv-conf=/etc/resolv.conf --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100.pem --make-iptables-util-chains=true --feature-gates=RotateKubeletServerCertificate=true --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --container-runtime=remote --v=2 --anonymous-auth=false --authentication-token-webhook=true --fail-swap-on=false --cgroups-per-qos=True --authorization-mode=Webhook --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
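+
+Every 4.2.x audit above compares one flag against the same long `ps` output. A small helper (hypothetical, for convenience only) makes the flag-by-flag check easier to read; the `cmdline` value here is a shortened stand-in for the real `/bin/ps -fC kubelet` output:
+
+```shell
+#!/bin/sh
+# Print the value of a single --flag=value pair from a command line.
+flag_value() {
+  echo "$2" | tr ' ' '\n' | sed -n "s/^$1=//p"
+}
+# Shortened stand-in for the kubelet command line captured by the audit.
+cmdline='kubelet --read-only-port=0 --anonymous-auth=false --authorization-mode=Webhook'
+flag_value --anonymous-auth "$cmdline"      # prints: false
+flag_value --authorization-mode "$cmdline"  # prints: Webhook
+```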
+
+### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
+using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_AUTHZ_ARGS variable.
+--authorization-mode=Webhook
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' does not have 'AlwaysAllow'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4429 4031 3 Sep11 ? 00:38:25 kubelet --read-only-port=0 --event-qps=0 --root-dir=/var/lib/kubelet --pod-infra-container-image=rancher/mirrored-pause:3.7 --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --cloud-provider= --address=0.0.0.0 --protect-kernel-defaults=true --hostname-override=ip-172-31-7-100 --resolv-conf=/etc/resolv.conf --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100.pem --make-iptables-util-chains=true --feature-gates=RotateKubeletServerCertificate=true --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --container-runtime=remote --v=2 --anonymous-auth=false --authentication-token-webhook=true --fail-swap-on=false --cgroups-per-qos=True --authorization-mode=Webhook --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
+the location of the client CA file.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_AUTHZ_ARGS variable.
+`--client-ca-file=<path/to/client-ca-file>`
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--client-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4429 4031 3 Sep11 ? 00:38:25 kubelet --read-only-port=0 --event-qps=0 --root-dir=/var/lib/kubelet --pod-infra-container-image=rancher/mirrored-pause:3.7 --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --cloud-provider= --address=0.0.0.0 --protect-kernel-defaults=true --hostname-override=ip-172-31-7-100 --resolv-conf=/etc/resolv.conf --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100.pem --make-iptables-util-chains=true --feature-gates=RotateKubeletServerCertificate=true --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --container-runtime=remote --v=2 --anonymous-auth=false --authentication-token-webhook=true --fail-swap-on=false --cgroups-per-qos=True --authorization-mode=Webhook --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+--read-only-port=0
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--read-only-port' is equal to '0' OR '--read-only-port' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4429 4031 3 Sep11 ? 00:38:25 kubelet --read-only-port=0 --event-qps=0 --root-dir=/var/lib/kubelet --pod-infra-container-image=rancher/mirrored-pause:3.7 --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --cloud-provider= --address=0.0.0.0 --protect-kernel-defaults=true --hostname-override=ip-172-31-7-100 --resolv-conf=/etc/resolv.conf --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100.pem --make-iptables-util-chains=true --feature-gates=RotateKubeletServerCertificate=true --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --container-runtime=remote --v=2 --anonymous-auth=false --authentication-token-webhook=true --fail-swap-on=false --cgroups-per-qos=True --authorization-mode=Webhook --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
+value other than 0.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+--streaming-connection-idle-timeout=5m
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4429 4031 3 Sep11 ? 00:38:25 kubelet --read-only-port=0 --event-qps=0 --root-dir=/var/lib/kubelet --pod-infra-container-image=rancher/mirrored-pause:3.7 --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --cloud-provider= --address=0.0.0.0 --protect-kernel-defaults=true --hostname-override=ip-172-31-7-100 --resolv-conf=/etc/resolv.conf --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100.pem --make-iptables-util-chains=true --feature-gates=RotateKubeletServerCertificate=true --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --container-runtime=remote --v=2 --anonymous-auth=false --authentication-token-webhook=true --fail-swap-on=false --cgroups-per-qos=True --authorization-mode=Webhook --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+--protect-kernel-defaults=true
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--protect-kernel-defaults' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4429 4031 3 Sep11 ? 00:38:25 kubelet --read-only-port=0 --event-qps=0 --root-dir=/var/lib/kubelet --pod-infra-container-image=rancher/mirrored-pause:3.7 --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --cloud-provider= --address=0.0.0.0 --protect-kernel-defaults=true --hostname-override=ip-172-31-7-100 --resolv-conf=/etc/resolv.conf --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100.pem --make-iptables-util-chains=true --feature-gates=RotateKubeletServerCertificate=true --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --container-runtime=remote --v=2 --anonymous-auth=false --authentication-token-webhook=true --fail-swap-on=false --cgroups-per-qos=True --authorization-mode=Webhook --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
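+
+For context, `--protect-kernel-defaults=true` makes the kubelet refuse to start when certain kernel settings differ from the values it expects. The sketch below checks a commonly cited subset of those settings; the list is an assumption, so confirm it against your kubelet version's documentation:
+
+```shell
+#!/bin/sh
+# Compare kernel settings against the values the kubelet is assumed to expect
+# when --protect-kernel-defaults=true is set. Informational only.
+for kv in vm.overcommit_memory=1 vm.panic_on_oom=0 kernel.panic=10 kernel.panic_on_oops=1; do
+  key=${kv%%=*}
+  want=${kv#*=}
+  have=$(sysctl -n "$key" 2>/dev/null || echo "unreadable")
+  if [ "$have" = "$want" ]; then
+    echo "$key=$have ok"
+  else
+    echo "$key=$have (kubelet expects $want)"
+  fi
+done
+```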
+
+### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove the --make-iptables-util-chains argument from the
+KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4429 4031 3 Sep11 ? 00:38:25 kubelet --read-only-port=0 --event-qps=0 --root-dir=/var/lib/kubelet --pod-infra-container-image=rancher/mirrored-pause:3.7 --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --cloud-provider= --address=0.0.0.0 --protect-kernel-defaults=true --hostname-override=ip-172-31-7-100 --resolv-conf=/etc/resolv.conf --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100.pem --make-iptables-util-chains=true --feature-gates=RotateKubeletServerCertificate=true --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --container-runtime=remote --v=2 --anonymous-auth=false --authentication-token-webhook=true --fail-swap-on=false --cgroups-per-qos=True --authorization-mode=Webhook --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and remove the --hostname-override argument from the
+KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+Clusters provisioned by RKE set `--hostname-override` to avoid hostname configuration errors.
+
+### 4.2.9 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--event-qps' is equal to '0'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4429 4031 3 Sep11 ? 00:38:25 kubelet --read-only-port=0 --event-qps=0 --root-dir=/var/lib/kubelet --pod-infra-container-image=rancher/mirrored-pause:3.7 --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --cloud-provider= --address=0.0.0.0 --protect-kernel-defaults=true --hostname-override=ip-172-31-7-100 --resolv-conf=/etc/resolv.conf --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100.pem --make-iptables-util-chains=true --feature-gates=RotateKubeletServerCertificate=true --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --container-runtime=remote --v=2 --anonymous-auth=false --authentication-token-webhook=true --fail-swap-on=false --cgroups-per-qos=True --authorization-mode=Webhook --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
+of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
+to the location of the corresponding private key file.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
+`--tls-cert-file=<path/to/tls-certificate-file>`
+`--tls-private-key-file=<path/to/tls-key-file>`
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--tls-cert-file' is present AND '--tls-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4429 4031 3 Sep11 ? 00:38:25 kubelet --read-only-port=0 --event-qps=0 --root-dir=/var/lib/kubelet --pod-infra-container-image=rancher/mirrored-pause:3.7 --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --cloud-provider= --address=0.0.0.0 --protect-kernel-defaults=true --hostname-override=ip-172-31-7-100 --resolv-conf=/etc/resolv.conf --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100.pem --make-iptables-util-chains=true --feature-gates=RotateKubeletServerCertificate=true --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --container-runtime=remote --v=2 --anonymous-auth=false --authentication-token-webhook=true --fail-swap-on=false --cgroups-per-qos=True --authorization-mode=Webhook --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `rotateCertificates` to `true`, or
+remove it altogether to use the default value.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
+variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--rotate-certificates' is present OR '--rotate-certificates' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4429 4031 3 Sep11 ? 00:38:25 kubelet --read-only-port=0 --event-qps=0 --root-dir=/var/lib/kubelet --pod-infra-container-image=rancher/mirrored-pause:3.7 --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --cloud-provider= --address=0.0.0.0 --protect-kernel-defaults=true --hostname-override=ip-172-31-7-100 --resolv-conf=/etc/resolv.conf --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100.pem --make-iptables-util-chains=true --feature-gates=RotateKubeletServerCertificate=true --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --container-runtime=remote --v=2 --anonymous-auth=false --authentication-token-webhook=true --fail-swap-on=false --cgroups-per-qos=True --authorization-mode=Webhook --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
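+
+Rotation can also be sanity-checked outside the benchmark by reading the serving certificate's expiry; the path below is taken from the `--tls-cert-file` value in the Returned Value above and will differ on your nodes:
+
+```shell
+#!/bin/sh
+# Print the expiry of the kubelet serving certificate; rotation should renew
+# the file well before this date. Path from the --tls-cert-file flag above.
+cert=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100.pem
+if [ -e "$cert" ]; then
+  openssl x509 -noout -enddate -in "$cert"
+else
+  echo "$cert: not present on this host"
+fi
+```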
+
+### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
+--feature-gates=RotateKubeletServerCertificate=true
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+Clusters provisioned by RKE handle certificate rotation directly through RKE.
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
+TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+or to a subset of these values.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the --tls-cipher-suites parameter as follows, or to a subset of these values.
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4429 4031 3 Sep11 ? 00:38:25 kubelet --read-only-port=0 --event-qps=0 --root-dir=/var/lib/kubelet --pod-infra-container-image=rancher/mirrored-pause:3.7 --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100-key.pem --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --cloud-provider= --address=0.0.0.0 --protect-kernel-defaults=true --hostname-override=ip-172-31-7-100 --resolv-conf=/etc/resolv.conf --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-7-100.pem --make-iptables-util-chains=true --feature-gates=RotateKubeletServerCertificate=true --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --container-runtime=remote --v=2 --anonymous-auth=false --authentication-token-webhook=true --fail-swap-on=false --cgroups-per-qos=True --authorization-mode=Webhook --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+## 5.1 RBAC and Service Accounts
+### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
+if they need this role or if they could use a role with fewer privileges.
+Where possible, first bind users to a lower-privileged role and then remove the
+clusterrolebinding to the cluster-admin role:
+kubectl delete clusterrolebinding [name]
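The bindings to review can be enumerated with `kubectl` and `jq`, both of which this guide already requires. The sketch below runs the `jq` filter against inline sample JSON (the binding names are made up) so the expression can be checked without a live cluster:

```shell
#!/bin/sh
# Sample of the shape returned by `kubectl get clusterrolebindings -o json`.
# The binding names here are hypothetical.
bindings='{"items":[
  {"metadata":{"name":"cluster-admin"},"roleRef":{"kind":"ClusterRole","name":"cluster-admin"}},
  {"metadata":{"name":"read-only"},"roleRef":{"kind":"ClusterRole","name":"view"}}]}'

# Print the name of every ClusterRoleBinding that grants cluster-admin.
echo "$bindings" | jq -r '.items[] | select(.roleRef.name=="cluster-admin") | .metadata.name'
# prints: cluster-admin
```

On a real cluster, pipe `kubectl get clusterrolebindings -o json` into the same filter instead of the sample JSON.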
+
+### 5.1.2 Minimize access to secrets (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove get, list and watch access to Secret objects in the cluster.
+
+### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible replace any use of wildcards in clusterroles and roles with specific
+objects or actions.
+
+### 5.1.4 Minimize access to create pods (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove create access to pod objects in the cluster.
+
+### 5.1.5 Ensure that default service accounts are not actively used. (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Create explicit service accounts wherever a Kubernetes workload requires specific access
+to the Kubernetes API server.
+Modify the configuration of each default service account to include this value
+automountServiceAccountToken: false
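As a sketch, the remediated default ServiceAccount would look like the following manifest (the namespace `example-ns` is hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: example-ns  # hypothetical namespace
automountServiceAccountToken: false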
+
+**Audit Script:** `check_for_default_sa.sh`
+
+```bash
+#!/bin/bash
+
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l)
+if [[ ${count_sa} -gt 0 ]]; then
+ echo "false"
+ exit
+fi
+
+for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name")
+do
+ for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"')
+ do
+ read kind name <<<$(IFS=","; echo $result)
+ resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l)
+ if [[ ${resource_count} -gt 0 ]]; then
+ echo "false"
+ exit
+ fi
+ done
+done
+
+
+echo "true"
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_default_sa.sh
+```
+
+**Expected Result**:
+
+```console
+'true' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+Error from server (Forbidden): roles.rbac.authorization.k8s.io "default-psp-role" is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot get resource "roles" in API group "rbac.authorization.k8s.io" in the namespace "ingress-nginx" Error from server (Forbidden): roles.rbac.authorization.k8s.io "default-psp-role" is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot get resource "roles" in API group "rbac.authorization.k8s.io" in the namespace "ingress-nginx" Error from server (Forbidden): roles.rbac.authorization.k8s.io "default-psp-role" is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot get resource "roles" in API group "rbac.authorization.k8s.io" in the namespace "ingress-nginx" Error from server (Forbidden): roles.rbac.authorization.k8s.io "default-psp-role" is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot get resource "roles" in API group "rbac.authorization.k8s.io" in the namespace "ingress-nginx" Error from server (Forbidden): roles.rbac.authorization.k8s.io "default-psp-role" is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot get resource "roles" in API group "rbac.authorization.k8s.io" in the namespace "kube-system" Error from server (Forbidden): roles.rbac.authorization.k8s.io "default-psp-role" is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot get resource "roles" in API group "rbac.authorization.k8s.io" in the namespace "kube-system" Error from server (Forbidden): roles.rbac.authorization.k8s.io "default-psp-role" is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot get resource "roles" in API group "rbac.authorization.k8s.io" in the namespace "kube-system" Error from server (Forbidden): roles.rbac.authorization.k8s.io "default-psp-role" is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot get resource "roles" in API group "rbac.authorization.k8s.io" in the namespace "kube-system" true
+```
+
+### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Modify the definition of pods and service accounts which do not need to mount service
+account tokens to disable it.
+
+### 5.1.7 Avoid use of system:masters group (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Remove the system:masters group from all users in the cluster.
+
+### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove the impersonate, bind and escalate rights from subjects.
+
+## 5.2 Pod Security Standards
+### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that either Pod Security Admission or an external policy control system is in place
+for every namespace which contains user workloads.
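When Pod Security Admission is the chosen mechanism, it is enabled per namespace through labels; a minimal sketch (the namespace name and the `restricted` level are illustrative choices, not requirements of this check):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-ns  # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted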
+
+### 5.2.2 Minimize the admission of privileged containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of privileged containers.
+
+### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostPID` containers.
+
+**Audit:**
+
+```bash
+kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result**:
+
+```console
+'count' is greater than 0
+```
+
+**Returned Value**:
+
+```console
+Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ --count=1
+```
+
+### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostIPC` containers.
+
+**Audit:**
+
+```bash
+kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result**:
+
+```console
+'count' is greater than 0
+```
+
+**Returned Value**:
+
+```console
+Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ --count=1
+```
+
+### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostNetwork` containers.
+
+**Audit:**
+
+```bash
+kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result**:
+
+```console
+'count' is greater than 0
+```
+
+**Returned Value**:
+
+```console
+Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ --count=1
+```
+
+### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+**Audit:**
+
+```bash
+kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result**:
+
+```console
+'count' is greater than 0
+```
+
+**Returned Value**:
+
+```console
+Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ --count=1
+```
+
+### 5.2.7 Minimize the admission of root containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs` with a UID range that excludes 0 is set.
+
+### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+### 5.2.9 Minimize the admission of containers with added capabilities (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a PSP which forbids the admission of containers which do not drop all capabilities.
+
+### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
+
+### 5.2.12 Minimize the admission of HostPath volumes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `hostPath` volumes.
+
+### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers which use `hostPort` sections.
+
+## 5.3 Network Policies and CNI
+### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If the CNI plugin in use does not support network policies, consideration should be given to
+making use of a different plugin, or finding an alternate mechanism for restricting traffic
+in the Kubernetes cluster.
+
+### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and create NetworkPolicy objects as you need them.
+
+**Audit Script:** `check_for_network_policies.sh`
+
+```bash
+#!/bin/bash
+
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+for namespace in $(kubectl get namespaces --all-namespaces -o json | jq -r '.items[].metadata.name'); do
+ policy_count=$(kubectl get networkpolicy -n ${namespace} -o json | jq '.items | length')
+ if [[ ${policy_count} -eq 0 ]]; then
+ echo "false"
+ exit
+ fi
+done
+
+echo "true"
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_network_policies.sh
+```
+
+**Expected Result**:
+
+```console
+'true' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+true
+```
+
+## 5.4 Secrets Management
+### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If possible, rewrite application code to read Secrets from mounted secret files, rather than
+from environment variables.
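For reference, a Secret is consumed as files by mounting it as a volume instead of exposing it through `env`/`envFrom`; a minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app  # hypothetical pod
spec:
  containers:
  - name: app
    image: example/app:latest  # hypothetical image
    volumeMounts:
    - name: creds
      mountPath: /etc/creds  # Secret keys appear as files under this path
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: example-secret  # hypothetical Secret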
+
+### 5.4.2 Consider external secret storage (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Refer to the Secrets management options offered by your cloud provider or a third-party
+secrets management solution.
+
+## 5.5 Extensible Admission Control
+### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and setup image provenance.
+
+## 5.7 General Policies
+### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create namespaces for objects in your deployment as you need
+them.
+
+### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
+An example is shown below:
+ securityContext:
+ seccompProfile:
+ type: RuntimeDefault
+
+### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
+suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
+Containers.
+
+### 5.7.4 The default namespace should not be used (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
+resources and that all new resources are created in a specific namespace.
+
+**Audit Script:** `check_for_default_ns.sh`
+
+```bash
+#!/bin/bash
+
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+count=$(kubectl get all -n default -o json | jq .items[] | jq -r 'select((.metadata.name!="kubernetes"))' | jq .metadata.name | wc -l)
+if [[ ${count} -gt 0 ]]; then
+ echo "false"
+ exit
+fi
+
+echo "true"
+
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_default_ns.sh
+```
+
+**Expected Result**:
+
+```console
+'true' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+true
+```
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md
new file mode 100644
index 00000000000..5e5a6b1eed2
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md
@@ -0,0 +1,2862 @@
+---
+title: RKE 自我评估指南 - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27
+---
+
+
+
+
+
+本文档是 [RKE 加固指南](rke1-hardening-guide.md)的配套文档,该指南提供了关于如何加固正在生产环境中运行并由 Rancher 管理的 RKE 集群的指导方针。本 benchmark 指南可帮助你根据 CIS Kubernetes Benchmark 中的每个 control 来评估加固集群的安全性。
+
+本指南对应以下版本的 Rancher、CIS Benchmarks 和 Kubernetes:
+
+| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 |
+|-----------------|-----------------------|--------------------|
+| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 |
+
+本指南将介绍各种 controls,并提供更新的示例命令来审计 Rancher 创建的集群中的合规性。由于 Rancher 和 RKE 将 Kubernetes 服务安装为 Docker 容器,因此 CIS Kubernetes Benchmark 中的许多 control 验证检查不适用。这些检查将返回 `Not Applicable` 的结果。
+
+本文档适用于 Rancher 运维人员、安全团队、审计员和决策者。
+
+有关每个 control 的更多信息,包括详细描述和未通过测试的补救措施,请参考 CIS Kubernetes Benchmark v1.7 的相应部分。你可以在[互联网安全中心 (CIS)](https://www.cisecurity.org/benchmark/kubernetes/)创建免费账户后下载 benchmark。
+
+## 测试方法
+
+Rancher 和 RKE 通过 Docker 容器安装 Kubernetes 服务。配置是通过初始化时传递给容器的参数定义的,而不是通过配置文件。
+
+在 control 审计与原始 CIS benchmark 不同时,提供了针对 Rancher 的特定审计命令以进行测试。在执行测试时,你将需要访问所有 RKE 节点主机上的命令行。这些命令还使用了 [kubectl](https://kubernetes.io/docs/tasks/tools/)(带有有效的配置文件)和 [jq](https://stedolan.github.io/jq/) 工具,在测试和评估测试结果时这些工具是必需的。
+
+:::note
+
+本指南仅涵盖 `automated`(之前称为 `scored`)测试。
+
+:::
+
+### Controls
+
+## 1.1 Control Plane Node Configuration Files
+### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the
+control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-apiserver.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 /etc/kubernetes/manifests/etcd.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root /etc/kubernetes/manifests/etcd.yaml
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
+```
+
+**Expected Result**:
+
+```console
+'permissions' is present
+```
+
+### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root
+
+**Audit:**
+
+```bash
+ps -ef | grep kubelet | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above). For example,
+chmod 700 /var/lib/etcd
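The chmod/stat pairing behind this check can be exercised safely on a scratch directory created with `mktemp` (not on a real etcd data directory):

```shell
#!/bin/sh
# Demonstrate the remediation (chmod) and the audit (stat) on a throwaway directory.
d=$(mktemp -d)
chmod 700 "$d"   # remediation step
stat -c %a "$d"  # audit step; prints 700
rmdir "$d"
```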
+
+**Audit:**
+
+```bash
+stat -c %a /node/var/lib/etcd
+```
+
+**Expected Result**:
+
+```console
+'700' is equal to '700'
+```
+
+**Returned Value**:
+
+```console
+700
+```
+
+### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above).
+For example, chown etcd:etcd /var/lib/etcd
+
+**Audit:**
+
+```bash
+stat -c %U:%G /node/var/lib/etcd
+```
+
+**Expected Result**:
+
+```console
+'etcd:etcd' is present
+```
+
+**Returned Value**:
+
+```console
+etcd:etcd
+```
+
+### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/admin.conf
+Not Applicable - Cluster provisioned by RKE does not store the Kubernetes default kubeconfig credentials file on the nodes.
+
+### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/admin.conf
+Not Applicable - Cluster provisioned by RKE does not store the Kubernetes default kubeconfig credentials file on the nodes.
+
+### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 scheduler
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root scheduler
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 controllermanager
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root controllermanager
+Not Applicable - Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
+All configuration is passed in as arguments at container run time.
+
+### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown -R root:root /etc/kubernetes/pki/
+
+**Audit Script:** `check_files_owner_in_dir.sh`
+
+```bash
+#!/usr/bin/env bash
+
+# This script is used to ensure the owner is set to root:root for
+# the given directory and all the files in it
+#
+# inputs:
+# $1 = /full/path/to/directory
+#
+# outputs:
+# true/false
+
+INPUT_DIR=$1
+
+if [[ "${INPUT_DIR}" == "" ]]; then
+ echo "false"
+ exit
+fi
+
+if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then
+ echo "false"
+ exit
+fi
+
+statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*)
+while read -r statInfoLine; do
+ f=$(echo ${statInfoLine} | cut -d' ' -f1)
+ p=$(echo ${statInfoLine} | cut -d' ' -f2)
+
+ if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then
+ if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then
+ echo "false"
+ exit
+ fi
+ else
+ if [[ "$p" != "root:root" ]]; then
+ echo "false"
+ exit
+ fi
+ fi
+done <<< "${statInfoLines}"
+
+
+echo "true"
+exit
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl
+```
+
+**Expected Result**:
+
+```console
+'true' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+true
+```
+
+### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} +
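The find expression (match `*.pem` certificates while excluding `*key.pem` private keys) can be sanity-checked on scratch files before running it against `/node/etc/kubernetes/ssl/`:

```shell
#!/bin/sh
# Throwaway directory with one certificate and one key; the file names are illustrative.
d=$(mktemp -d)
touch "$d/kube-ca.pem" "$d/kube-ca-key.pem"
chmod 644 "$d"/*.pem

# Remediation: tighten certificates only; '! -name *key.pem' leaves the key file alone.
find "$d" -name '*.pem' ! -name '*key.pem' -exec chmod 600 {} +

# Audit: kube-ca.pem is now 600, kube-ca-key.pem is still 644.
stat -c '%n permissions=%a' "$d"/*.pem
rm -r "$d"
```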
+
+**Audit:**
+
+```bash
+find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' | xargs stat -c permissions=%a
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=644 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600
+```
+
+### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} +
+
+**Audit:**
+
+```bash
+find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600
+```
+
+## 1.2 API Server
+### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--anonymous-auth=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and configure alternate mechanisms for authentication. Then,
+edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and remove the --token-auth-file= parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--token-auth-file' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and remove the `DenyServiceExternalIPs`
+from enabled admission plugins.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the
+apiserver and kubelets. Then, edit API server pod specification file
+/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
+kubelet client certificate and key parameters as below.
+--kubelet-client-certificate=
+--kubelet-client-key=
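+
+As a point of reference, the audit output under **Returned Value** below shows both flags already set on this cluster:
+
+```console
+--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem
+--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem
+```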
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between
+the apiserver and kubelets. Then, edit the API server pod specification file
+/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
+--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
+--kubelet-certificate-authority=
+When generating serving certificates, functionality could break in conjunction with the hostname overrides that certain cloud providers require.
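+
+For illustration, the kube-apiserver audit output elsewhere in this report shows this flag pointed at the cluster CA certificate:
+
+```console
+--kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem
+```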
+
+### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to a value other than AlwaysAllow,
+for example `--authorization-mode=RBAC`.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' does not have 'AlwaysAllow'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to a value that includes Node,
+for example `--authorization-mode=Node,RBAC`.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' has 'Node'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
+for example `--authorization-mode=Node,RBAC`.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' has 'RBAC'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set the desired limits in a configuration file.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+and set the following parameters.
+--enable-admission-plugins=...,EventRateLimit,...
+--admission-control-config-file=
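+
+As a sketch of such a configuration file (the limit values here are illustrative only; the path matches the `--admission-control-config-file=/etc/kubernetes/admission.yaml` flag visible in the audit output), an `AdmissionConfiguration` enabling EventRateLimit could look like:
+
+```yaml
+# /etc/kubernetes/admission.yaml (illustrative values)
+apiVersion: apiserver.config.k8s.io/v1
+kind: AdmissionConfiguration
+plugins:
+- name: EventRateLimit
+  configuration:
+    apiVersion: eventratelimit.admission.k8s.io/v1alpha1
+    kind: Configuration
+    limits:
+    # Server-wide limit on event requests received by the API server
+    - type: Server
+      qps: 5000
+      burst: 20000
+```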
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'EventRateLimit'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
+value that does not include AlwaysAdmit.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to include
+AlwaysPullImages.
+--enable-admission-plugins=...,AlwaysPullImages,...
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'AlwaysPullImages'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to include
+SecurityContextDeny, unless PodSecurityPolicy is already in place.
+--enable-admission-plugins=...,SecurityContextDeny,...
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
+```
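
The Expected Result above can be checked mechanically rather than by scanning the full process listing. A minimal sketch, assuming the Audit command's output has been captured into a variable (the `CMDLINE` value below is a hypothetical stand-in, not the real cluster's flags):

```shell
#!/bin/sh
# Stand-in for: CMDLINE=$(/bin/ps -ef | grep kube-apiserver | grep -v grep)
CMDLINE='kube-apiserver --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,EventRateLimit --profiling=false'

# Extract just the value of --enable-admission-plugins from the command line.
PLUGINS=$(printf '%s\n' "$CMDLINE" | grep -o -- '--enable-admission-plugins=[^ ]*' | cut -d= -f2)
echo "enabled admission plugins: $PLUGINS"

# The check passes when either SecurityContextDeny or PodSecurityPolicy is listed.
case ",$PLUGINS," in
  *,SecurityContextDeny,*|*,PodSecurityPolicy,*) echo "1.2.12: pass" ;;
  *) echo "1.2.12: warn" ;;
esac
```

With the hypothetical command line above, neither plugin is listed, so the sketch reports `1.2.12: warn`, matching the result recorded for this check.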
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and create ServiceAccount objects as per your environment.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
+value that does not include ServiceAccount.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --disable-admission-plugins parameter to
+ensure it does not include NamespaceLifecycle.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to a
+value that includes NodeRestriction.
+--enable-admission-plugins=...,NodeRestriction,...
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'NodeRestriction'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.16 Ensure that the --secure-port argument is not set to 0 - Note: This recommendation is obsolete and will be deleted per the consensus process (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and either remove the --secure-port parameter or
+set it to a different (non-zero) desired port.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--secure-port' is greater than 0 OR '--secure-port' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.17 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-path parameter to a suitable path and
+file where you would like audit logs to be written, for example,
+--audit-log-path=/var/log/apiserver/audit.log
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-path' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxage parameter to 30
+or as an appropriate number of days, for example,
+--audit-log-maxage=30
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxage' is greater or equal to 30
+```
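
Numeric thresholds like this one can be checked with simple shell arithmetic. A minimal sketch, assuming the Audit command's output has been captured into a variable (the `CMDLINE` value below is a hypothetical stand-in whose flags mirror the Returned Value recorded for this cluster):

```shell
#!/bin/sh
# Stand-in for: CMDLINE=$(/bin/ps -ef | grep kube-apiserver | grep -v grep)
CMDLINE='kube-apiserver --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100'

# Extract the numeric value of --audit-log-maxage; default to 0 when absent.
MAXAGE=$(printf '%s\n' "$CMDLINE" | grep -o -- '--audit-log-maxage=[0-9]*' | cut -d= -f2)

# The CIS recommendation is 30 days or more.
if [ "${MAXAGE:-0}" -ge 30 ]; then
  echo "1.2.19: pass (audit-log-maxage=$MAXAGE)"
else
  echo "1.2.19: fail (audit-log-maxage=${MAXAGE:-unset})"
fi
```

The same pattern applies to the `--audit-log-maxbackup` (threshold 10) and `--audit-log-maxsize` (threshold 100) checks that follow; only the flag name and threshold change.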
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
+value. For example,
+--audit-log-maxbackup=10
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxbackup' is greater or equal to 10
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
+For example, to set it as 100 MB, --audit-log-maxsize=100
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxsize' is greater or equal to 100
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+and set the below parameter as appropriate, if needed.
+For example, --request-timeout=300s
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
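The audit above only lists the raw process line. A small helper like the following (a sketch, not part of the benchmark tooling; the sample command line is illustrative, not captured from a live cluster) can pull a single flag's value out of that output:

```shell
# Hypothetical helper: extract the value of one "--flag=value" argument
# from a captured kube-apiserver command line.
flag_value() {
  # $1 = flag name (e.g. --request-timeout), $2 = full command line
  printf '%s\n' "$2" | grep -o -- "$1=[^ ]*" | head -n1 | cut -d= -f2
}

# Illustrative sample, not output from a real cluster node.
sample='kube-apiserver --request-timeout=300s --profiling=false'
flag_value --request-timeout "$sample"
```

If the flag is absent, `flag_value` prints nothing, which for this check means the kube-apiserver built-in default applies.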
+
+### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--service-account-lookup=true
+Alternatively, you can delete the --service-account-lookup parameter from this file so
+that the default takes effect.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
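The expected result for this check is "absent OR equal to true". That two-branch logic can be sketched in shell as follows (the command line is a hand-written sample, not taken from a live cluster):

```shell
# Sketch of the "not present OR equal to 'true'" evaluation for
# --service-account-lookup. The sample line below is illustrative.
line='kube-apiserver --service-account-lookup=true --secure-port=6443'
val=$(printf '%s\n' "$line" | grep -o -- '--service-account-lookup=[^ ]*' | cut -d= -f2)
if [ -z "$val" ] || [ "$val" = "true" ]; then
  verdict=pass
else
  verdict=fail
fi
echo "$verdict"
```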
+
+### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --service-account-key-file parameter
+to the public key file for service accounts. For example,
+--service-account-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate and key file parameters.
+--etcd-certfile=
+--etcd-keyfile=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--etcd-certfile' is present AND '--etcd-keyfile' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the TLS certificate and private key file parameters.
+--tls-cert-file=
+--tls-private-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--tls-cert-file' is present AND '--tls-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the client certificate authority file.
+--client-ca-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--client-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate authority file parameter.
+--etcd-cafile=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--etcd-cafile' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the --encryption-provider-config parameter to the path of that file.
+For example, --encryption-provider-config=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--encryption-provider-config' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 1.2.30 Ensure that encryption providers are appropriately configured (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+In this file, choose aescbc, kms or secretbox as the encryption provider.
+
+**Audit:**
+
+```bash
+ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%'); if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
+```
+
+**Expected Result**:
+
+```console
+'provider' is present
+```
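The audit one-liner above is dense. Expanded into commented form, and run here against a hand-written sample EncryptionConfiguration (a sketch; on a real node the file path would come from the `--encryption-provider-config` flag instead, and the key material below is a placeholder, not a real key), the provider extraction looks like this:

```shell
# Write a sample EncryptionConfiguration so the extraction step can be
# shown without a running kube-apiserver. The secret value is a dummy.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: [secrets]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: c2FtcGxlLWtleS1ub3QtcmVhbA==
EOF

# Same extraction the audit performs: the line after 'providers:' names
# the first provider; keep only its letters and prefix 'provider='.
provider=$(grep -A1 'providers:' "$cfg" | tail -n1 | grep -o '[A-Za-z]*' | sed 's/^/provider=/')
echo "$provider"
rm -f "$cfg"
```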
+
+### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
+TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
+TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
+TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
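Rather than eyeballing the long flag value against the approved list, membership can be checked mechanically. A minimal sketch (the approved list here is the subset actually configured in the returned value above; the full list is in the expected result):

```shell
# Approved suites, one per line (subset of the expected-result list).
approved="TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"

# Value of --tls-cipher-suites as configured in the returned value above.
configured="TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"

# Flag any configured suite that is not an exact line in the approved set.
result=pass
for suite in $(printf '%s' "$configured" | tr ',' ' '); do
  printf '%s\n' "$approved" | grep -qxF "$suite" || result="fail: $suite"
done
echo "$result"
```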
+
+## 1.3 Controller Manager
+### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
+for example, --terminated-pod-gc-threshold=10
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--terminated-pod-gc-threshold' is present
+```
+
+**Returned Value**:
+
+```console
+root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
+```
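+
+The `is present` expectations above reduce to inspecting the captured command line. As a minimal sketch (not part of the kube-bench output; the sample string is a shortened, hypothetical stand-in for the real process line), a single `--name=value` flag can be pulled out like this:
+
+```bash
+# Hypothetical, shortened stand-in for the kube-controller-manager line above.
+line='kube-controller-manager --profiling=false --terminated-pod-gc-threshold=1000 --use-service-account-credentials=true'
+
+# Split on spaces, then strip the flag prefix to recover its value.
+threshold=$(printf '%s\n' "$line" | tr ' ' '\n' | sed -n 's/^--terminated-pod-gc-threshold=//p')
+echo "$threshold"   # prints 1000
+```
+
+An empty result means the flag is absent, which is exactly the condition the check fails on.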
+
+### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
+```
+
+### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node to set the below parameter.
+--use-service-account-credentials=true
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--use-service-account-credentials' is not equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
+```
+
+### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --service-account-private-key-file parameter
+to the private key file for service accounts.
+--service-account-private-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
+```
+
+### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
+--root-ca-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--root-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
+```
+
+### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
+--feature-gates=RotateKubeletServerCertificate=true
+Cluster provisioned by RKE handles certificate rotation directly through RKE.
+
+### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and ensure the correct value for the --bind-address parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--bind-address' is present OR '--bind-address' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4184 4163 1 Sep11 ? 00:20:06 kube-controller-manager --configure-cloud-routes=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-cluster-ip-range=10.43.0.0/16 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --node-monitor-grace-period=40s --v=2 --profiling=false --cloud-provider= --allow-untagged-cloud=true --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --use-service-account-credentials=true
+```
+
+## 1.4 Scheduler
+### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-scheduler | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true
+```
+
+### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
+on the control plane node and ensure the correct value for the --bind-address parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-scheduler | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--bind-address' is present OR '--bind-address' is not present
+```
+
+**Returned Value**:
+
+```console
+root 4339 4318 0 Sep11 ? 00:03:28 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --profiling=false --v=2 --leader-elect=true
+```
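+
+Expectations of the form `'--flag' is equal to 'value'` are a substring match against the process line. A minimal sketch with a hypothetical helper (not part of kube-bench), using a shortened stand-in for the kube-scheduler line above:
+
+```bash
+# check_flag CMDLINE FLAG EXPECTED -> prints pass or fail.
+check_flag() {
+  case " $1 " in
+    *" --$2=$3 "*) echo "pass" ;;
+    *) echo "fail" ;;
+  esac
+}
+
+# Hypothetical, shortened stand-in for the kube-scheduler line above.
+check_flag 'kube-scheduler --profiling=false --leader-elect=true' profiling false   # prints pass
+```
+
+Padding the command line with spaces lets the pattern match whole `--flag=value` tokens only, so `--profiling=falsely` would not pass.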
+
+## 2 Etcd Node Configuration
+### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the etcd service documentation and configure TLS encryption.
+Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
+on the master node and set the below parameters.
+--cert-file=
+--key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--cert-file' is present AND '--key-file' is present
+```
+
+**Returned Value**:
+
+```console
+etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
+```
+
+### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
+node and set the below parameter.
+--client-cert-auth="true"
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--client-cert-auth' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
+```
+
+### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
+node and either remove the --auto-tls parameter or set it to false.
+--auto-tls=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
+```
+
+**Returned Value**:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/
+```
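+
+Unlike the flag checks, 2.3 (and 2.6 below) audit the etcd process environment: the check passes when `ETCD_AUTO_TLS` is absent or not set to `true`. A minimal sketch of that logic (hypothetical helper, not part of kube-bench), using a shortened stand-in for the environment dump above:
+
+```bash
+# auto_tls_set ENV_DUMP VAR -> prints fail if VAR=true appears, else pass.
+auto_tls_set() {
+  case " $1 " in
+    *" $2=true "*) echo "fail" ;;
+    *) echo "pass" ;;
+  esac
+}
+
+# Hypothetical, shortened stand-in for the environment dump above.
+auto_tls_set 'PATH=/usr/local/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 HOME=/' ETCD_AUTO_TLS   # prints pass
+```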
+
+### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the etcd service documentation and configure peer TLS encryption as appropriate
+for your etcd cluster.
+Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
+master node and set the below parameters.
+--peer-cert-file=
+--peer-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--peer-cert-file' is present AND '--peer-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 2 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
+```
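+
+The compound expectation `'--peer-cert-file' is present AND '--peer-key-file' is present` requires both flags on the same command line. A minimal sketch (hypothetical helper, not part of kube-bench), using a shortened stand-in for the etcd line above:
+
+```bash
+# peer_tls_present CMDLINE -> pass only when both peer TLS flags appear.
+peer_tls_present() {
+  if printf '%s\n' "$1" | grep -q -- '--peer-cert-file=' &&
+     printf '%s\n' "$1" | grep -q -- '--peer-key-file='; then
+    echo "pass"
+  else
+    echo "fail"
+  fi
+}
+
+# Hypothetical, shortened stand-in for the etcd line above.
+peer_tls_present 'etcd --peer-cert-file=/etc/kubernetes/ssl/etcd.pem --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem'   # prints pass
+```
+
+The `--` after `grep -q` ends option parsing, so patterns beginning with `--` are not mistaken for grep options.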
+
+### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
+node and set the below parameter.
+--peer-client-cert-auth=true
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--peer-client-cert-auth' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
+```
+
+### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
+node and either remove the --peer-auto-tls parameter or set it to false.
+--peer-auto-tls=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
+```
+
+**Returned Value**:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-4-224 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem ETCDCTL_ENDPOINTS=https://127.0.0.1:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/
+```
+
+### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+[Manual test]
+Follow the etcd documentation and create a dedicated certificate authority setup for the
+etcd service.
+Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
+master node and set the below parameter.
+--trusted-ca-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--trusted-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+etcd 3847 3824 2 Sep11 ? 00:29:36 /usr/local/bin/etcd --peer-client-cert-auth=true --initial-advertise-peer-urls=https://172.31.4.224:2380 --initial-cluster=etcd-ip-172-31-4-224=https://172.31.4.224:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --listen-client-urls=https://0.0.0.0:2379 --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-4-224-key.pem --data-dir=/var/lib/rancher/etcd/ --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-4-224 --advertise-client-urls=https://172.31.4.224:2379 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --election-timeout=5000 root 4018 3998 5 Sep11 ? 
+01:03:21 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml root 1034677 1034607 1 16:16 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.7-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
+```
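
Note that a dedicated etcd CA can be confirmed by comparing certificate fingerprints between the cluster CA and the file passed to `--trusted-ca-file`. A hedged sketch, using two throwaway self-signed CAs generated on the spot in place of the real files:

```bash
# Sketch: verify etcd trusts a CA distinct from the cluster CA by
# comparing SHA-256 fingerprints. Throwaway self-signed certificates
# stand in for the real kube-ca.pem and a dedicated etcd CA.
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=kube-ca' \
  -keyout "$d/kube-ca.key" -out "$d/kube-ca.pem" 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=etcd-ca' \
  -keyout "$d/etcd-ca.key" -out "$d/etcd-ca.pem" 2>/dev/null
fp_cluster=$(openssl x509 -noout -fingerprint -sha256 -in "$d/kube-ca.pem")
fp_etcd=$(openssl x509 -noout -fingerprint -sha256 -in "$d/etcd-ca.pem")
if [ "$fp_cluster" != "$fp_etcd" ]; then
  echo "etcd CA is unique"
fi
```

On a live node, point the two `openssl x509` invocations at the cluster CA and the file named by `--trusted-ca-file` instead of the generated stand-ins.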
+
+## 3.1 Authentication and Authorization
+### 3.1.1 Client certificate authentication should not be used for users (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
+implemented in place of client certificates.
+
+### 3.1.2 Service account token authentication should not be used for users (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
+in place of service account tokens.
+
+### 3.1.3 Bootstrap token authentication should not be used for users (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
+in place of bootstrap tokens.
+
+## 3.2 Logging
+### 3.2.1 Ensure that a minimal audit policy is created (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Create an audit policy file for your cluster.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-policy-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 4018 3998 5 Sep11 ? 01:03:22 kube-apiserver --advertise-address=172.31.4.224 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --requestheader-allowed-names=kube-apiserver-proxy-client --service-cluster-ip-range=10.43.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --requestheader-extra-headers-prefix=X-Remote-Extra- --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --storage-backend=etcd3 --anonymous-auth=false --bind-address=0.0.0.0 --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --profiling=false --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --runtime-config=authorization.k8s.io/v1beta1=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --service-account-lookup=true --etcd-servers=https://172.31.4.224:2379 --api-audiences=unknown --requestheader-group-headers=X-Remote-Group --service-account-issuer=rke --audit-log-maxsize=100 --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --secure-port=6443 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --authorization-mode=Node,RBAC --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-maxage=30 --audit-log-format=json --etcd-prefix=/registry --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem 
+--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authentication-token-webhook-cache-ttl=5s --admission-control-config-file=/etc/kubernetes/admission.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --requestheader-username-headers=X-Remote-User --allow-privileged=true --audit-policy-file=/etc/kubernetes/audit-policy.yaml
+```
+
+### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the audit policy provided for the cluster and ensure that it covers
+at least the following areas:
+
+- Access to Secrets managed by the cluster. Care should be taken to only
+  log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
+  order to avoid risk of logging sensitive data.
+- Modification of Pod and Deployment objects.
+- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
+
+For most requests, minimally logging at the Metadata level is recommended
+(the most basic level of logging).
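
The areas listed above translate fairly directly into audit policy rules. The following is a minimal illustrative sketch only, written to a temporary file here; it is an example of the shape such a policy takes, not the policy RKE ships (a real cluster would use the path passed to `--audit-policy-file`):

```bash
# Illustrative minimal audit policy covering the areas listed above.
policy=$(mktemp)
cat > "$policy" <<'EOF'
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Metadata only for Secrets, ConfigMaps and TokenReviews, so that
  # sensitive payloads never reach the audit log.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
      - group: "authentication.k8s.io"
        resources: ["tokenreviews"]
  # Record requests that exec into or proxy to workloads.
  - level: Request
    resources:
      - group: ""
        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
  # Everything else at the basic Metadata level.
  - level: Metadata
EOF
rule_count=$(grep -c '^  - level:' "$policy")
echo "$rule_count rules"   # prints 3 rules
```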
+
+## 4.1 Worker Node Configuration Files
+### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service.
+All configuration is passed in as arguments at container run time.
+
+### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service.
+All configuration is passed in as arguments at container run time.
+
+### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
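
The remediation only needs to act when the audit would fail. A hedged sketch of an idempotent check-and-fix, with a temporary file standing in for `/node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml`:

```bash
# Tighten the kubeconfig to 600 only when it is more permissive.
f=$(mktemp)                   # stand-in for the kube-proxy kubeconfig
chmod 644 "$f"                # simulate a failing state
perms=$(stat -c %a "$f")
case "$perms" in
  600|400|200|0) ;;           # owner-only access already: nothing to do
  *) chmod 600 "$f" ;;        # otherwise apply the remediation
esac
stat -c permissions=%a "$f"   # re-run the audit's stat check: permissions=600
```

The `case` pattern treats 600, 400, 200 and 0 as already compliant, mirroring the "600 or more restrictive" wording of the check.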
+
+### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example, chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the following command to modify the file permissions of the
+--client-ca-file chmod 600
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
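
This is the one failing check in this group (644 is more permissive than 600). A minimal sketch of applying the remediation and re-running the audit's stat check, with a temporary file standing in for `/node/etc/kubernetes/ssl/kube-ca.pem`:

```bash
# Reproduce the 644 failure above on a temp file, remediate, re-audit.
ca=$(mktemp)
chmod 644 "$ca"
stat -c permissions=%a "$ca"   # permissions=644 -> check fails
chmod 600 "$ca"                # the remediation from the check text
stat -c permissions=%a "$ca"   # permissions=600 -> check passes
```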
+
+### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the following command to modify the ownership of the --client-ca-file.
+chown root:root /node/etc/kubernetes/ssl/kube-ca.pem
+
+**Audit:**
+
+```bash
+stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chmod 600 /var/lib/kubelet/config.yaml
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
+All configuration is passed in as arguments at container run time.
+
+### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chown root:root /var/lib/kubelet/config.yaml
+Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
+All configuration is passed in as arguments at container run time.
+
+## 4.2 Kubelet
+### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
+`false`.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+`--anonymous-auth=false`
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
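
Individual flags are easy to miss in the long command line above; splitting the line on whitespace makes the audited flag obvious. A small sketch (the sample line is abbreviated from the returned value above):

```bash
# Pull one flag out of a ps(1) command line by splitting on whitespace.
line='kubelet --v=2 --anonymous-auth=false --read-only-port=0'
flag=$(printf '%s\n' "$line" | tr ' ' '\n' | grep -- '--anonymous-auth')
echo "$flag"   # prints --anonymous-auth=false
```

The same filter works for any of the kubelet flag checks in this section by changing the grep pattern.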
+
+### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
+using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_AUTHZ_ARGS variable.
+--authorization-mode=Webhook
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' does not have 'AlwaysAllow'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
+the location of the client CA file.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_AUTHZ_ARGS variable.
+--client-ca-file=
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--client-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+--read-only-port=0
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--read-only-port' is equal to '0' OR '--read-only-port' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
+value other than 0.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+--streaming-connection-idle-timeout=5m
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
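
The pass criterion above (set to a non-zero value, or absent entirely) can be scripted with plain parameter expansion. A sketch using a line abbreviated from the returned value:

```bash
# Extract the streaming idle timeout from a kubelet command line and
# apply the check's logic: absent is fine, present-but-zero is not.
line='kubelet --streaming-connection-idle-timeout=30m --read-only-port=0'
case "$line" in
  *--streaming-connection-idle-timeout=*)
    val=${line##*--streaming-connection-idle-timeout=}
    val=${val%% *}            # trim everything after the value
    if [ "$val" != "0" ]; then result=pass; else result=fail; fi ;;
  *) result=pass ;;           # flag absent: the kubelet default applies
esac
echo "$result"   # prints pass
```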
+
+### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove the --make-iptables-util-chains argument from the
+KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.7 Ensure that the --hostname-override argument is not set (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and remove the --hostname-override argument from the
+KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+Not Applicable - Clusters provisioned by RKE set the --hostname-override parameter to avoid hostname configuration errors.
+
+### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--event-qps' is greater or equal to 0 OR '--event-qps' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
+of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
+to the location of the corresponding private key file.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
+--tls-cert-file=
+--tls-private-key-file=
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--tls-cert-file' is present AND '--tls-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.10 Ensure that the --rotate-certificates argument is not set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
+remove it altogether to use the default value.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
+variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--rotate-certificates' is present OR '--rotate-certificates' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
+--feature-gates=RotateKubeletServerCertificate=true
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+Not Applicable - Clusters provisioned by RKE handle certificate rotation directly through RKE.
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
+TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+or to a subset of these values.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the --tls-cipher-suites parameter as follows, or to a subset of these values.
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+### 4.2.13 Ensure that a limit is set on pod PIDs (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Decide on an appropriate level for this parameter and set it,
+either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting.
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'--pod-max-pids' is present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 4903 4499 3 Sep11 ? 00:36:52 kubelet --v=2 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224-key.pem --event-qps=0 --address=0.0.0.0 --cgroups-per-qos=True --pod-infra-container-image=rancher/mirrored-pause:3.7 --root-dir=/var/lib/kubelet --container-runtime=remote --make-iptables-util-chains=true --authorization-mode=Webhook --resolv-conf=/etc/resolv.conf --cloud-provider= --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --authentication-token-webhook=true --anonymous-auth=false --read-only-port=0 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --protect-kernel-defaults=true --feature-gates=RotateKubeletServerCertificate=true --cluster-dns=10.43.0.10 --fail-swap-on=false --hostname-override=ip-172-31-4-224 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --cluster-domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-4-224.pem --streaming-connection-idle-timeout=30m --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
+```
+
+## 5.1 RBAC and Service Accounts
+### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
+if they need this role or if they could use a role with fewer privileges.
+Where possible, first bind users to a lower privileged role and then remove the
+clusterrolebinding to the cluster-admin role:
+kubectl delete clusterrolebinding [name]
+
+### 5.1.2 Minimize access to secrets (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove get, list and watch access to Secret objects in the cluster.
+
+### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible replace any use of wildcards in clusterroles and roles with specific
+objects or actions.
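+
+For instance, instead of a wildcard such as `resources: ["*"]`, a Role can name the exact resources and verbs it needs. The following is an illustrative sketch (the role name and namespace are examples, not required values):
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: pod-reader             # illustrative name
+  namespace: example-namespace # illustrative namespace
+rules:
+  - apiGroups: [""]
+    resources: ["pods"]
+    verbs: ["get", "list", "watch"]
+```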
+
+### 5.1.4 Minimize access to create pods (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove create access to pod objects in the cluster.
+
+### 5.1.5 Ensure that default service accounts are not actively used. (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Create explicit service accounts wherever a Kubernetes workload requires specific access
+to the Kubernetes API server.
+Modify the configuration of each default service account to include this value:
+`automountServiceAccountToken: false`
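+
+As a minimal sketch (the namespace is an example, not a required value), the default service account in a namespace can be configured with a manifest like the following, applied per namespace:
+
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: default
+  namespace: example-namespace # illustrative namespace; repeat per namespace
+automountServiceAccountToken: false
+```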
+
+**Audit Script:** `check_for_default_sa.sh`
+
+```bash
+#!/bin/bash
+
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l)
+if [[ ${count_sa} -gt 0 ]]; then
+ echo "false"
+ exit
+fi
+
+for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name")
+do
+ for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"')
+ do
+ read kind name <<<$(IFS=","; echo $result)
+ resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l)
+ if [[ ${resource_count} -gt 0 ]]; then
+ echo "false"
+ exit
+ fi
+ done
+done
+
+
+echo "true"
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_default_sa.sh
+```
+
+**Expected Result**:
+
+```console
+'true' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+true
+```
+
+### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Modify the definition of pods and service accounts which do not need to mount service
+account tokens to disable it.
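+
+At the pod level, token mounting can be disabled in the pod spec. A minimal sketch (the pod name and image are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example-pod            # illustrative name
+spec:
+  automountServiceAccountToken: false
+  containers:
+    - name: app
+      image: example/image     # illustrative image
+```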
+
+### 5.1.7 Avoid use of system:masters group (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Remove the system:masters group from all users in the cluster.
+
+### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove the impersonate, bind and escalate rights from subjects.
+
+### 5.1.9 Minimize access to create persistent volumes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove create access to PersistentVolume objects in the cluster.
+
+### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the proxy sub-resource of node objects.
+
+### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the approval sub-resource of certificatesigningrequest objects.
+
+### 5.1.12 Minimize access to webhook configuration objects (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.
+
+### 5.1.13 Minimize access to the service account token creation (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the token sub-resource of serviceaccount objects.
+
+## 5.2 Pod Security Standards
+### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that either Pod Security Admission or an external policy control system is in place
+for every namespace which contains user workloads.
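+
+For example, Pod Security Admission can be enabled for a namespace with labels such as the following (a sketch; the namespace name and the chosen level are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: example-namespace      # illustrative name
+  labels:
+    pod-security.kubernetes.io/enforce: restricted
+    pod-security.kubernetes.io/warn: restricted
+```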
+
+### 5.2.2 Minimize the admission of privileged containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of privileged containers.
+
+### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostPID` containers.
+
+### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostIPC` containers.
+
+### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostNetwork` containers.
+
+### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+### 5.2.7 Minimize the admission of root containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs` with the range of UIDs not including 0, is set.
+
+### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+### 5.2.9 Minimize the admission of containers with added capabilities (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a PSP which forbids the admission of containers which do not drop all capabilities.
+
+### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
+
+### 5.2.12 Minimize the admission of HostPath volumes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `hostPath` volumes.
+
+### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers which use `hostPort` sections.
+
+## 5.3 Network Policies and CNI
+### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If the CNI plugin in use does not support network policies, consideration should be given to
+making use of a different plugin, or finding an alternate mechanism for restricting traffic
+in the Kubernetes cluster.
+
+### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create NetworkPolicy objects as you need them.
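+
+A common starting point is a default-deny policy in each namespace, to which more specific allow rules are then added. The following is a sketch (the policy name and namespace are illustrative):
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-all       # illustrative name
+  namespace: example-namespace # illustrative namespace
+spec:
+  podSelector: {}              # empty selector matches all pods in the namespace
+  policyTypes:
+    - Ingress
+    - Egress
+```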
+
+## 5.4 Secrets Management
+### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If possible, rewrite application code to read Secrets from mounted secret files, rather than
+from environment variables.
+
+### 5.4.2 Consider external secret storage (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Refer to the Secrets management options offered by your cloud provider or a third-party
+secrets management solution.
+
+## 5.5 Extensible Admission Control
+### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and setup image provenance.
+
+## 5.7 General Policies
+### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create namespaces for objects in your deployment as you need
+them.
+
+### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
+An example is as below:
+
+```yaml
+  securityContext:
+    seccompProfile:
+      type: RuntimeDefault
+```
+
+### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
+suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
+Containers.
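+
+As an illustrative sketch (the specific values are examples, not prescriptive), a restrictive container-level `securityContext` might look like:
+
+```yaml
+securityContext:
+  runAsNonRoot: true
+  allowPrivilegeEscalation: false
+  readOnlyRootFilesystem: true
+  capabilities:
+    drop:
+      - ALL
+```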
+
+### 5.7.4 The default namespace should not be used (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
+resources and that all new resources are created in a specific namespace.
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25.md
deleted file mode 100644
index 084568cc402..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25.md
+++ /dev/null
@@ -1,3085 +0,0 @@
----
-title: RKE Self-Assessment Guide - CIS Benchmark v1.7 - K8s v1.25
----
-
-This document is a companion to the [RKE Hardening Guide](../../../../pages-for-subheaders/rke1-hardening-guide.md), which provides prescriptive guidance on how to harden RKE clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
-
-
-This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
-
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
-|-----------------|-----------------------|--------------------|
-| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 |
-
-This guide walks through the various controls and provide updated example commands to audit compliance in Rancher created clusters. Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks will return a result of `Not Applicable`.
-
-This document is for Rancher operators, security teams, auditors and decision makers.
-
-For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
-
-## Testing Methodology
-
-Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files.
-
-Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in the testing and evaluation of test results.
-
-:::note
-
-This guide only covers `automated` (previously called `scored`) tests.
-
-:::
-
-### Controls
-
-## 1.1 Control Plane Node Configuration Files
-### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644
-
-**Audit:**
-
-```bash
-ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644 permissions=600
-```
-
-### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root
-
-**Audit:**
-
-```bash
-ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root root:root
-```
-
-### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above). For example,
-chmod 700 /var/lib/etcd
-
-**Audit:**
-
-```bash
-stat -c %a /node/var/lib/etcd
-```
-
-**Expected Result**:
-
-```console
-'700' is equal to '700'
-```
-
-**Returned Value**:
-
-```console
-700
-```
-
-### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above).
-For example, chown etcd:etcd /var/lib/etcd
-
-### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes.
-
-### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes.
-
-### 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
-All configuration is passed in as arguments at container run time.
-
-### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the master node.
-For example,
-chown -R root:root /etc/kubernetes/pki/
-
-**Audit Script:** `check_files_owner_in_dir.sh`
-
-```bash
-#!/usr/bin/env bash
-
-# This script is used to ensure the owner is set to root:root for
-# the given directory and all the files in it
-#
-# inputs:
-# $1 = /full/path/to/directory
-#
-# outputs:
-# true/false
-
-INPUT_DIR=$1
-
-if [[ "${INPUT_DIR}" == "" ]]; then
- echo "false"
- exit
-fi
-
-if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then
- echo "false"
- exit
-fi
-
-statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*)
-while read -r statInfoLine; do
- f=$(echo ${statInfoLine} | cut -d' ' -f1)
- p=$(echo ${statInfoLine} | cut -d' ' -f2)
-
- if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then
- if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then
- echo "false"
- exit
- fi
- else
- if [[ "$p" != "root:root" ]]; then
- echo "false"
- exit
- fi
- fi
-done <<< "${statInfoLines}"
-
-
-echo "true"
-exit
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_files_owner_in_dir.sh /node/etc/kubernetes/ssl
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-true
-```
-
-### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the master node.
-For example,
-chmod -R 644 /etc/kubernetes/pki/*.crt
-
-**Audit Script:** `check_files_permissions.sh`
-
-```bash
-#!/usr/bin/env bash
-
-# This script is used to ensure the file permissions are set to 644 or
-# more restrictive for all files in a given directory or a wildcard
-# selection of files
-#
-# inputs:
-# $1 = /full/path/to/directory or /path/to/fileswithpattern
-# ex: !(*key).pem
-#
-# $2 (optional) = permission (ex: 600)
-#
-# outputs:
-# true/false
-
-# Turn on "extended glob" for use of '!' in wildcard
-shopt -s extglob
-
-# Turn off history expansion to avoid surprises when using '!'
-set +H
-
-USER_INPUT=$1
-
-if [[ "${USER_INPUT}" == "" ]]; then
- echo "false"
- exit
-fi
-
-
-if [[ -d ${USER_INPUT} ]]; then
- PATTERN="${USER_INPUT}/*"
-else
- PATTERN="${USER_INPUT}"
-fi
-
-PERMISSION=""
-if [[ "$2" != "" ]]; then
- PERMISSION=$2
-fi
-
-FILES_PERMISSIONS=$(stat -c %n\ %a ${PATTERN})
-
-while read -r fileInfo; do
- p=$(echo ${fileInfo} | cut -d' ' -f2)
-
- if [[ "${PERMISSION}" != "" ]]; then
- if [[ "$p" != "${PERMISSION}" ]]; then
- echo "false"
- exit
- fi
- else
- if [[ "$p" != "644" && "$p" != "640" && "$p" != "600" ]]; then
- echo "false"
- exit
- fi
- fi
-done <<< "${FILES_PERMISSIONS}"
-
-
-echo "true"
-exit
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_files_permissions.sh '/node/etc/kubernetes/ssl/!(*key).pem'
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-true
-```
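-
-The quoted pattern `!(*key).pem` in the audit above relies on bash extended globbing, which the script enables with `shopt -s extglob`: it matches `.pem` files whose name does not end in `key`, i.e. certificates but not private keys. A throwaway demonstration:
-
-```bash
-#!/usr/bin/env bash
-# Show which files the extended glob '!(*key).pem' selects.
-shopt -s extglob
-demo=$(mktemp -d)
-touch "${demo}"/kube-ca.pem "${demo}"/kube-ca-key.pem "${demo}"/kube-apiserver.pem
-cd "${demo}"
-printf '%s\n' !(*key).pem   # prints kube-apiserver.pem and kube-ca.pem
-```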
-
-### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 600 /etc/kubernetes/ssl/*key.pem
-
-**Audit Script:** `check_files_permissions.sh`
-
-```bash
-#!/usr/bin/env bash
-
-# This script is used to ensure the file permissions are set to 644 or
-# more restrictive for all files in a given directory or a wildcard
-# selection of files
-#
-# inputs:
-# $1 = /full/path/to/directory or /path/to/fileswithpattern
-# ex: !(*key).pem
-#
-# $2 (optional) = permission (ex: 600)
-#
-# outputs:
-# true/false
-
-# Turn on "extended glob" for use of '!' in wildcard
-shopt -s extglob
-
-# Turn off history expansion to avoid surprises when using '!'
-set +H
-
-USER_INPUT=$1
-
-if [[ "${USER_INPUT}" == "" ]]; then
- echo "false"
- exit
-fi
-
-
-if [[ -d ${USER_INPUT} ]]; then
- PATTERN="${USER_INPUT}/*"
-else
- PATTERN="${USER_INPUT}"
-fi
-
-PERMISSION=""
-if [[ "$2" != "" ]]; then
- PERMISSION=$2
-fi
-
-FILES_PERMISSIONS=$(stat -c %n\ %a ${PATTERN})
-
-while read -r fileInfo; do
- p=$(echo ${fileInfo} | cut -d' ' -f2)
-
- if [[ "${PERMISSION}" != "" ]]; then
- if [[ "$p" != "${PERMISSION}" ]]; then
- echo "false"
- exit
- fi
- else
- if [[ "$p" != "644" && "$p" != "640" && "$p" != "600" ]]; then
- echo "false"
- exit
- fi
- fi
-done <<< "${FILES_PERMISSIONS}"
-
-
-echo "true"
-exit
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_files_permissions.sh '/node/etc/kubernetes/ssl/*key.pem'
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-true
-```
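-
-For this check the audit passes only files ending in `key.pem`, and since no explicit permission argument is given, the script's default branch accepts modes 644, 640, or 600. The underlying `stat` comparison can be sketched on a throwaway file:
-
-```bash
-# Sketch: a key file created with mode 600 satisfies the check; a looser
-# mode such as 644 would not.
-k=$(mktemp)
-chmod 600 "${k}"
-stat -c '%a' "${k}"   # prints 600
-rm -f "${k}"
-```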
-
-## 1.2 API Server
-
-### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---anonymous-auth=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
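-
-When reading a process line this long by hand, it helps to extract just the flag under test. A sketch, fed from a literal sample string rather than a live `ps`:
-
-```bash
-# Pull out a single flag from an apiserver command line (literal sample
-# string here; on a node you would pipe the `ps -ef` output instead).
-line='kube-apiserver --profiling=false --anonymous-auth=false --secure-port=6443'
-echo "${line}" | grep -o -- '--anonymous-auth=[^ ]*'   # prints --anonymous-auth=false
-```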
-
-### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and configure alternate mechanisms for authentication. Then,
-edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the --token-auth-file=&lt;filename&gt; parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--token-auth-file' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove `DenyServiceExternalIPs`
-from the list of enabled admission plugins.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and remove the --kubelet-https parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-https' is present OR '--kubelet-https' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the
-apiserver and kubelets. Then, edit API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
-kubelet client certificate and key parameters as below.
---kubelet-client-certificate=&lt;path/to/client-certificate-file&gt;
---kubelet-client-key=&lt;path/to/client-key-file&gt;
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between
-the apiserver and kubelets. Then, edit the API server pod specification file
-/etc/kubernetes/manifests/kube-apiserver.yaml on the control plane node and set the
---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
---kubelet-certificate-authority=&lt;path/to/ca-file&gt;
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-certificate-authority' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
-For example:
---authorization-mode=RBAC
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes Node.
---authorization-mode=Node,RBAC
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'Node'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
-for example `--authorization-mode=Node,RBAC`.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'RBAC'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
---kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set the desired limits in a configuration file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-and set the below parameters.
---enable-admission-plugins=...,EventRateLimit,...
---admission-control-config-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'EventRateLimit'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
---kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
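The audits in these checks all inspect the running kube-apiserver command line. A minimal sketch of how such a check can be verified by hand, run here against a saved, hypothetical `PS_LINE` rather than a live process (on a control plane node you would capture it with the audit command shown above):

```shell
#!/bin/sh
# Hypothetical saved audit line; on a real node you would instead use:
#   PS_LINE="$(/bin/ps -ef | grep kube-apiserver | grep -v grep)"
PS_LINE='kube-apiserver --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,EventRateLimit --profiling=false'

# Extract the comma-separated plugin list from the flag.
plugins=$(printf '%s\n' "$PS_LINE" | grep -o 'enable-admission-plugins=[^ ]*' | cut -d= -f2)

# Check for a single plugin, as the EventRateLimit audit does.
if printf '%s\n' "$plugins" | tr ',' '\n' | grep -qx 'EventRateLimit'; then
  echo "EventRateLimit enabled"
else
  echo "EventRateLimit missing"
fi
```

The exact-line match (`grep -x`) avoids a false pass on names that merely contain the target as a substring.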
-### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
-value that does not include AlwaysAdmit.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
---kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
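Check 1.2.11 is the negative form: it passes when the flag is absent entirely or present without `AlwaysAdmit`. A sketch of that two-branch condition, again against a saved, hypothetical `PS_LINE` rather than live `ps` output:

```shell
#!/bin/sh
# Hypothetical saved audit line standing in for live `ps` output.
PS_LINE='kube-apiserver --enable-admission-plugins=NamespaceLifecycle,ServiceAccount'

plugins=$(printf '%s\n' "$PS_LINE" | grep -o 'enable-admission-plugins=[^ ]*' | cut -d= -f2)

# Pass when the flag is missing, or present but without AlwaysAdmit.
if [ -z "$plugins" ] || ! printf '%s\n' "$plugins" | tr ',' '\n' | grep -qx 'AlwaysAdmit'; then
  echo "AlwaysAdmit not enabled"
fi
```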
-### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-AlwaysPullImages.
---enable-admission-plugins=...,AlwaysPullImages,...
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-SecurityContextDeny, unless PodSecurityPolicy is already in place.
---enable-admission-plugins=...,SecurityContextDeny,...
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
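Manual checks like 1.2.12 and 1.2.13 have no automated pass criterion; a reviewer lists the enabled plugins and decides whether, say, `AlwaysPullImages` belongs there. A sketch of that listing step, using a saved, hypothetical `PS_LINE` in place of live `ps` output:

```shell
#!/bin/sh
# Hypothetical saved audit line standing in for live `ps` output.
PS_LINE='kube-apiserver --enable-admission-plugins=NamespaceLifecycle,ServiceAccount,NodeRestriction'

# Print one plugin per line for manual review.
printf '%s\n' "$PS_LINE" \
  | grep -o 'enable-admission-plugins=[^ ]*' \
  | cut -d= -f2 \
  | tr ',' '\n'
```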
-### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create ServiceAccount objects as per your environment.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
-value that does not include ServiceAccount.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
---kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --disable-admission-plugins parameter to
-ensure it does not include NamespaceLifecycle.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
---kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.16 Ensure that the admission control plugin NodeRestriction is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to a
-value that includes NodeRestriction.
---enable-admission-plugins=...,NodeRestriction,...
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'NodeRestriction'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
---kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.17 Ensure that the --secure-port argument is not set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and either remove the --secure-port parameter or
-set it to a different (non-zero) desired port.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--secure-port' is greater than 0 OR '--secure-port' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
---kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.18 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
---kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.19 Ensure that the --audit-log-path argument is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-path parameter to a suitable path and
-file where you would like audit logs to be written, for example,
---audit-log-path=/var/log/apiserver/audit.log
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-path' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
---kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.20 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxage parameter to 30
-or as an appropriate number of days, for example,
---audit-log-maxage=30
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxage' is greater or equal to 30
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
---kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
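Numeric checks like 1.2.20 and 1.2.21 compare a flag's value against a threshold rather than testing mere presence. A sketch of that comparison, using a saved, hypothetical `PS_LINE` in place of live `ps` output:

```shell
#!/bin/sh
# Hypothetical saved audit line standing in for live `ps` output.
PS_LINE='kube-apiserver --audit-log-maxage=30 --audit-log-maxbackup=10'

# Pull out the value of --audit-log-maxage and compare to the benchmark minimum.
maxage=$(printf '%s\n' "$PS_LINE" | grep -o 'audit-log-maxage=[0-9]*' | cut -d= -f2)
if [ "${maxage:-0}" -ge 30 ]; then
  echo "audit-log-maxage ok ($maxage)"
else
  echo "audit-log-maxage too low or unset"
fi
```

The `${maxage:-0}` default means a missing flag fails the threshold test instead of producing a shell error.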
-### 1.2.21 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
-value. For example,
---audit-log-maxbackup=10
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxbackup' is greater or equal to 10
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.22 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
-For example, to set it as 100 MB, --audit-log-maxsize=100
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxsize' is greater or equal to 100
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.24 Ensure that the --service-account-lookup argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the following parameter.
---service-account-lookup=true
-Alternatively, you can delete the --service-account-lookup parameter from this file so
-that the default takes effect.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.25 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --service-account-key-file parameter
-to the public key file for service accounts. For example,
---service-account-key-file=&lt;filename&gt;
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.26 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate and key file parameters.
---etcd-certfile=&lt;path/to/client-certificate-file&gt;
---etcd-keyfile=&lt;path/to/client-key-file&gt;
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-certfile' is present AND '--etcd-keyfile' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.27 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the TLS certificate and private key file parameters.
---tls-cert-file=&lt;path/to/tls-certificate-file&gt;
---tls-private-key-file=&lt;path/to/tls-key-file&gt;
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.28 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the client certificate authority file.
---client-ca-file=&lt;path/to/client-ca-file&gt;
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.29 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate authority file parameter.
---etcd-cafile=&lt;path/to/ca-file&gt;
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-cafile' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.30 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the --encryption-provider-config parameter to the path of that file.
-For example, --encryption-provider-config=&lt;/path/to/EncryptionConfig/File&gt;
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--encryption-provider-config' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 1.2.31 Ensure that encryption providers are appropriately configured (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-In this file, choose aescbc, kms or secretbox as the encryption provider.
-
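-The audited apiserver points `--encryption-provider-config` at `/etc/kubernetes/ssl/encryption.yaml`. As a non-authoritative sketch of what that file contains, a minimal `EncryptionConfiguration` using the `aescbc` provider looks like the following (the key name and secret value are illustrative placeholders, not values from the audited cluster):
-
-```yaml
-# Minimal EncryptionConfiguration sketch for encrypting Secrets at rest.
-# "key1" and the secret value are placeholders; generate a real key with
-# e.g. `head -c 32 /dev/urandom | base64`.
-apiVersion: apiserver.config.k8s.io/v1
-kind: EncryptionConfiguration
-resources:
-  - resources:
-      - secrets
-    providers:
-      - aescbc:
-          keys:
-            - name: key1
-              secret: <base64-encoded 32-byte key>
-      # identity is the fallback for reading resources written before encryption was enabled
-      - identity: {}
-```
-
-The audit script below simply greps this file for the expected provider name.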
-**Audit Script:** `check_encryption_provider_config.sh`
-
-```bash
-#!/usr/bin/env bash
-
-# This script is used to check that the encryption provider config is set to aescbc
-#
-# outputs:
-# true/false
-
-# TODO: Figure out the file location from the kube-apiserver commandline args
-ENCRYPTION_CONFIG_FILE="/node/etc/kubernetes/ssl/encryption.yaml"
-
-if [[ ! -f "${ENCRYPTION_CONFIG_FILE}" ]]; then
- echo "false"
- exit
-fi
-
-for provider in "$@"
-do
- if grep "$provider" "${ENCRYPTION_CONFIG_FILE}"; then
- echo "true"
- exit
- fi
-done
-
-echo "false"
-exit
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_encryption_provider_config.sh aescbc
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-- aescbc: true
-```
-
-### 1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the following parameter.
---tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
-TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
-TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
-TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-## 1.3 Controller Manager
-### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
-for example, --terminated-pod-gc-threshold=10
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--terminated-pod-gc-threshold' is present
-```
-
-**Returned Value**:
-
-```console
-root 5506 5484 2 22:01 ? 00:00:05 kube-controller-manager --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --v=2 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --allow-untagged-cloud=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --cluster-cidr=10.42.0.0/16 --node-monitor-grace-period=40s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --profiling=false --configure-cloud-routes=false --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-```
-
-### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 5506 5484 2 22:01 ? 00:00:05 kube-controller-manager --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --v=2 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --allow-untagged-cloud=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --cluster-cidr=10.42.0.0/16 --node-monitor-grace-period=40s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --profiling=false --configure-cloud-routes=false --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-```
-
-### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node to set the below parameter.
---use-service-account-credentials=true
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--use-service-account-credentials' is not equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 5506 5484 2 22:01 ? 00:00:05 kube-controller-manager --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --v=2 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --allow-untagged-cloud=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --cluster-cidr=10.42.0.0/16 --node-monitor-grace-period=40s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --profiling=false --configure-cloud-routes=false --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-```
-
-### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --service-account-private-key-file parameter
-to the private key file for service accounts.
---service-account-private-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 5506 5484 2 22:01 ? 00:00:05 kube-controller-manager --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --v=2 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --allow-untagged-cloud=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --cluster-cidr=10.42.0.0/16 --node-monitor-grace-period=40s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --profiling=false --configure-cloud-routes=false --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-```
-
-### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
---root-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--root-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 5506 5484 2 22:01 ? 00:00:05 kube-controller-manager --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --v=2 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --allow-untagged-cloud=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --cluster-cidr=10.42.0.0/16 --node-monitor-grace-period=40s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --profiling=false --configure-cloud-routes=false --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-```
-
-### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
---feature-gates=RotateKubeletServerCertificate=true
-
-Clusters provisioned by RKE handle certificate rotation directly through RKE.
-
-### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is present OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5506 5484 2 22:01 ? 00:00:05 kube-controller-manager --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --pod-eviction-timeout=5m0s --terminated-pod-gc-threshold=1000 --v=2 --allocate-node-cidrs=true --enable-hostpath-provisioner=false --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --allow-untagged-cloud=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --cluster-cidr=10.42.0.0/16 --node-monitor-grace-period=40s --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --profiling=false --configure-cloud-routes=false --leader-elect=true --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-```
-
-## 1.4 Scheduler
-### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-scheduler | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 5671 5649 0 22:01 ? 00:00:01 kube-scheduler --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml
-```
-
-### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-scheduler | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is present OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-root 5671 5649 0 22:01 ? 00:00:01 kube-scheduler --authorization-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --authentication-kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml
-```
-
-## 2 Etcd Node Configuration
-### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the etcd service documentation and configure TLS encryption.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
-on the master node and set the below parameters.
---cert-file=
---key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--cert-file' is present AND '--key-file' is present
-```
-
-**Returned Value**:
-
-```console
-etcd 5188 5167 3 22:01 ? 00:00:08 /usr/local/bin/etcd --client-cert-auth=true --data-dir=/var/lib/rancher/etcd/ --initial-advertise-peer-urls=https://172.31.31.51:2380 --listen-peer-urls=https://172.31.31.51:2380 --initial-cluster=etcd-ip-172-31-31-51=https://172.31.31.51:2380 --initial-cluster-state=new --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --peer-client-cert-auth=true --listen-client-urls=https://172.31.31.51:2379 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --advertise-client-urls=https://172.31.31.51:2379 --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-31-51 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --election-timeout=5000 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --heartbeat-interval=500 root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem root 19036 18926 5 22:05 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.23-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and set the below parameter.
---client-cert-auth="true"
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--client-cert-auth' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-etcd 5188 5167 3 22:01 ? 00:00:08 /usr/local/bin/etcd --client-cert-auth=true --data-dir=/var/lib/rancher/etcd/ --initial-advertise-peer-urls=https://172.31.31.51:2380 --listen-peer-urls=https://172.31.31.51:2380 --initial-cluster=etcd-ip-172-31-31-51=https://172.31.31.51:2380 --initial-cluster-state=new --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --peer-client-cert-auth=true --listen-client-urls=https://172.31.31.51:2379 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --advertise-client-urls=https://172.31.31.51:2379 --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-31-51 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --election-timeout=5000 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --heartbeat-interval=500 root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem root 19036 18926 4 22:05 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.23-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and either remove the --auto-tls parameter or set it to false.
---auto-tls=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-31-51 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem ETCDCTL_ENDPOINTS=https://172.31.31.51:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/
-```
-
-### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the etcd service documentation and configure peer TLS encryption as appropriate
-for your etcd cluster.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
-master node and set the below parameters.
---peer-cert-file=
---peer-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--peer-cert-file' is present AND '--peer-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-etcd 5188 5167 3 22:01 ? 00:00:08 /usr/local/bin/etcd --client-cert-auth=true --data-dir=/var/lib/rancher/etcd/ --initial-advertise-peer-urls=https://172.31.31.51:2380 --listen-peer-urls=https://172.31.31.51:2380 --initial-cluster=etcd-ip-172-31-31-51=https://172.31.31.51:2380 --initial-cluster-state=new --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --peer-client-cert-auth=true --listen-client-urls=https://172.31.31.51:2379 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --advertise-client-urls=https://172.31.31.51:2379 --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-31-51 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --election-timeout=5000 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --heartbeat-interval=500 root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem root 19036 18926 2 22:05 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.23-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and set the below parameter.
---peer-client-cert-auth=true
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--peer-client-cert-auth' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-etcd 5188 5167 3 22:01 ? 00:00:08 /usr/local/bin/etcd --client-cert-auth=true --data-dir=/var/lib/rancher/etcd/ --initial-advertise-peer-urls=https://172.31.31.51:2380 --listen-peer-urls=https://172.31.31.51:2380 --initial-cluster=etcd-ip-172-31-31-51=https://172.31.31.51:2380 --initial-cluster-state=new --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --peer-client-cert-auth=true --listen-client-urls=https://172.31.31.51:2379 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --advertise-client-urls=https://172.31.31.51:2379 --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-31-51 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --election-timeout=5000 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --heartbeat-interval=500 root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem root 19036 18926 3 22:05 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.23-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and either remove the --peer-auto-tls parameter or set it to false.
---peer-auto-tls=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-31-51 ETCDCTL_API=3 ETCDCTL_CACERT=/etc/kubernetes/ssl/kube-ca.pem ETCDCTL_CERT=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem ETCDCTL_KEY=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem ETCDCTL_ENDPOINTS=https://172.31.31.51:2379 ETCD_UNSUPPORTED_ARCH=x86_64 HOME=/
-```
-
-### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-[Manual test]
-Follow the etcd documentation and create a dedicated certificate authority setup for the
-etcd service.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
-master node and set the below parameter.
---trusted-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--trusted-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-etcd 5188 5167 3 22:01 ? 00:00:08 /usr/local/bin/etcd --client-cert-auth=true --data-dir=/var/lib/rancher/etcd/ --initial-advertise-peer-urls=https://172.31.31.51:2380 --listen-peer-urls=https://172.31.31.51:2380 --initial-cluster=etcd-ip-172-31-31-51=https://172.31.31.51:2380 --initial-cluster-state=new --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --peer-client-cert-auth=true --listen-client-urls=https://172.31.31.51:2379 --cert-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51.pem --advertise-client-urls=https://172.31.31.51:2379 --initial-cluster-token=etcd-cluster-1 --name=etcd-ip-172-31-31-51 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-172-31-31-51-key.pem --election-timeout=5000 --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --heartbeat-interval=500 root 5354 5332 14 22:01 ? 00:00:33 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem root 19036 18926 2 22:05 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=0 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.23-hardened --json --log_dir /tmp/sonobuoy/logs --outputfile /tmp/sonobuoy/etcd.json
-```
-
-## 3.1 Authentication and Authorization
-### 3.1.1 Client certificate authentication should not be used for users (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
-implemented in place of client certificates.
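-
-As an illustrative sketch (not part of this benchmark output), on an RKE cluster the kube-apiserver OIDC flags can be supplied via `services.kube-api.extra_args` in `cluster.yml`. The issuer URL, client ID, and claim names below are placeholder values; substitute those of your identity provider:
-
-```yaml
-services:
-  kube-api:
-    extra_args:
-      # Hypothetical OIDC provider settings for illustration only.
-      oidc-issuer-url: "https://auth.example.com"
-      oidc-client-id: "kubernetes"
-      oidc-username-claim: "email"
-      oidc-groups-claim: "groups"
-```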
-
-## 3.2 Logging
-### 3.2.1 Ensure that a minimal audit policy is created (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Create an audit policy file for your cluster.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-policy-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 5354 5332 14 22:01 ? 00:00:34 kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,EventRateLimit --runtime-config=authorization.k8s.io/v1beta1=true --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxsize=100 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --requestheader-group-headers=X-Remote-Group --storage-backend=etcd3 --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --authentication-token-webhook-cache-ttl=5s --etcd-prefix=/registry --service-node-port-range=30000-32767 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml --profiling=false --audit-log-format=json --admission-control-config-file=/etc/kubernetes/admission.yaml --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --allow-privileged=true --requestheader-username-headers=X-Remote-User --anonymous-auth=false --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --api-audiences=unknown --etcd-servers=https://172.31.31.51:2379 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --advertise-address=172.31.31.51 --audit-log-maxage=30 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --requestheader-extra-headers-prefix=X-Remote-Extra- --bind-address=0.0.0.0 --service-account-lookup=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-allowed-names=kube-apiserver-proxy-client --service-account-issuer=rke --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --secure-port=6443 --audit-log-maxbackup=10 --audit-policy-file=/etc/kubernetes/audit-policy.yaml --cloud-provider= --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --service-cluster-ip-range=10.43.0.0/16 --service-account-signing-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem
-```
-
-### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the audit policy provided for the cluster and ensure that it covers
-at least the following areas:
-- Access to Secrets managed by the cluster. Care should be taken to only
- log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
- order to avoid risk of logging sensitive data.
-- Modification of Pod and Deployment objects.
-- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
- For most requests, minimally logging at the Metadata level is recommended
- (the most basic level of logging).
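-
-An audit policy covering those areas might look like the following sketch, using the standard `audit.k8s.io/v1` Policy format. The rule set is illustrative, not the policy shipped with RKE:
-
-```yaml
-apiVersion: audit.k8s.io/v1
-kind: Policy
-rules:
-  # Secrets, ConfigMaps and TokenReviews: Metadata only, to avoid logging sensitive data.
-  - level: Metadata
-    resources:
-      - group: ""
-        resources: ["secrets", "configmaps"]
-      - group: "authentication.k8s.io"
-        resources: ["tokenreviews"]
-  # Modifications of Pod and Deployment objects.
-  - level: RequestResponse
-    verbs: ["create", "update", "patch", "delete"]
-    resources:
-      - group: ""
-        resources: ["pods"]
-      - group: "apps"
-        resources: ["deployments"]
-  # Use of exec, port-forward and proxy subresources.
-  - level: Metadata
-    resources:
-      - group: ""
-        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
-  # Everything else at the basic Metadata level.
-  - level: Metadata
-```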
-
-## 4.1 Worker Node Configuration Files
-### 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service.
-All configuration is passed in as arguments at container run time.
-
-### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service.
-All configuration is passed in as arguments at container run time.
-
-### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive OR '/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml' is not present
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
-
-### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chown root:root /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present OR '/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml' is not present
-```
-
-### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /etc/kubernetes/ssl/kubecfg-kube-node.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
-
-### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /etc/kubernetes/ssl/kubecfg-kube-node.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the file permissions of the
---client-ca-file: chmod 644
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the ownership of the --client-ca-file.
-chown root:root
-
-**Audit:**
-
-```bash
-stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chmod 644 /var/lib/kubelet/config.yaml
-
-Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet.
-All configuration is passed in as arguments at container run time.
-
-### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chown root:root /var/lib/kubelet/config.yaml
-
-Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet.
-All configuration is passed in as arguments at container run time.
-
-## 4.2 Kubelet
-### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
-`false`.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
-`--anonymous-auth=false`
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
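-
-For reference, the kubelet config file route mentioned above would use a fragment like this sketch of the standard `KubeletConfiguration` format (RKE clusters instead pass this setting as a command-line argument, as the returned value below shows):
-
-```yaml
-apiVersion: kubelet.config.k8s.io/v1beta1
-kind: KubeletConfiguration
-authentication:
-  anonymous:
-    # Disable unauthenticated requests to the kubelet API.
-    enabled: false
-```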
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
-using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---authorization-mode=Webhook
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
-the location of the client CA file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---client-ca-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.4 Ensure that the --read-only-port argument is set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---read-only-port=0
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--read-only-port' is equal to '0' OR '--read-only-port' is not present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
-value other than 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---streaming-connection-idle-timeout=5m
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--streaming-connection-idle-timeout' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---protect-kernel-defaults=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--protect-kernel-defaults' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove the --make-iptables-util-chains argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--make-iptables-util-chains' is equal to 'true' OR '--make-iptables-util-chains' is not present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and remove the --hostname-override argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-Clusters provisioned by RKE set the --hostname-override to avoid any hostname configuration errors.
-
-### 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --event-qps parameter in the KUBELET_SYSTEM_PODS_ARGS variable to 0 or an appropriate level.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--event-qps' is equal to '0'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
-of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
-to the location of the corresponding private key file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
---tls-cert-file=
---tls-private-key-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
-remove it altogether to use the default value.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
-variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'{.rotateCertificates}' is present OR '{.rotateCertificates}' is not present
-```
-
-### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
---feature-gates=RotateKubeletServerCertificate=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-Clusters provisioned by RKE handle certificate rotation directly through RKE.
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-or to a subset of these values.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --tls-cipher-suites parameter as follows, or to a subset of these values.
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then /bin/cat /var/lib/kubelet/config.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'--tls-cipher-suites' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 6239 5834 2 22:02 ? 00:00:04 kubelet --authorization-mode=Webhook --v=2 --root-dir=/var/lib/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51-key.pem --cgroups-per-qos=True --streaming-connection-idle-timeout=30m --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-172-31-31-51.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --address=0.0.0.0 --cluster-domain=cluster.local --fail-swap-on=false --make-iptables-util-chains=true --volume-plugin-dir=/var/lib/kubelet/volumeplugins --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --pod-infra-container-image=rancher/mirrored-pause:3.6 --node-ip=172.31.31.51 --resolv-conf=/etc/resolv.conf --event-qps=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --protect-kernel-defaults=true --cluster-dns=10.43.0.10 --container-runtime=remote --authentication-token-webhook=true --anonymous-auth=false --feature-gates=RotateKubeletServerCertificate=true --cloud-provider= --read-only-port=0 --hostname-override=ip-172-31-31-51 --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf
-```
-
-## 5.1 RBAC and Service Accounts
-### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
-if they need this role or if they could use a role with fewer privileges.
-Where possible, first bind users to a lower privileged role and then remove the
-clusterrolebinding to the cluster-admin role:
-kubectl delete clusterrolebinding [name]
-
-### 5.1.2 Minimize access to secrets (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove get, list and watch access to Secret objects in the cluster.
-
-### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible replace any use of wildcards in clusterroles and roles with specific
-objects or actions.
-
-### 5.1.4 Minimize access to create pods (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove create access to pod objects in the cluster.
-
-### 5.1.5 Ensure that default service accounts are not actively used. (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Create explicit service accounts wherever a Kubernetes workload requires specific access
-to the Kubernetes API server.
-Modify the configuration of each default service account to include this value
-automountServiceAccountToken: false
-
-**Audit Script:** `check_for_default_sa.sh`
-
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l)
-if [[ ${count_sa} -gt 0 ]]; then
- echo "false"
- exit
-fi
-
-for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name")
-do
- for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[].kind=="ServiceAccount" and .subjects[].name=="default") or (.subjects[].kind=="Group" and .subjects[].name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"')
- do
- read kind name <<<$(IFS=","; echo $result)
- resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[] != "podsecuritypolicies")' | wc -l)
- if [[ ${resource_count} -gt 0 ]]; then
- echo "false"
- exit
- fi
- done
-done
-
-
-echo "true"
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_default_sa.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-Error from server (Forbidden): serviceaccounts is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "serviceaccounts" in API group "" at the cluster scope Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-fleet-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-impersonation-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cis-operator-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "default" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "ingress-nginx" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-node-lease" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-public" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "local" true
-```
-
-### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Modify the definition of pods and service accounts which do not need to mount service
-account tokens to disable it.
-
-### 5.1.7 Avoid use of system:masters group (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Remove the system:masters group from all users in the cluster.
-
-### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove the impersonate, bind and escalate rights from subjects.
-
-## 5.2 Pod Security Standards
-### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that either Pod Security Admission or an external policy control system is in place
-for every namespace which contains user workloads.
-
-### 5.2.2 Minimize the admission of privileged containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of privileged containers.
-
-### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostPID` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostIPC` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostNetwork` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.7 Minimize the admission of root containers (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
-or `MustRunAs` with the range of UIDs not including 0, is set.
-
-### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with the `NET_RAW` capability.
-
-### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that `allowedCapabilities` is not present in policies for the cluster unless
-it is set to an empty array.
-
-### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the use of capabilities in applications running on your cluster. Where a namespace
-contains applications which do not require any Linux capabilities to operate, consider adding
-a PSP which forbids the admission of containers which do not drop all capabilities.
-
-### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
-
-### 5.2.12 Minimize the admission of HostPath volumes (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `hostPath` volumes.
-
-### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers which use `hostPort` sections.
-
-## 5.3 Network Policies and CNI
-### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If the CNI plugin in use does not support network policies, consideration should be given to
-making use of a different plugin, or finding an alternate mechanism for restricting traffic
-in the Kubernetes cluster.
-
-### 5.3.2 Ensure that all Namespaces have Network Policies defined (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create NetworkPolicy objects as you need them.
-
-**Audit Script:** `check_for_network_policies.sh`
-
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-for namespace in $(kubectl get namespaces --all-namespaces -o json | jq -r '.items[].metadata.name'); do
- policy_count=$(kubectl get networkpolicy -n ${namespace} -o json | jq '.items | length')
- if [[ ${policy_count} -eq 0 ]]; then
- echo "false"
- exit
- fi
-done
-
-echo "true"
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_network_policies.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is present
-```
-
-**Returned Value**:
-
-```console
-true
-```
-
-## 5.4 Secrets Management
-### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If possible, rewrite application code to read Secrets from mounted secret files, rather than
-from environment variables.
-
-### 5.4.2 Consider external secret storage (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Refer to the Secrets management options offered by your cloud provider or a third-party
-secrets management solution.
-
-## 5.5 Extensible Admission Control
-### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and setup image provenance.
-
-## 5.7 General Policies
-### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create namespaces for objects in your deployment as you need
-them.
-
-### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
-An example is shown below:
-securityContext:
-  seccompProfile:
-    type: RuntimeDefault
-
-### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
-suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
-Containers.
-
-### 5.7.4 The default namespace should not be used (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
-resources and that all new resources are created in a specific namespace.
-
-**Audit Script:** `check_for_default_ns.sh`
-
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-count=$(kubectl get all -n default -o json | jq .items[] | jq -r 'select((.metadata.name!="kubernetes"))' | jq .metadata.name | wc -l)
-if [[ ${count} -gt 0 ]]; then
- echo "false"
- exit
-fi
-
-echo "true"
-
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_default_ns.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-Error from server (Forbidden): replicationcontrollers is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "replicationcontrollers" in API group "" in the namespace "default" Error from server (Forbidden): services is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "services" in API group "" in the namespace "default" Error from server (Forbidden): daemonsets.apps is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "daemonsets" in API group "apps" in the namespace "default" Error from server (Forbidden): deployments.apps is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "deployments" in API group "apps" in the namespace "default" Error from server (Forbidden): replicasets.apps is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "replicasets" in API group "apps" in the namespace "default" Error from server (Forbidden): statefulsets.apps is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "statefulsets" in API group "apps" in the namespace "default" Error from server (Forbidden): horizontalpodautoscalers.autoscaling is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "default" Error from server (Forbidden): cronjobs.batch is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "cronjobs" in API group "batch" in the namespace "default" Error from server (Forbidden): jobs.batch is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "jobs" in API group "batch" in the namespace "default" true
-```
-
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-hardening-guide.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-hardening-guide.md
new file mode 100644
index 00000000000..431d87bb1ed
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-hardening-guide.md
@@ -0,0 +1,273 @@
+---
+title: RKE2 加固指南
+---
+
+
+
+
+
+本文档提供了针对生产环境的 RKE2 集群进行加固的具体指导,以便在使用 Rancher 部署之前进行配置。它概述了满足互联网安全中心(Center for Internet Security,CIS)Kubernetes Benchmark controls 所需的配置和控制。
+
+:::note
+这份加固指南描述了如何确保你集群中的节点安全。我们建议你在安装 Kubernetes 之前遵循本指南。
+:::
+
+此加固指南适用于 RKE2 集群,并与以下版本的 CIS Kubernetes Benchmark、Kubernetes 和 Rancher 相关联:
+
+| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 |
+|-----------------|-----------------------|------------------------------|
+| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 |
+| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 |
+| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 至 v1.26 |
+
+:::note
+- 在 Benchmark v1.24 及更高版本中,由于新的文件权限要求(600 而不是 644),一些检查 ID 可能会失败。受影响的检查 ID 包括:`1.1.1`, `1.1.3`, `1.1.5`, `1.1.7`, `1.1.13`, `1.1.15`, `1.1.17`, `4.1.3`, `4.1.5` 和 `4.1.9`。
+- 在 Benchmark v1.7 中,不再需要 `--protect-kernel-defaults` (4.2.6) 参数,并已被 CIS 删除。
+:::
+
+有关如何评估加固的 RKE2 集群与官方 CIS benchmark 的更多细节,请参考特定 Kubernetes 和 CIS benchmark 版本的 RKE2 自我评估指南。
+
+RKE2 在不需要修改的情况下通过了许多 Kubernetes CIS controls,因为它默认应用了几个安全缓解措施。然而,有一些值得注意的例外情况,需要手动干预才能完全符合 CIS Benchmark 要求:
+
+1. RKE2 不会修改主机操作系统。因此,作为运维人员,你必须进行一些主机级别的修改。
+2. 某些 CIS controls 对于网络策略和 Pod 安全标准(或 RKE2 v1.25 之前的 Pod 安全策略 (PSP))将限制集群的功能。你必须选择让 RKE2 为你配置这些功能。为了确保满足这些要求,可以启动 RKE2 并设置 profile 标志为 `cis-1.23`(适用于 v1.25 及更新版本)或 `cis-1.6`(适用于 v1.24 及更早版本)。
+
+## 主机级别要求
+
+主机级要求有两个方面:内核参数和 etcd 进程/目录配置。这些在本节中进行了概述。
+
+### 确保 `protect-kernel-defaults` 已经设置
+
+
+
+
+自 CIS benchmark v1.7 开始,不再需要 `protect-kernel-defaults`。
+
+
+
+
+这是一个 kubelet 标志,如果所需的内核参数未设置或设置为与 kubelet 的默认值不同的值,将导致 kubelet 退出。
+
+可以在 Rancher 的集群配置中设置 `protect-kernel-defaults` 标志。
+
+```yaml
+spec:
+ rkeConfig:
+ machineSelectorConfig:
+ - config:
+ protect-kernel-defaults: true
+```
+
+
+
+
+### 设置内核参数
+
+建议为集群中的所有节点类型设置以下 `sysctl` 配置。在 `/etc/sysctl.d/90-kubelet.conf` 中设置以下参数:
+
+```ini
+vm.panic_on_oom=0
+vm.overcommit_memory=1
+kernel.panic=10
+kernel.panic_on_oops=1
+```
+
+运行 `sudo sysctl -p /etc/sysctl.d/90-kubelet.conf` 以启用设置。
+
+### 确保 etcd 配置正确
+
+CIS Benchmark 要求 etcd 数据目录由 `etcd` 用户和组拥有。这意味着 etcd 进程必须以主机级别的 `etcd` 用户身份运行。为了实现这一点,在使用有效的 `cis-1.xx` 配置文件启动 RKE2 时,RKE2 会采取以下几个步骤:
+
+1. 检查主机上是否存在 `etcd` 用户和组。如果不存在,则显示错误并退出。
+2. 使用 `etcd` 作为用户和组所有者创建 etcd 的数据目录。
+3. 通过适当设置 etcd 静态 Pod 的 `SecurityContext`,确保 etcd 进程以 `etcd` 用户和组的身份运行。
+
+为满足上述要求,你必须执行以下操作:
+
+#### 创建 etcd 用户
+
+在某些 Linux 发行版中,`useradd` 命令不会创建组。下面包含了 `-U` 标志来解决这个问题。这个标志告诉 `useradd` 创建一个与用户同名的组。
+
+```bash
+sudo useradd -r -c "etcd user" -s /sbin/nologin -M etcd -U
+```
+
+## Kubernetes 运行时要求
+
+通过 CIS Benchmark 测试的运行时要求主要集中在 Pod 安全、网络策略和内核参数上。当使用有效的 `cis-1.xx` 配置文件时,RKE2 会自动处理其中的大部分,但仍需要运维人员进行一些额外的干预。本节概述了这些内容。
+
+### Pod 安全
+
+RKE2 始终以一定程度的 Pod 安全性运行。
+
+
+
+
+在 v1.25 及更高版本中,[Pod 安全准入(PSAs)](https://kubernetes.io/docs/concepts/security/pod-security-admission/)用于保证 pod 安全。
+
+以下是确保加固的 RKE2 集群通过 Rancher 中提供的 CIS v1.7 加固配置文件 `rke2-cis-1.7-hardened` 所需的最低配置。
+
+```yaml
+spec:
+ defaultPodSecurityAdmissionConfigurationTemplateName: rancher-restricted
+ rkeConfig:
+ machineSelectorConfig:
+ - config:
+ profile: cis-1.23
+```
+
+当同时设置了 `defaultPodSecurityAdmissionConfigurationTemplateName` 和 `profile` 标志时,Rancher 和 RKE2 会执行以下操作:
+
+1. 检查是否已满足主机级要求。如果未满足,RKE2 将以致命错误退出,并描述未满足的要求。
+2. 应用网络策略,以确保集群通过相关的 controls 检查。
+3. 配置 Pod 安全准入控制器(PSA)使用 PSA 配置模板 `rancher-restricted`,以在所有命名空间中强制执行受限模式,除了模板豁免列表中的命名空间。
+ 这些命名空间被豁免,以允许系统 Pod 在没有限制的情况下运行,这是集群正常运行所必需的。
+
+:::note
+如果你打算将一个 RKE2 集群导入到 Rancher 中,请参考[文档](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md)了解如何配置 PSA 以豁免 Rancher system 命名空间。
+:::
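+
+作为参考,PSA 配置模板底层对应的是 Kubernetes 的 `AdmissionConfiguration`。下面是一个假设性的最小示例(并非 Rancher 生成的确切内容,豁免的命名空间列表仅为示意),演示如何在强制执行 `restricted` 模式的同时豁免系统命名空间:
+
+```yaml
+apiVersion: apiserver.config.k8s.io/v1
+kind: AdmissionConfiguration
+plugins:
+- name: PodSecurity
+  configuration:
+    apiVersion: pod-security.admission.config.k8s.io/v1
+    kind: PodSecurityConfiguration
+    defaults:
+      enforce: restricted      # 在所有命名空间强制执行受限模式
+      enforce-version: latest
+    exemptions:
+      namespaces:              # 豁免的系统命名空间(示例)
+      - kube-system
+      - cattle-system
+```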
+
+
+
+
+
+在 Kubernetes v1.24 及更早版本中,`PodSecurityPolicy` 准入控制器始终是启用的。
+
+以下是确保 RKE2 加固以通过 Rancher 中提供的 CIS v1.23 加固配置文件 `rke2-cis-1.23-hardened` 所需的最低配置。
+
+:::note
+在下面的示例中,配置文件(profile)设置为 `cis-1.6`,这是在上游 RKE2 中定义的值,但集群实际上被配置为通过 CIS v1.23 加固配置文件的检查。
+:::
+
+```yaml
+spec:
+ defaultPodSecurityPolicyTemplateName: restricted-noroot
+ rkeConfig:
+ machineSelectorConfig:
+ - config:
+ profile: cis-1.6
+```
+
+
+当同时设置了 `defaultPodSecurityPolicyTemplateName` 和 `profile` 标志时,Rancher 和 RKE2 会执行以下操作:
+
+1. 检查是否已满足主机级要求。如果未满足,RKE2 将以致命错误退出,并描述未满足的要求。
+2. 应用网络策略,以确保集群通过相关的 controls 要求。
+3. 配置运行时 Pod 安全策略,以确保集群通过相关的 controls 要求。
+
+
+
+
+:::note
+Kubernetes control plane 组件以及关键的附加组件(如 CNI、DNS 和 Ingress)都作为 `kube-system` 命名空间中的 Pod 运行。因此,此命名空间的限制策略相对宽松,从而使这些组件能够正常运行。
+:::
+
+### 网络策略
+
+当使用有效的 `cis-1.xx` 配置文件运行时,RKE2 将设置适当的 `NetworkPolicies`,以满足 Kubernetes 内置命名空间的 CIS Benchmark 要求。这些命名空间包括:`kube-system`、`kube-public`、`kube-node-lease` 和 `default`。
+
+所使用的 `NetworkPolicy` 仅允许同一命名空间内的 Pod 相互通信。值得注意的例外是允许 DNS 请求进行解析。
+
+:::note
+运维人员必须像管理其他命名空间一样管理额外创建的命名空间的网络策略。
+:::
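+
+例如,下面是一个假设性的 `NetworkPolicy`(其中 `my-namespace` 为示例值,并非 RKE2 自动生成的策略),它对运维人员自建的命名空间实现类似效果,即仅允许同一命名空间内的 Pod 相互访问:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-network-policy
+  namespace: my-namespace   # 示例命名空间,请替换为实际值
+spec:
+  podSelector: {}           # 选中命名空间内的所有 Pod
+  ingress:
+  - from:
+    - podSelector: {}       # 仅允许来自同一命名空间的流量
+```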
+
+### 配置 `default` service account
+
+**将 `default` service account 的 `automountServiceAccountToken` 设置为 `false`**
+
+Kubernetes 提供了一个 `default` service account,用于集群工作负载,在 pod 没有分配特定 service account 时使用。如果需要从 pod 访问 Kubernetes API,则应为该 pod 创建一个特定的 service account,并授予该 service account 权限。`default` service account 应配置为不提供 service account 令牌,并且不具有任何明确的权限分配。
+
+对于标准的 RKE2 安装中的每个命名空间,包括 `default` 和 `kube-system`,`default` service account 必须包含此值:
+
+```yaml
+automountServiceAccountToken: false
+```
+
+对于由集群操作员创建的命名空间,可以使用以下脚本和配置文件来配置 `default` service account。
+
+以下配置必须保存到一个名为 `account_update.yaml` 的文件中。
+
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: default
+automountServiceAccountToken: false
+```
+
+创建一个名为 `account_update.sh` 的 bash 脚本文件。确保运行 `sudo chmod +x account_update.sh` 命令,以便脚本具有执行权限。
+
+```bash
+#!/bin/bash -e
+
+for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
+ echo -n "Patching namespace $namespace - "
+ kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
+done
+```
+
+执行此脚本以将 `account_update.yaml` 配置应用到所有命名空间中的 `default` service account。
+
+### API server 审计配置
+
+CIS 要求 1.2.19 至 1.2.22 与为 API server 配置审计日志有关。当 RKE2 在设置配置文件标志的情况下启动时,它将自动在 API server 中配置加固的 `--audit-log-` 参数以通过这些 CIS 检查。
+
+RKE2 的默认审计策略被配置为不记录 API server 中的请求。这样做是为了让集群运维人员能够灵活地定制符合自身审计要求的策略,因为审计策略因各自的环境和合规要求而异。
+
+当使用 `profile` 标志启动 RKE2 时,RKE2 会创建一个默认的审计策略。该策略定义在 `/etc/rancher/rke2/audit-policy.yaml` 中。
+
+```yaml
+apiVersion: audit.k8s.io/v1
+kind: Policy
+metadata:
+ creationTimestamp: null
+rules:
+- level: None
+```
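+
+如果你需要记录请求,可以用自定义策略替换该文件。下面是一个假设性的示例(具体规则应根据你自己的审计要求调整),它在 `Metadata` 级别记录所有请求:
+
+```yaml
+apiVersion: audit.k8s.io/v1
+kind: Policy
+rules:
+# 在 Metadata 级别记录所有请求(仅记录请求元数据,不记录请求/响应正文)
+- level: Metadata
+```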
+
+## 加固的 RKE2 模板配置参考
+
+参考模板配置用于在 Rancher 中创建加固的 RKE2 自定义集群。该参考不包括其他必需的**集群配置**指令,这些指令会根据你的环境而有所不同。
+
+
+
+
+```yaml
+apiVersion: provisioning.cattle.io/v1
+kind: Cluster
+metadata:
+ name: # 定义集群名称
+spec:
+ defaultPodSecurityAdmissionConfigurationTemplateName: rancher-restricted
+ kubernetesVersion: # 定义 RKE2 版本
+ rkeConfig:
+ machineSelectorConfig:
+ - config:
+ profile: cis-1.23
+```
+
+
+
+
+```yaml
+apiVersion: provisioning.cattle.io/v1
+kind: Cluster
+metadata:
+ name: # 定义集群名称
+spec:
+ defaultPodSecurityPolicyTemplateName: restricted-noroot
+ kubernetesVersion: # 定义 RKE2 版本
+ rkeConfig:
+ machineSelectorConfig:
+ - config:
+ profile: cis-1.6
+ protect-kernel-defaults: true
+```
+
+
+
+
+## 结论
+
+如果你按照本指南操作,由 Rancher 部署的 RKE2 自定义集群将被配置为通过 CIS Kubernetes Benchmark。你可以查看我们的 RKE2 自我评估指南,了解我们是如何验证每个 benchmark 检查项的,并在你自己的集群上执行相同的验证。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md
index 83cac7d6e59..05ec2b4aef5 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md
@@ -1,30 +1,34 @@
---
-title: RKE2 Self-Assessment Guide - CIS Benchmark v1.23 - K8s v1.23
+title: RKE2 自我评估指南 - CIS Benchmark v1.23 - K8s v1.23
---
-This document is a companion to the [RKE2 Hardening Guide](../../../../pages-for-subheaders/rke2-hardening-guide.md), which provides prescriptive guidance on how to harden RKE2 clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
+
+
+
-This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
+本文档是 [RKE2 加固指南](rke2-hardening-guide.md)的配套文档,该指南提供了关于如何加固正在生产环境中运行并由 Rancher 管理的 RKE2 集群的指导方针。本 benchmark 指南可帮助你根据 CIS Kubernetes Benchmark 中的每个 control 来评估加固集群的安全性。
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
+本指南对应以下版本的 Rancher、CIS Benchmarks 和 Kubernetes:
+
+| Rancher 版本 | CIS Benchmark 版本 | Kubernetes 版本 |
|-----------------|-----------------------|--------------------|
| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 |
-This guide walks through the various controls and provide updated example commands to audit compliance in Rancher created clusters. Because Rancher and RKE2 install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks will return a result of `Not Applicable`.
+本指南将介绍各种 controls,并提供更新的示例命令来审计 Rancher 创建的集群中的合规性。由于 Rancher 和 RKE2 将 Kubernetes 服务安装为 Docker 容器,因此 CIS Kubernetes Benchmark 中的许多 control 验证检查不适用。这些检查将返回 `Not Applicable` 的结果。
-This document is for Rancher operators, security teams, auditors and decision makers.
+本文档适用于 Rancher 运维人员、安全团队、审计员和决策者。
-For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.23. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
+有关每个 control 的更多信息,包括详细描述和未通过测试的补救措施,请参考 CIS Kubernetes Benchmark v1.23 的相应部分。你可以在[互联网安全中心 (CIS)](https://www.cisecurity.org/benchmark/kubernetes/)创建免费账户后下载 benchmark。
-## Testing Methodology
+## 测试方法
-RKE2 launches control plane components as static pods, managed by the kubelet, and uses containerd as the container runtime. Configuration is defined by arguments passed to the container at the time of initialization or via configuration file.
+RKE2 将 control plane 组件作为静态 Pod 启动,由 kubelet 管理,并使用 containerd 作为容器运行时。配置是由初始化时或通过配置文件传递给容器的参数定义的。
-Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE2 nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in the testing and evaluation of test results.
+在 control 审计与原始 CIS benchmark 不同时,提供了针对 Rancher 的特定审计命令以进行测试。在执行测试时,你将需要访问所有 RKE2 节点主机上的命令行。这些命令还使用了 [kubectl](https://kubernetes.io/docs/tasks/tools/)(带有有效的配置文件)和 [jq](https://stedolan.github.io/jq/) 工具,在测试和评估测试结果时这些工具是必需的。
:::note
-This guide only covers `automated` (previously called `scored`) tests.
+本指南仅涵盖 `automated`(之前称为 `scored`)测试。
:::
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.24.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.24.md
deleted file mode 100644
index ffb7755d7ad..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.24.md
+++ /dev/null
@@ -1,3196 +0,0 @@
----
-title: RKE2 Self-Assessment Guide - CIS Benchmark v1.23 - K8s v1.24
----
-
-This document is a companion to the [RKE2 Hardening Guide](../../../../pages-for-subheaders/rke2-hardening-guide.md), which provides prescriptive guidance on how to harden RKE2 clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
-
-This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
-
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
-|-----------------|-----------------------|--------------------|
-| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.24 |
-
-This guide walks through the various controls and provide updated example commands to audit compliance in Rancher created clusters. Because Rancher and RKE2 install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks will return a result of `Not Applicable`.
-
-This document is for Rancher operators, security teams, auditors and decision makers.
-
-For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.23. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
-
-## Testing Methodology
-
-RKE2 launches control plane components as static pods, managed by the kubelet, and uses containerd as the container runtime. Configuration is defined by arguments passed to the container at the time of initialization or via configuration file.
-
-Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE2 nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in the testing and evaluation of test results.
-
-:::note
-
-This guide only covers `automated` (previously called `scored`) tests.
-
-:::
-
-### Controls
-
-## 1.1 Master Node Configuration Files
-### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the
-control plane node.
-For example, chmod 644 /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644 /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; then stat -c %a /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'644' is equal to '644'
-```
-
-**Returned Value**:
-
-```console
-644
-```
-
-### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644 /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'permissions' is equal to '644'
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'644' is equal to '644'
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644
-
-**Audit:**
-
-```bash
-ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600 permissions=644
-```
-
-### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root
-
-**Audit:**
-
-```bash
-ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root root:root
-```
-
-### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above). For example,
-chmod 700 /var/lib/etcd
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/db/etcd
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 700, expected 700 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=700
-```
-
-### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above).
-For example, chown etcd:etcd /var/lib/etcd
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/rke2/server/db/etcd
-```
-
-**Expected Result**:
-
-```console
-'etcd:etcd' is present
-```
-
-**Returned Value**:
-
-```console
-etcd:etcd
-```
-
-### 1.1.13 Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 /etc/kubernetes/admin.conf
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/cred/admin.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/admin.conf
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/rke2/server/cred/admin.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 scheduler
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root scheduler
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 controllermanager
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/cred/controller.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root /var/lib/rancher/rke2/server/cred/controller.kubeconfig
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/rke2/server/cred/controller.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown -R root:root /var/lib/rancher/rke2/server/tls/
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/rke2/server/tls
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 644 /var/lib/rancher/rke2/server/tls/*.crt
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/tls/*.crt
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644
-```
-
-### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 600 /var/lib/rancher/rke2/server/tls/*.key
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/tls/*.key
-```
-
-**Expected Result**:
-
-```console
-'permissions' is equal to '600'
-```
-
-**Returned Value**:
-
-```console
-permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600
-```
-
-## 1.2 API Server
-### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---anonymous-auth=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
---service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
-root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and configure alternate mechanisms for authentication. Then,
-edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and remove the --token-auth-file= parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--token-auth-file' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
---service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
-root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
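As an illustrative aside (not part of the generated benchmark output), the flag-absence test the audit performs can be sketched as a small self-contained shell function. The `cmdline` string below is a shortened, hypothetical stand-in for the real kube-apiserver process arguments:

```shell
# Hypothetical, shortened stand-in for the kube-apiserver command line.
cmdline='kube-apiserver --anonymous-auth=false --authorization-mode=Node,RBAC --profiling=false'

# Print PASS when the given flag does not appear anywhere in the command line.
check_flag_absent() {
  case "$1" in
    *"$2"*) echo "FAIL: $2 is set" ;;
    *)      echo "PASS: $2 is not present" ;;
  esac
}

check_flag_absent "$cmdline" '--token-auth-file'   # prints "PASS: --token-auth-file is not present"
```

A substring match is enough here because the benchmark only asserts that the flag never occurs, regardless of its value.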
-
-### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and remove `DenyServiceExternalIPs` from
-the enabled admission plugins.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
---service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
-root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and remove the --kubelet-https parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-https' is present OR '--kubelet-https' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
---service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
-root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the
-apiserver and kubelets. Then, edit the API server pod specification file
-/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml on the control plane node and set the
-kubelet client certificate and key parameters as below.
---kubelet-client-certificate=
---kubelet-client-key=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
---service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
-root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between
-the apiserver and kubelets. Then, edit the API server pod specification file
-/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml on the control plane node and set the
---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
---kubelet-certificate-authority=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-certificate-authority' is present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
---service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
-root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
-One such example is shown below.
---authorization-mode=RBAC
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
---service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
-root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
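Unlike the presence/absence checks earlier, checks 1.2.7 and 1.2.8 inspect the value of `--authorization-mode`. As an illustrative aside (not part of the generated benchmark output), a sketch of that value test, again using a shortened hypothetical command line:

```shell
# Hypothetical, shortened stand-in for the kube-apiserver command line.
cmdline='kube-apiserver --authorization-mode=Node,RBAC --profiling=false'

# Extract the comma-separated value of --authorization-mode.
mode=$(printf '%s\n' "$cmdline" | tr ' ' '\n' | sed -n 's/^--authorization-mode=//p')

# 1.2.7: the mode list must not contain AlwaysAllow.
case ",$mode," in
  *,AlwaysAllow,*) echo "FAIL: AlwaysAllow is enabled" ;;
  *)               echo "PASS: AlwaysAllow is not enabled" ;;
esac

# 1.2.8: the mode list must include Node.
case ",$mode," in
  *,Node,*) echo "PASS: Node authorization is enabled" ;;
  *)        echo "FAIL: Node authorization is missing" ;;
esac
```

Wrapping the value in commas before matching avoids false positives on mode names that are substrings of one another.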
-
-### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes Node.
---authorization-mode=Node,RBAC
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'Node'
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
-for example `--authorization-mode=Node,RBAC`.
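-
-On RKE2, kube-apiserver flags are typically supplied through the server configuration file rather than by editing the static pod manifest directly. A minimal sketch (the file path is the RKE2 default location; the value shown is also RKE2's default):
-
-```yaml
-# /etc/rancher/rke2/config.yaml -- illustrative example
-kube-apiserver-arg:
-  - "authorization-mode=Node,RBAC"
-```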
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'RBAC'
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and set the desired limits in a configuration file.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-and set the following parameters:
---enable-admission-plugins=...,EventRateLimit,...
---admission-control-config-file=
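-
-A minimal sketch of the two configuration files involved, following the upstream Kubernetes admission configuration format (file names, paths, and the qps/burst limits below are illustrative and should be tuned for your cluster):
-
-```yaml
-# admission-config.yaml (passed via --admission-control-config-file)
-apiVersion: apiserver.config.k8s.io/v1
-kind: AdmissionConfiguration
-plugins:
-  - name: EventRateLimit
-    path: eventconfig.yaml
----
-# eventconfig.yaml (separate file, referenced above) -- example limits
-apiVersion: eventratelimit.admission.k8s.io/v1alpha1
-kind: Configuration
-limits:
-  - type: Server
-    qps: 5000
-    burst: 20000
-```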
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'EventRateLimit'
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
-value that does not include AlwaysAdmit.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-AlwaysPullImages.
---enable-admission-plugins=...,AlwaysPullImages,...
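-
-As a sketch, on RKE2 the flag can also be passed through the server configuration file instead of the pod manifest (the plugin list below is illustrative; retain the plugins your cluster already enables):
-
-```yaml
-# /etc/rancher/rke2/config.yaml -- illustrative example
-kube-apiserver-arg:
-  - "enable-admission-plugins=NodeRestriction,PodSecurityPolicy,AlwaysPullImages"
-```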
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'AlwaysPullImages'
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-SecurityContextDeny, unless PodSecurityPolicy is already in place.
---enable-admission-plugins=...,SecurityContextDeny,...
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create ServiceAccount objects as per your environment.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
-value that does not include ServiceAccount.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --disable-admission-plugins parameter to
-ensure it does not include NamespaceLifecycle.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.16 Ensure that the admission control plugin NodeRestriction is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure the NodeRestriction plug-in on kubelets.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to a
-value that includes NodeRestriction.
---enable-admission-plugins=...,NodeRestriction,...
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'NodeRestriction'
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.17 Ensure that the --secure-port argument is not set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and either remove the --secure-port parameter or
-set it to a different (non-zero) desired port.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--secure-port' is greater than 0 OR '--secure-port' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.18 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the parameter below.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.19 Ensure that the --audit-log-path argument is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-path parameter to a suitable path and
-file where you would like audit logs to be written, for example,
---audit-log-path=/var/log/apiserver/audit.log
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-path' is present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.20 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxage parameter to 30
-or an appropriate number of days, for example,
---audit-log-maxage=30
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxage' is greater or equal to 30
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:34 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.21 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxbackup parameter to 10 or an appropriate
-value. For example,
---audit-log-maxbackup=10
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxbackup' is greater or equal to 10
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:35 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.22 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
-For example, to set it as 100 MB, --audit-log-maxsize=100
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
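The audit command above dumps the full kube-apiserver command line; the numeric comparison the benchmark then performs can be sketched in shell. The helper name and the sample command line below are hypothetical — the real check parses the live `ps` output:

```shell
# Hypothetical helper: pull the --audit-log-maxsize value out of a
# kube-apiserver command line and check it against the 100 MB floor.
check_audit_log_maxsize() {
  size=$(printf '%s\n' "$1" | grep -o -- '--audit-log-maxsize=[0-9]*' | cut -d= -f2)
  # Fail when the flag is absent or below the benchmark threshold.
  [ -n "$size" ] && [ "$size" -ge 100 ]
}

# Sample command line standing in for the real `ps` output.
cmdline='kube-apiserver --audit-log-maxsize=100 --profiling=false'
check_audit_log_maxsize "$cmdline" && echo pass || echo fail
```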
-
-**Expected Result**:
-
-```console
-'--audit-log-maxsize' is greater or equal to 100
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:35 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.24 Ensure that the --service-account-lookup argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---service-account-lookup=true
-Alternatively, you can delete the --service-account-lookup parameter from this file so
-that the default takes effect.
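The pass condition implied by this remediation — the flag may be absent (the kube-apiserver default is `true`) or explicitly set to `true` — can be sketched as follows; the helper name and sample command lines are hypothetical:

```shell
# Hypothetical check mirroring the remediation: --service-account-lookup
# may be absent (the kube-apiserver default is true) or set to true.
sa_lookup_ok() {
  val=$(printf '%s\n' "$1" | grep -o -- '--service-account-lookup=[a-z]*' | cut -d= -f2)
  [ -z "$val" ] || [ "$val" = "true" ]
}

sa_lookup_ok 'kube-apiserver --anonymous-auth=false' && echo pass  # flag absent: default applies
sa_lookup_ok 'kube-apiserver --service-account-lookup=false' || echo fail
```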
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-lookup' is not present OR '--service-account-lookup' is present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:35 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.25 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --service-account-key-file parameter
-to the public key file for service accounts. For example,
---service-account-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:35 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.26 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate and key file parameters.
---etcd-certfile=
---etcd-keyfile=
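On RKE2 the static pod manifest is regenerated by the server process, so rather than editing `kube-apiserver.yaml` directly, flags like these are normally supplied through the server configuration file. A sketch, assuming the default RKE2 paths shown in the audit output below (RKE2 already sets these flags out of the box; this only illustrates pinning them explicitly):

```yaml
# Sketch of /etc/rancher/rke2/config.yaml -- extra kube-apiserver flags.
kube-apiserver-arg:
  - "etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt"
  - "etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key"
```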
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-certfile' is present AND '--etcd-keyfile' is present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:35 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.27 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the TLS certificate and private key file parameters.
---tls-cert-file=
---tls-private-key-file=
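Before restarting the server with new TLS flags, it is worth confirming the referenced files actually exist. A minimal sketch — the helper is hypothetical, and the paths are the default RKE2 locations from the audit output, not values you must use:

```shell
# Hypothetical helper: verify every path passed in exists as a regular file,
# e.g. the pair referenced by --tls-cert-file / --tls-private-key-file.
files_exist() {
  for f in "$@"; do
    [ -f "$f" ] || { echo "missing: $f" >&2; return 1; }
  done
}

# Example with the default RKE2 serving cert/key locations.
files_exist /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt \
            /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key \
  || echo "fix the paths before restarting the server"
```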
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:35 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.28 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the client certificate authority file.
---client-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:35 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle 
--feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.29 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate authority file parameter.
---etcd-cafile=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-cafile' is present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:35 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
---service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle
---feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.30 Ensure that the --encryption-provider-config argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --encryption-provider-config parameter to the path of that file.
-For example, --encryption-provider-config=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--encryption-provider-config' is present
-```
-
-**Returned Value**:
-
-```console
-root 3484 3419 16 23:45 ? 00:01:35 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
---service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle
---feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
-TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
-TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
-TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
-
-### 1.2.33 Ensure that encryption providers are appropriately configured (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-In this file, choose aescbc, kms or secretbox as the encryption provider.
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if grep aescbc /var/lib/rancher/rke2/server/cred/encryption-config.json; then echo 0; fi'
-```
-
-**Expected Result**:
-
-```console
-'0' is present
-```
-
-**Returned Value**:
-
-```console
-{"kind":"EncryptionConfiguration","apiVersion":"apiserver.config.k8s.io/v1","resources":[{"resources":["secrets"],"providers":[{"aescbc":{"keys":[{"name":"aescbckey","secret":"LdLeqCPN/HfmgnUBXVPkDtUfeOPHwcQzDiLYG3nzFI4="}]}},{"identity":{}}]}]} 0
-```
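-
-The audit's pass condition can be reproduced against a sample file. This is a minimal sketch: the JSON below only mimics the shape of the generated `encryption-config.json`, and the path and key material are placeholders rather than values from a real cluster.
-
-```bash
-# Write a sample EncryptionConfiguration (the secret is a dummy placeholder).
-cat > /tmp/encryption-config-demo.json <<'EOF'
-{"kind":"EncryptionConfiguration","apiVersion":"apiserver.config.k8s.io/v1","resources":[{"resources":["secrets"],"providers":[{"aescbc":{"keys":[{"name":"aescbckey","secret":"PLACEHOLDER="}]}},{"identity":{}}]}]}
-EOF
-# Same test as the audit: print 0 when an aescbc provider is configured.
-/bin/sh -c 'if grep -q aescbc /tmp/encryption-config-demo.json; then echo 0; fi'
-```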
-
-## 1.3 Controller Manager
-### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
-for example, --terminated-pod-gc-threshold=10
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--terminated-pod-gc-threshold' is present
-```
-
-**Returned Value**:
-
-```console
-root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
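-
-The expected result only requires the flag to be present; to also inspect its value, the flag can be extracted from the process line with `grep -o`. A small sketch, using an abridged sample line rather than live `ps` output:
-
-```bash
-# Abridged stand-in for the output of:
-#   /bin/ps -ef | grep kube-controller-manager | grep -v grep
-line='kube-controller-manager --terminated-pod-gc-threshold=1000 --profiling=false'
-# Pull out the configured threshold value.
-printf '%s\n' "$line" | grep -o -- '--terminated-pod-gc-threshold=[0-9]*' | cut -d= -f2
-```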
-
-### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node to set the below parameter.
---use-service-account-credentials=true
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--use-service-account-credentials' is not equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and set the --service-account-private-key-file parameter
-to the private key file for service accounts.
---service-account-private-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
---root-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--root-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
---feature-gates=RotateKubeletServerCertificate=true
-
-### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3645 3538 1 23:45 ? 00:00:10 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-## 1.4 Scheduler
-### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-scheduler | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 3652 3523 0 23:45 ? 00:00:03 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
-```
-
-### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-scheduler | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3652 3523 0 23:45 ? 00:00:03 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
-```
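-
-The expected result has two passing branches: the flag equals `127.0.0.1`, or it is absent and the secure default applies. That logic can be sketched as follows (the sample line is illustrative, not live `ps` output):
-
-```bash
-# Abridged stand-in for the kube-scheduler process line from the audit.
-line='kube-scheduler --bind-address=127.0.0.1 --profiling=false'
-addr=$(printf '%s\n' "$line" | grep -o -- '--bind-address=[^ ]*' | cut -d= -f2)
-# Pass when the flag is missing entirely or bound to loopback only.
-if [ -z "$addr" ] || [ "$addr" = "127.0.0.1" ]; then echo pass; fi
-```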
-
-## 2 Etcd Node Configuration
-### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Follow the etcd service documentation and configure TLS encryption.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
-on the master node and set the below parameters.
---cert-file=
---key-file=
-
-### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
-node and set the below parameter.
---client-cert-auth="true"
-
-### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
-node and either remove the --auto-tls parameter or set it to false.
---auto-tls=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-31-102 ETCD_UNSUPPORTED_ARCH= FILE_HASH=755e8914c854015cfb808b8456f808644387876672add1f9e19564c58f754595 NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 POD_HASH=78088791fd2612fad13ce760ef91a0ee HOME=/
-```
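-
-RKE2 configures etcd through files and environment variables rather than command-line flags, which is why this audit inspects the process environment. The pass condition (no `ETCD_AUTO_TLS=true`) can be sketched against any captured environment listing; the one below is an abridged stand-in for the returned value above:
-
-```bash
-# Abridged stand-in for the etcd process environment.
-env_listing='PATH=/usr/local/sbin:/usr/local/bin ETCD_UNSUPPORTED_ARCH= HOME=/'
-# Pass unless automatic self-signed TLS has been explicitly enabled.
-if ! printf '%s\n' "$env_listing" | grep -q 'ETCD_AUTO_TLS=true'; then
-  echo 'auto-tls disabled'
-fi
-```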
-
-### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Follow the etcd service documentation and configure peer TLS encryption as appropriate
-for your etcd cluster.
-Then, edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the
-master node and set the below parameters.
---peer-cert-file=
---peer-key-file=
-
-### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
-node and set the below parameter.
---peer-client-cert-auth=true
-
-### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
-node and either remove the --peer-auto-tls parameter or set it to false.
---peer-auto-tls=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-31-102 ETCD_UNSUPPORTED_ARCH= FILE_HASH=755e8914c854015cfb808b8456f808644387876672add1f9e19564c58f754595 NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 POD_HASH=78088791fd2612fad13ce760ef91a0ee HOME=/
-```
-
-### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-[Manual test]
-Follow the etcd documentation and create a dedicated certificate authority setup for the
-etcd service.
-Then, edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the
-master node and set the below parameter.
---trusted-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Audit Config:**
-
-```bash
-cat /var/lib/rancher/rke2/server/db/etcd/config
-```
-
-**Expected Result**:
-
-```console
-'ETCD_TRUSTED_CA_FILE' is present OR '{.peer-transport-security.trusted-ca-file}' is equal to '/var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt'
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-31-102 ETCD_UNSUPPORTED_ARCH= FILE_HASH=755e8914c854015cfb808b8456f808644387876672add1f9e19564c58f754595 NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 POD_HASH=78088791fd2612fad13ce760ef91a0ee HOME=/
-```
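-
-The audit-config half of this check reads the etcd config file directly. A sketch against a sample fragment (a real check should read `/var/lib/rancher/rke2/server/db/etcd/config` itself; the fragment below is illustrative):
-
-```bash
-# Sample fragment mirroring the peer-transport-security stanza of the etcd config.
-cat > /tmp/etcd-config-demo <<'EOF'
-peer-transport-security:
-  trusted-ca-file: /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt
-EOF
-# Extract the trusted CA path, which the expected result compares against.
-grep 'trusted-ca-file:' /tmp/etcd-config-demo | awk '{print $2}'
-```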
-
-## 3.1 Authentication and Authorization
-### 3.1.1 Client certificate authentication should not be used for users (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
-implemented in place of client certificates.
-
-## 3.2 Logging
-### 3.2.1 Ensure that a minimal audit policy is created (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Create an audit policy file for your cluster.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep | grep -o audit-policy-file
-```
-
-**Expected Result**:
-
-```console
-'audit-policy-file' is equal to 'audit-policy-file'
-```
-
-**Returned Value**:
-
-```console
-audit-policy-file
-```
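-
-A minimal policy file of the kind this check looks for can be created as below. The rule set is illustrative only; RKE2 reads the policy from the path given by `--audit-policy-file`, shown as `/etc/rancher/rke2/audit-policy.yaml` in earlier audits.
-
-```bash
-# Write a minimal audit policy (illustrative; tune the rules for production).
-cat > /tmp/audit-policy-demo.yaml <<'EOF'
-apiVersion: audit.k8s.io/v1
-kind: Policy
-rules:
-  - level: Metadata
-EOF
-# Confirm the policy defines at least one rule.
-grep -c 'level: Metadata' /tmp/audit-policy-demo.yaml
-```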
-
-### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the audit policy provided for the cluster and ensure that it covers
-at least the following areas,
-- Access to Secrets managed by the cluster. Care should be taken to only
- log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
- order to avoid risk of logging sensitive data.
-- Modification of Pod and Deployment objects.
-- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
- For most requests, minimally logging at the Metadata level is recommended
- (the most basic level of logging).
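-
-The review points above can be expressed as concrete policy rules. The following is a hedged sketch, not an exhaustive policy:
-
-```bash
-# Write a policy covering the areas listed above (illustrative rule set).
-cat > /tmp/audit-policy-coverage-demo.yaml <<'EOF'
-apiVersion: audit.k8s.io/v1
-kind: Policy
-rules:
-  # Metadata only for Secrets/ConfigMaps/TokenReviews, to avoid logging payloads.
-  - level: Metadata
-    resources:
-      - group: ""
-        resources: ["secrets", "configmaps"]
-      - group: "authentication.k8s.io"
-        resources: ["tokenreviews"]
-  # Record use of exec, port-forward, and proxy subresources.
-  - level: Metadata
-    resources:
-      - group: ""
-        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
-EOF
-# Both coverage areas are logged at the Metadata level.
-grep -c 'level: Metadata' /tmp/audit-policy-coverage-demo.yaml
-```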
-
-## 4.1 Worker Node Configuration Files
-### 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chmod 644 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-
-### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-
-### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive OR '/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig' is not present
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
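The phrase "644 or more restrictive" in these file-permission checks means the file grants no permission bit beyond `rw-r--r--`. One way to express that test, sketched here for illustration (the mode values are hypothetical):

```shell
# A mode is "644 or more restrictive" iff it sets no bit outside 0644,
# i.e. (actual & ~0644) == 0 in octal arithmetic.
actual=0600   # hypothetical mode reported by stat
if [ $(( actual & ~0644 )) -eq 0 ]; then
  echo "pass"
else
  echo "fail"
fi
```

For example, `0600` passes (it only drops bits from `0644`), while `0664` would fail because it adds group write.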
-
-### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the command below (based on the file location on your system) on each worker node.
-For example, chown root:root /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present OR '/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig' is not present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the command below (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /var/lib/rancher/rke2/agent/kubelet.kubeconfig
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-'644' is equal to '644'
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the command below (based on the file location on your system) on each worker node.
-For example,
-chown root:root /var/lib/rancher/rke2/agent/kubelet.kubeconfig
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the file permissions of the
---client-ca-file: chmod 644
-
-**Audit Script:** `check_cafile_permissions.sh`
-
-```bash
-#!/usr/bin/env bash
-
-CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
-CAFILE=/node$CAFILE
-if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
-if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_cafile_permissions.sh
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
-
-### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the ownership of the --client-ca-file:
-chown root:root
-
-**Audit Script:** `check_cafile_ownership.sh`
-
-```bash
-#!/usr/bin/env bash
-
-CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
-CAFILE=/node$CAFILE
-if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
-if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_cafile_ownership.sh
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chmod 644 /etc/rancher/rke2/rke2.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/rancher/rke2/rke2.yaml; then stat -c permissions=%a /etc/rancher/rke2/rke2.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
-
-### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chown root:root /etc/rancher/rke2/rke2.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/rancher/rke2/rke2.yaml; then stat -c %U:%G /etc/rancher/rke2/rke2.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-## 4.2 Kubelet
-### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
-`false`.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
-`--anonymous-auth=false`
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
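The pass/fail evaluation for this check reduces to a substring match on the kubelet command line that the Audit step captures with `ps -fC kubelet`. A small illustration, using an abbreviated, hypothetical stand-in for the real process listing:

```shell
# Illustration: how '--anonymous-auth' is checked against the process
# listing. This command line is an abbreviated stand-in, not real output.
cmdline='kubelet --anonymous-auth=false --authorization-mode=Webhook --read-only-port=0'
case "$cmdline" in
  *--anonymous-auth=false*) echo "pass" ;;
  *)                        echo "fail" ;;
esac
# prints "pass"
```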
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3221 3096 3 23:44 ? 00:00:19 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-31-102 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=499dafa1-6719-45d4-a0c6-66109f61b011 --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
-using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---authorization-mode=Webhook
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3221 3096 3 23:44 ? 00:00:19 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-31-102 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=499dafa1-6719-45d4-a0c6-66109f61b011 --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
-the location of the client CA file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---client-ca-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3221 3096 3 23:44 ? 00:00:19 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-31-102 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=499dafa1-6719-45d4-a0c6-66109f61b011 --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.4 Ensure that the --read-only-port argument is set to 0 (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---read-only-port=0
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--read-only-port' is equal to '0' OR '--read-only-port' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3221 3096 3 23:44 ? 00:00:19 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-31-102 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=499dafa1-6719-45d4-a0c6-66109f61b011 --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
-value other than 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---streaming-connection-idle-timeout=5m
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{.streamingConnectionIdleTimeout}' is present OR '{.streamingConnectionIdleTimeout}' is not present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVTFNRE0zTUI0WERUSXpNREl5TmpJek5ETTFOMW9YRFRNek1ESXlNekl6TkRNMQpOMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFOVEF6TnpCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJIZXg2aVo0Q09SYmlTd1pyYmRHa2ZIaHNKb2RZaGRScDFjNklkMngKS05ITHZzZEdxeEc0V2xySVp6WTEwTDlLWDJCVk9Senp0RUZ4cDc3MTlFTnVnNDZqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCU1pFWGkwbDNsS3dXS1U3S0x6CkgwZG5aTzVEOWpBS0JnZ3Foa2pPUFFRREFnTkhBREJFQWlCK2NQUUhLdXBJc05zU3BLVEg2NHJibFpKYTIyZ0oKdlh4TmNxb0dQSEdhUlFJZ1JJdU1hcFVKOHVyVEdxblZGckFldy81aFdOZFdRUjNyN3FXQmlrYnk0aEk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRpZ0F3SUJBZ0lJUnIzVlVFbTY4Yll3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU5UQXpOekFlRncweU16QXlNall5TXpRek5UZGFGdzB5TkRBeQpNall5TXpRek5UZGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUTE5SWdKZVJyUldOVGUKZC9UUWcwNWdNcWFMSU9oUU1HSUZKRytTSFJUTVl1RkxBUHhydGozV1g3T3hIRDFDaE5nM2p6VnJKbjcwYitpUApIRzJRNS9XTW8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVViNGJ3aEFWMitabXd0MWlaN2pKMGVDTytGRWd3Q2dZSUtvWkl6ajBFQXdJRFJ3QXcKUkFJZ2ZjNHJWdWJwUWVRMmxiVUJPNkxhK0hKNWVHbUFqSTRHMTV6K1pEZmZqQ1lDSUZGVkFTd1ZzVFRsUXlvMQpicmpsdnJzVW9rRmNxN3RIYjJxZk51dTV2aTB1Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxXTnMKYVdWdWRDMWpZVUF4TmpjM05EVTFNRE0zTUI0WERUSXpNREl5TmpJek5ETTFOMW9YRFRNek1ESXlNekl6TkRNMQpOMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTFqYkdsbGJuUXRZMkZBTVRZM056UTFOVEF6TnpCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJLTzZGUG4xd3dqbWFYdzc1T1l1blI1QytMbkpiRlpNYlgyWDRVR0kKMWFKK0tXb0tMZExmRnlZbVdvbHoyV3IzUnBhNGVxRXYySTg1bndINWp1NDZnNXlqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnZodkNFQlhiNW1iQzNXSm51Ck1uUjRJNzRVU0RBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlCeWR6VWRaMm5IZU56UHY2RkwzYUdabXMrZ2VuTmoKLzNaditGR1dhZi9nK3dJaEFNMnVCVE9maW9vQUptRXBpSi92WUFrQmRMdkdyWEdOY2R6UzdJRzRJR3BXCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUQ4MXVnTGVmd0EvekE2U2l0V3czeDNiQWRHQ3A2eXJqYmNSRE14eFgwd3FvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTmZTSUNYa2EwVmpVM25mMDBJTk9ZREttaXlEb1VEQmlCU1J2a2gwVXpHTGhTd0Q4YTdZOQoxbCt6c1J3OVFvVFlONDgxYXlaKzlHL29qeHh0a09mMWpBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---protect-kernel-defaults=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--protect-kernel-defaults' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3221 3096 3 23:44 ? 00:00:19 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-31-102 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=499dafa1-6719-45d4-a0c6-66109f61b011 --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove the --make-iptables-util-chains argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{.makeIPTablesUtilChains}' is present OR '{.makeIPTablesUtilChains}' is not present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVTFNRE0zTUI0WERUSXpNREl5TmpJek5ETTFOMW9YRFRNek1ESXlNekl6TkRNMQpOMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFOVEF6TnpCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJIZXg2aVo0Q09SYmlTd1pyYmRHa2ZIaHNKb2RZaGRScDFjNklkMngKS05ITHZzZEdxeEc0V2xySVp6WTEwTDlLWDJCVk9Senp0RUZ4cDc3MTlFTnVnNDZqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCU1pFWGkwbDNsS3dXS1U3S0x6CkgwZG5aTzVEOWpBS0JnZ3Foa2pPUFFRREFnTkhBREJFQWlCK2NQUUhLdXBJc05zU3BLVEg2NHJibFpKYTIyZ0oKdlh4TmNxb0dQSEdhUlFJZ1JJdU1hcFVKOHVyVEdxblZGckFldy81aFdOZFdRUjNyN3FXQmlrYnk0aEk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRpZ0F3SUJBZ0lJUnIzVlVFbTY4Yll3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU5UQXpOekFlRncweU16QXlNall5TXpRek5UZGFGdzB5TkRBeQpNall5TXpRek5UZGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUTE5SWdKZVJyUldOVGUKZC9UUWcwNWdNcWFMSU9oUU1HSUZKRytTSFJUTVl1RkxBUHhydGozV1g3T3hIRDFDaE5nM2p6VnJKbjcwYitpUApIRzJRNS9XTW8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVViNGJ3aEFWMitabXd0MWlaN2pKMGVDTytGRWd3Q2dZSUtvWkl6ajBFQXdJRFJ3QXcKUkFJZ2ZjNHJWdWJwUWVRMmxiVUJPNkxhK0hKNWVHbUFqSTRHMTV6K1pEZmZqQ1lDSUZGVkFTd1ZzVFRsUXlvMQpicmpsdnJzVW9rRmNxN3RIYjJxZk51dTV2aTB1Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxXTnMKYVdWdWRDMWpZVUF4TmpjM05EVTFNRE0zTUI0WERUSXpNREl5TmpJek5ETTFOMW9YRFRNek1ESXlNekl6TkRNMQpOMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTFqYkdsbGJuUXRZMkZBTVRZM056UTFOVEF6TnpCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJLTzZGUG4xd3dqbWFYdzc1T1l1blI1QytMbkpiRlpNYlgyWDRVR0kKMWFKK0tXb0tMZExmRnlZbVdvbHoyV3IzUnBhNGVxRXYySTg1bndINWp1NDZnNXlqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnZodkNFQlhiNW1iQzNXSm51Ck1uUjRJNzRVU0RBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlCeWR6VWRaMm5IZU56UHY2RkwzYUdabXMrZ2VuTmoKLzNaditGR1dhZi9nK3dJaEFNMnVCVE9maW9vQUptRXBpSi92WUFrQmRMdkdyWEdOY2R6UzdJRzRJR3BXCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUQ4MXVnTGVmd0EvekE2U2l0V3czeDNiQWRHQ3A2eXJqYmNSRE14eFgwd3FvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTmZTSUNYa2EwVmpVM25mMDBJTk9ZREttaXlEb1VEQmlCU1J2a2gwVXpHTGhTd0Q4YTdZOQoxbCt6c1J3OVFvVFlONDgxYXlaKzlHL29qeHh0a09mMWpBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-### 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and remove the --hostname-override argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-### 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{.eventRecordQPS}' is present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVTFNRE0zTUI0WERUSXpNREl5TmpJek5ETTFOMW9YRFRNek1ESXlNekl6TkRNMQpOMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFOVEF6TnpCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJIZXg2aVo0Q09SYmlTd1pyYmRHa2ZIaHNKb2RZaGRScDFjNklkMngKS05ITHZzZEdxeEc0V2xySVp6WTEwTDlLWDJCVk9Senp0RUZ4cDc3MTlFTnVnNDZqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCU1pFWGkwbDNsS3dXS1U3S0x6CkgwZG5aTzVEOWpBS0JnZ3Foa2pPUFFRREFnTkhBREJFQWlCK2NQUUhLdXBJc05zU3BLVEg2NHJibFpKYTIyZ0oKdlh4TmNxb0dQSEdhUlFJZ1JJdU1hcFVKOHVyVEdxblZGckFldy81aFdOZFdRUjNyN3FXQmlrYnk0aEk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRpZ0F3SUJBZ0lJUnIzVlVFbTY4Yll3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU5UQXpOekFlRncweU16QXlNall5TXpRek5UZGFGdzB5TkRBeQpNall5TXpRek5UZGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUTE5SWdKZVJyUldOVGUKZC9UUWcwNWdNcWFMSU9oUU1HSUZKRytTSFJUTVl1RkxBUHhydGozV1g3T3hIRDFDaE5nM2p6VnJKbjcwYitpUApIRzJRNS9XTW8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVViNGJ3aEFWMitabXd0MWlaN2pKMGVDTytGRWd3Q2dZSUtvWkl6ajBFQXdJRFJ3QXcKUkFJZ2ZjNHJWdWJwUWVRMmxiVUJPNkxhK0hKNWVHbUFqSTRHMTV6K1pEZmZqQ1lDSUZGVkFTd1ZzVFRsUXlvMQpicmpsdnJzVW9rRmNxN3RIYjJxZk51dTV2aTB1Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxXTnMKYVdWdWRDMWpZVUF4TmpjM05EVTFNRE0zTUI0WERUSXpNREl5TmpJek5ETTFOMW9YRFRNek1ESXlNekl6TkRNMQpOMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTFqYkdsbGJuUXRZMkZAMTY3NzQ1NTAzNzBZMBMGByqG client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUQ4MXVnTGVmd0EvekE2U2l0V3czeDNiQWRHQ3A2eXJqYmNSRE14eFgwd3FvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTmZTSUNYa2EwVmpVM25mMDBJTk9ZREttaXlEb1VEQmlCU1J2a2gwVXpHTGhTd0Q4YTdZOQoxbCt6c1J3OVFvVFlONDgxYXlaKzlHL29qeHh0a09mMWpBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
-of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
-to the location of the corresponding private key file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
---tls-cert-file=
---tls-private-key-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3221 3096 3 23:44 ? 00:00:19 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-31-102 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=499dafa1-6719-45d4-a0c6-66109f61b011 --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `rotateCertificates` to `true`, or
-remove it altogether to use the default value.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
-variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
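This check passes when rotation is left at its default (the field is absent) or explicitly enabled; only an explicit `rotateCertificates: false` fails it. A hedged sketch of that logic against a throwaway config file — the file contents below are hypothetical, since the real config location varies by system:

```shell
# Sketch: a kubelet config passes 4.2.11 unless it explicitly disables
# certificate rotation. The config written here is a hypothetical example.
cfg=$(mktemp)
printf 'kind: KubeletConfiguration\nrotateCertificates: true\n' > "$cfg"
if grep -q 'rotateCertificates: false' "$cfg"; then
  echo "fail"
else
  echo "pass"
fi
rm -f "$cfg"
# prints "pass"
```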
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{.rotateCertificates}' is present OR '{.rotateCertificates}' is not present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVTFNRE0zTUI0WERUSXpNREl5TmpJek5ETTFOMW9YRFRNek1ESXlNekl6TkRNMQpOMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZAMTY3NzQ1NTAzNzBZMBMGByqG server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRpZ0F3SUJBZ0lJUnIzVlVFbTY4Yll3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU5UQXpOekFlRncweU16QXlNall5TXpRek5UZGFGdzB5TkRBeQpNall5TXpRek5UZGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUTE5SWdKZVJyUldOVGUKZC9UUWcwNWdNcWFMSU9oUU1HSUZKRytTSFJUTVl1RkxBUHhydGozV1g3T3hIRDFDaE5nM2p6VnJKbjcwYitpUApIRzJRNS9XTW8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVViNGJ3aEFWMitabXd0MWlaN2pKMGVDTytGRWd3Q2dZSUtvWkl6ajBFQXdJRFJ3QXcKUkFJZ2ZjNHJWdWJwUWVRMmxiVUJPNkxhK0hKNWVHbUFqSTRHMTV6K1pEZmZqQ1lDSUZGVkFTd1ZzVFRsUXlvMQpicmpsdnJzVW9rRmNxN3RIYjJxZk51dTV2aTB1Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxXTnMKYVdWdWRDMWpZVUF4TmpjM05EVTFNRE0zTUI0WERUSXpNREl5TmpJek5ETTFOMW9YRFRNek1ESXlNekl6TkRNMQpOMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTFqYkdsbGJuUXRZMkZAMTY3NzQ1NTAzNzBZMBMGByqG client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUQ4MXVnTGVmd0EvekE2U2l0V3czeDNiQWRHQ3A2eXJqYmNSRE14eFgwd3FvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTmZTSUNYa2EwVmpVM25mMDBJTk9ZREttaXlEb1VEQmlCU1J2a2gwVXpHTGhTd0Q4YTdZOQoxbCt6c1J3OVFvVFlONDgxYXlaKzlHL29qeHh0a09mMWpBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
---feature-gates=RotateKubeletServerCertificate=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
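
When set through a kubelet config file rather than a command-line flag, the feature gate takes this shape (a sketch matching the `{.featureGates.RotateKubeletServerCertificate}` path the audit checks):

```yaml
# Kubelet config file fragment (sketch): server certificate rotation feature gate
featureGates:
  RotateKubeletServerCertificate: true
```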
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{.featureGates.RotateKubeletServerCertificate}' is present OR '{.featureGates.RotateKubeletServerCertificate}' is not present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVTFNRE0zTUI0WERUSXpNREl5TmpJek5ETTFOMW9YRFRNek1ESXlNekl6TkRNMQpOMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFOVEF6TnpCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJIZXg2aVo0Q09SYmlTd1pyYmRHa2ZIaHNKb2RZaGRScDFjNklkMngKS05ITHZzZEdxeEc0V2xySVp6WTEwTDlLWDJCVk9Senp0RUZ4cDc3MTlFTnVnNDZqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCU1pFWGkwbDNsS3dXS1U3S0x6CkgwZG5aTzVEOWpBS0JnZ3Foa2pPUFFRREFnTkhBREJFQWlCK2NQUUhLdXBJc05zU3BLVEg2NHJibFpKYTIyZ0oKdlh4TmNxb0dQSEdhUlFJZ1JJdU1hcFVKOHVyVEdxblZGckFldy81aFdOZFdRUjNyN3FXQmlrYnk0aEk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRpZ0F3SUJBZ0lJUnIzVlVFbTY4Yll3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU5UQXpOekFlRncweU16QXlNall5TXpRek5UZGFGdzB5TkRBeQpNall5TXpRek5UZGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUTE5SWdKZVJyUldOVGUKZC9UUWcwNWdNcWFMSU9oUU1HSUZKRytTSFJUTVl1RkxBUHhydGozV1g3T3hIRDFDaE5nM2p6VnJKbjcwYitpUApIRzJRNS9XTW8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVViNGJ3aEFWMitabXd0MWlaN2pKMGVDTytGRWd3Q2dZSUtvWkl6ajBFQXdJRFJ3QXcKUkFJZ2ZjNHJWdWJwUWVRMmxiVUJPNkxhK0hKNWVHbUFqSTRHMTV6K1pEZmZqQ1lDSUZGVkFTd1ZzVFRsUXlvMQpicmpsdnJzVW9rRmNxN3RIYjJxZk51dTV2aTB1Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxXTnMKYVdWdWRDMWpZVUF4TmpjM05EVTFNRE0zTUI0WERUSXpNREl5TmpJek5ETTFOMW9YRFRNek1ESXlNekl6TkRNMQpOMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTFqYkdsbGJuUXRZMkZBTVRZM056UTFOVEF6TnpCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJLTzZGUG4xd3dqbWFYdzc1T1l1blI1QytMbkpiRlpNYlgyWDRVR0kKMWFKK0tXb0tMZExmRnlZbVdvbHoyV3IzUnBhNGVxRXYySTg1bndINWp1NDZnNXlqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnZodkNFQlhiNW1iQzNXSm51Ck1uUjRJNzRVU0RBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlCeWR6VWRaMm5IZU56UHY2RkwzYUdabXMrZ2VuTmoKLzNaditGR1dhZi9nK3dJaEFNMnVCVE9maW9vQUptRXBpSi92WUFrQmRMdkdyWEdOY2R6UzdJRzRJR3BXCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUQ4MXVnTGVmd0EvekE2U2l0V3czeDNiQWRHQ3A2eXJqYmNSRE14eFgwd3FvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTmZTSUNYa2EwVmpVM25mMDBJTk9ZREttaXlEb1VEQmlCU1J2a2gwVXpHTGhTd0Q4YTdZOQoxbCt6c1J3OVFvVFlONDgxYXlaKzlHL29qeHh0a09mMWpBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-or to a subset of these values.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --tls-cipher-suites parameter as follows, or to a subset of these values.
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
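
In a kubelet config file, the equivalent setting is `tlsCipherSuites` (the path the audit's expected result references); a sketch with a subset of the allowed suites:

```yaml
# Kubelet config file fragment (sketch): restrict TLS to strong cipher suites
tlsCipherSuites:
  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
```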
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{range .tlsCipherSuites[:]}{}{','}{end}' is present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVTFNRE0zTUI0WERUSXpNREl5TmpJek5ETTFOMW9YRFRNek1ESXlNekl6TkRNMQpOMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFOVEF6TnpCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJIZXg2aVo0Q09SYmlTd1pyYmRHa2ZIaHNKb2RZaGRScDFjNklkMngKS05ITHZzZEdxeEc0V2xySVp6WTEwTDlLWDJCVk9Senp0RUZ4cDc3MTlFTnVnNDZqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCU1pFWGkwbDNsS3dXS1U3S0x6CkgwZG5aTzVEOWpBS0JnZ3Foa2pPUFFRREFnTkhBREJFQWlCK2NQUUhLdXBJc05zU3BLVEg2NHJibFpKYTIyZ0oKdlh4TmNxb0dQSEdhUlFJZ1JJdU1hcFVKOHVyVEdxblZGckFldy81aFdOZFdRUjNyN3FXQmlrYnk0aEk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRpZ0F3SUJBZ0lJUnIzVlVFbTY4Yll3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU5UQXpOekFlRncweU16QXlNall5TXpRek5UZGFGdzB5TkRBeQpNall5TXpRek5UZGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUTE5SWdKZVJyUldOVGUKZC9UUWcwNWdNcWFMSU9oUU1HSUZKRytTSFJUTVl1RkxBUHhydGozV1g3T3hIRDFDaE5nM2p6VnJKbjcwYitpUApIRzJRNS9XTW8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVViNGJ3aEFWMitabXd0MWlaN2pKMGVDTytGRWd3Q2dZSUtvWkl6ajBFQXdJRFJ3QXcKUkFJZ2ZjNHJWdWJwUWVRMmxiVUJPNkxhK0hKNWVHbUFqSTRHMTV6K1pEZmZqQ1lDSUZGVkFTd1ZzVFRsUXlvMQpicmpsdnJzVW9rRmNxN3RIYjJxZk51dTV2aTB1Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxXTnMKYVdWdWRDMWpZVUF4TmpjM05EVTFNRE0zTUI0WERUSXpNREl5TmpJek5ETTFOMW9YRFRNek1ESXlNekl6TkRNMQpOMW93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTFqYkdsbGJuUXRZMkZBTVRZM056UTFOVEF6TnpCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJLTzZGUG4xd3dqbWFYdzc1T1l1blI1QytMbkpiRlpNYlgyWDRVR0kKMWFKK0tXb0tMZExmRnlZbVdvbHoyV3IzUnBhNGVxRXYySTg1bndINWp1NDZnNXlqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnZodkNFQlhiNW1iQzNXSm51Ck1uUjRJNzRVU0RBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlCeWR6VWRaMm5IZU56UHY2RkwzYUdabXMrZ2VuTmoKLzNaditGR1dhZi9nK3dJaEFNMnVCVE9maW9vQUptRXBpSi92WUFrQmRMdkdyWEdOY2R6UzdJRzRJR3BXCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUQ4MXVnTGVmd0EvekE2U2l0V3czeDNiQWRHQ3A2eXJqYmNSRE14eFgwd3FvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTmZTSUNYa2EwVmpVM25mMDBJTk9ZREttaXlEb1VEQmlCU1J2a2gwVXpHTGhTd0Q4YTdZOQoxbCt6c1J3OVFvVFlONDgxYXlaKzlHL29qeHh0a09mMWpBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-## 5.1 RBAC and Service Accounts
-### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
-if they need this role or if they could use a role with fewer privileges.
-Where possible, first bind users to a lower privileged role and then remove the
-clusterrolebinding to the cluster-admin role :
-kubectl delete clusterrolebinding [name]
-
-### 5.1.2 Minimize access to secrets (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove get, list and watch access to Secret objects in the cluster.
-
-### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible replace any use of wildcards in clusterroles and roles with specific
-objects or actions.
-
-### 5.1.4 Minimize access to create pods (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove create access to pod objects in the cluster.
-
-### 5.1.5 Ensure that default service accounts are not actively used. (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Create explicit service accounts wherever a Kubernetes workload requires specific access
-to the Kubernetes API server.
-Modify the configuration of each default service account to include this value
-automountServiceAccountToken: false
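
Applied to a default service account, the remediation looks like this (a sketch; the `demo` namespace is hypothetical):

```yaml
# ServiceAccount manifest (sketch): stop automounting API tokens
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: demo
automountServiceAccountToken: false
```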
-
-**Audit Script:** `check_for_default_sa.sh`
-
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l)
-if [[ ${count_sa} -gt 0 ]]; then
- echo "false"
- exit
-fi
-
-for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name")
-do
- for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[].kind=="ServiceAccount" and .subjects[].name=="default") or (.subjects[].kind=="Group" and .subjects[].name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"')
- do
- read kind name <<<$(IFS=","; echo $result)
- resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[] != "podsecuritypolicies")' | wc -l)
- if [[ ${resource_count} -gt 0 ]]; then
- echo "false"
- exit
- fi
- done
-done
-
-
-echo "true"
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_default_sa.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-Error from server (Forbidden): serviceaccounts is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "serviceaccounts" in API group "" at the cluster scope Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "calico-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-fleet-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-impersonation-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cis-operator-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "default" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-node-lease" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-public" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "local" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "tigera-operator" true
-```
-
-### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Modify the definition of pods and service accounts which do not need to mount service
-account tokens to disable it.
-
-### 5.1.7 Avoid use of system:masters group (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Remove the system:masters group from all users in the cluster.
-
-### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove the impersonate, bind and escalate rights from subjects.
-
-## 5.2 Pod Security Standards
-### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that either Pod Security Admission or an external policy control system is in place
-for every namespace which contains user workloads.
-
-### 5.2.2 Minimize the admission of privileged containers (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of privileged containers.
-
-**Audit:**
-
-```bash
-kubectl get psp global-restricted-psp -o json | jq -r '.spec.runAsUser.rule'
-```
-
-**Expected Result**:
-
-```console
-'MustRunAsNonRoot' is present
-```
-
-**Returned Value**:
-
-```console
-MustRunAsNonRoot
-```
-
-### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostPID` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=5
-```
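
The audit above relies on a small counting idiom: count matching lines with `wc -l`, then format the total as a `--count=N` flag via `xargs`. A self-contained sketch of the same pipeline using stand-in policy names instead of live cluster data:

```shell
# Count stand-in policy names and format the total the way the audits do
out=$(printf 'psp-a\npsp-b\npsp-c\n' | wc -l | xargs -I {} echo '--count={}')
echo "$out"   # --count=3
```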
-
-### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostIPC` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=5
-```
-
-### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostNetwork` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=2
-```
-
-### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=4
-```
-
-### 5.2.7 Minimize the admission of root containers (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
-or `MustRunAs` with a range of UIDs that does not include 0 is set.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=4
-```
-
-### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with the `NET_RAW` capability.
-
-**Audit:**
-
-```bash
-kubectl get psp global-restricted-psp -o json | jq -r .spec.requiredDropCapabilities[]
-```
-
-**Expected Result**:
-
-```console
-'ALL' is present
-```
-
-**Returned Value**:
-
-```console
-ALL
-```
-
-### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that `allowedCapabilities` is not present in policies for the cluster unless
-it is set to an empty array.
-
-### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the use of capabilities in applications running on your cluster. Where a namespace
-contains applications which do not require any Linux capabilities to operate, consider adding
-a PSP which forbids the admission of containers which do not drop all capabilities.
-
-### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
-
-### 5.2.12 Minimize the admission of HostPath volumes (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `hostPath` volumes.
-
-### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers which use `hostPort` sections.
-
-## 5.3 Network Policies and CNI
-### 5.3.1 Ensure that the CNI in use supports Network Policies (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If the CNI plugin in use does not support network policies, consideration should be given to
-making use of a different plugin, or finding an alternate mechanism for restricting traffic
-in the Kubernetes cluster.
-
-**Audit:**
-
-```bash
-kubectl get pods --all-namespaces --selector='k8s-app in (calico-node, canal, cilium)' -o name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=1
-```
-
-### 5.3.2 Ensure that all Namespaces have Network Policies defined (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create NetworkPolicy objects as you need them.
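
A minimal default-deny ingress policy is a common starting point (a sketch; the `demo` namespace is hypothetical):

```yaml
# NetworkPolicy sketch: deny all ingress to pods in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```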
-
-**Audit Script:** `check_for_rke2_network_policies.sh`
-
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-for namespace in kube-system kube-public default; do
- policy_count=$(/var/lib/rancher/rke2/bin/kubectl get networkpolicy -n ${namespace} -o json | jq -r '.items | length')
- if [ ${policy_count} -eq 0 ]; then
- echo "false"
- exit
- fi
-done
-
-echo "true"
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_rke2_network_policies.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is present
-```
-
-**Returned Value**:
-
-```console
-true
-```
-
-## 5.4 Secrets Management
-### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If possible, rewrite application code to read Secrets from mounted secret files, rather than
-from environment variables.
-
-### 5.4.2 Consider external secret storage (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Refer to the Secrets management options offered by your cloud provider or a third-party
-secrets management solution.
-
-## 5.5 Extensible Admission Control
-### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and setup image provenance.
-
-## 5.7 General Policies
-### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create namespaces for objects in your deployment as you need
-them.
-
-### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
-An example is as below:
-securityContext:
-  seccompProfile:
-    type: RuntimeDefault
-
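
In context, that snippet sits under a pod or container spec; a minimal Pod manifest applying it (a sketch; the name and image are placeholders):

```yaml
# Pod sketch: run every container under the runtime's default seccomp profile
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: nginx:alpine
```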
-### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
-suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
-Containers.
-
-### 5.7.4 The default namespace should not be used (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
-resources and that all new resources are created in a specific namespace.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.25.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.25.md
deleted file mode 100644
index 4d524d3fab4..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.25.md
+++ /dev/null
@@ -1,3196 +0,0 @@
----
-title: RKE2 Self-Assessment Guide - CIS Benchmark v1.23 - K8s v1.25
----
-
-This document is a companion to the [RKE2 Hardening Guide](../../../../pages-for-subheaders/rke2-hardening-guide.md), which provides prescriptive guidance on how to harden RKE2 clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
-
-This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
-
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
-|-----------------|-----------------------|--------------------|
-| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.25 |
-
-This guide walks through the various controls and provides updated example commands to audit compliance in Rancher-created clusters. Because Rancher and RKE2 install Kubernetes services as containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks will return a result of `Not Applicable`.
-
-This document is for Rancher operators, security teams, auditors and decision makers.
-
-For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.23. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
-
-## Testing Methodology
-
-RKE2 launches control plane components as static pods, managed by the kubelet, and uses containerd as the container runtime. Configuration is defined by arguments passed to the container at the time of initialization or via a configuration file.
-
-Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE2 nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in the testing and evaluation of test results.
-
-:::note
-
-This guide only covers `automated` (previously called `scored`) tests.
-
-:::
-
-### Controls
-
-## 1.1 Master Node Configuration Files
-### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the
-control plane node.
-For example, chmod 644 /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
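Every file-permission audit in this section follows the same pattern: read the octal mode with `stat -c %a` and verify that it is the stated limit or more restrictive. A minimal POSIX-shell sketch of that comparison logic (the `check_perms` helper and the temporary file are illustrative, not part of the benchmark tooling):

```shell
#!/bin/sh
# check_perms FILE MAX — pass when FILE's octal mode is MAX or more restrictive,
# i.e. no permission bit is set in FILE's mode that is absent from MAX.
check_perms() {
  file=$1; max=$2
  [ -e "$file" ] || { echo "skip: $file not found"; return 0; }
  mode=$(stat -c %a "$file")
  if [ $(( 0$mode & ~0$max & 0777 )) -eq 0 ]; then
    echo "pass: $file permissions=$mode (limit $max)"
  else
    echo "FAIL: $file permissions=$mode (limit $max)"
  fi
}

# Demonstration on a temporary file; a real audit targets the RKE2 manifests,
# e.g. /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml.
tmp=$(mktemp)
chmod 640 "$tmp"
check_perms "$tmp" 644   # 640 sets no bit that 644 clears, so this passes
rm -f "$tmp"
```

The bitwise test is what "644 or more restrictive" means: a mode such as 600 or 640 passes, while 664 or 666 fails.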
-### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644 /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; then stat -c %a /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'644' is equal to '644'
-```
-
-**Returned Value**:
-
-```console
-644
-```
-
-### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644 /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'permissions' is equal to '644'
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'644' is equal to '644'
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644 `<path/to/cni/files>`
-
-**Audit:**
-
-```bash
-ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
-find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600 permissions=644
-```
-
-### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root `<path/to/cni/files>`
-
-**Audit:**
-
-```bash
-ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
-find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root root:root
-```
-
-### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above). For example,
-chmod 700 /var/lib/etcd
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/db/etcd
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 700, expected 700 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=700
-```
-
-### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above).
-For example, chown etcd:etcd /var/lib/etcd
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/rke2/server/db/etcd
-```
-
-**Expected Result**:
-
-```console
-'etcd:etcd' is present
-```
-
-**Returned Value**:
-
-```console
-etcd:etcd
-```
-
-### 1.1.13 Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 /etc/kubernetes/admin.conf
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/cred/admin.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/admin.conf
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/rke2/server/cred/admin.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 scheduler
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root scheduler
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 controllermanager
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/cred/controller.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root /var/lib/rancher/rke2/server/cred/controller.kubeconfig
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/rke2/server/cred/controller.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown -R root:root /var/lib/rancher/rke2/server/tls/
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/rke2/server/tls
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 644 /var/lib/rancher/rke2/server/tls/*.crt
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/tls/*.crt
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644
-```
-
-### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 600 /var/lib/rancher/rke2/server/tls/*.key
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/tls/*.key
-```
-
-**Expected Result**:
-
-```console
-'permissions' is equal to '600'
-```
-
-**Returned Value**:
-
-```console
-permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600
-```
-
-## 1.2 API Server
-### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---anonymous-auth=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
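The API server checks in this section all inspect the live `kube-apiserver` command line and compare individual flag values. A small sketch of that flag extraction (the `get_flag` helper and the sample command line below are illustrative; a real audit feeds it the output of `ps -ef | grep kube-apiserver`):

```shell
#!/bin/sh
# get_flag NAME CMDLINE — print the value of --NAME=value in CMDLINE, if any.
get_flag() {
  printf '%s\n' "$2" | tr ' ' '\n' | sed -n "s/^--$1=//p"
}

# Stand-in command line; in practice this comes from the audit command above.
cmdline="kube-apiserver --anonymous-auth=false --authorization-mode=Node,RBAC --secure-port=6443"
get_flag anonymous-auth "$cmdline"    # prints: false
get_flag token-auth-file "$cmdline"   # prints nothing: the flag is not set
```

Comparing the printed value against the expected result mirrors what the benchmark tooling reports, for example `'--anonymous-auth' is equal to 'false'` above.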
-### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and configure alternate mechanisms for authentication. Then,
-edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and remove the --token-auth-file= parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--token-auth-file' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and remove the `DenyServiceExternalIPs`
-from enabled admission plugins.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and remove the --kubelet-https parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-https' is present OR '--kubelet-https' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the
-apiserver and kubelets. Then, edit API server pod specification file
-/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml on the control plane node and set the
-kubelet client certificate and key parameters as below.
---kubelet-client-certificate=
---kubelet-client-key=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between
-the apiserver and kubelets. Then, edit the API server pod specification file
-/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml on the control plane node and set the
---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
---kubelet-certificate-authority=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-certificate-authority' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
-One such example could be as below.
---authorization-mode=RBAC
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes Node.
---authorization-mode=Node,RBAC
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'Node'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
-for example `--authorization-mode=Node,RBAC`.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'RBAC'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and set the desired limits in a configuration file.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-and set the parameters below.
---enable-admission-plugins=...,EventRateLimit,...
---admission-control-config-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'EventRateLimit'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
-value that does not include AlwaysAdmit.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-AlwaysPullImages.
-`--enable-admission-plugins=...,AlwaysPullImages,...`
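-
-The same setting can be applied declaratively through the RKE2 server configuration rather than by editing the generated pod manifest (which RKE2 regenerates). A minimal sketch of `/etc/rancher/rke2/config.yaml` — the plugin list is illustrative, so merge it with the plugins your cluster already enables; note that AlwaysPullImages forces a registry pull on every pod start, which is part of why this check is Manual:
-
-```yaml
-# Illustrative sketch, not a complete config: entries under kube-apiserver-arg
-# are passed through to the kube-apiserver command line verbatim.
-kube-apiserver-arg:
-  - "enable-admission-plugins=NodeRestriction,AlwaysPullImages"
-```
-
-A restart of the rke2-server service is required for the change to take effect.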
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'AlwaysPullImages'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-SecurityContextDeny, unless PodSecurityPolicy is already in place.
-`--enable-admission-plugins=...,SecurityContextDeny,...`
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create ServiceAccount objects as per your environment.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
-value that does not include ServiceAccount.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --disable-admission-plugins parameter to
-ensure it does not include NamespaceLifecycle.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true root 15378 3910 99 23:32 ? 00:00:00 kubectl get --server=https://localhost:6443/ --client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --raw=/readyz
-```
-
-### 1.2.16 Ensure that the admission control plugin NodeRestriction is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure the NodeRestriction plug-in on the kubelets.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to a
-value that includes NodeRestriction.
-`--enable-admission-plugins=...,NodeRestriction,...`
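-
-RKE2 ships with NodeRestriction enabled by default — the `--enable-admission-plugins=NodeRestriction` flag is visible in the returned value for this check. If you override the admission-plugin list via `kube-apiserver-arg` in `/etc/rancher/rke2/config.yaml`, keep NodeRestriction in the list; a sketch:
-
-```yaml
-# Illustrative sketch: when overriding the admission-plugin list, keep
-# NodeRestriction so each kubelet can only modify its own Node and Pod objects.
-kube-apiserver-arg:
-  - "enable-admission-plugins=NodeRestriction"
-```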
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'NodeRestriction'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.17 Ensure that the --secure-port argument is not set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and either remove the --secure-port parameter or
-set it to a different (non-zero) desired port.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--secure-port' is greater than 0 OR '--secure-port' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.18 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
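The pass condition above ("--profiling is equal to false") can be reproduced against a captured command line. A minimal sketch, assuming a sample `cmdline` string (illustrative only, not taken from a live node):

```bash
# Sketch: extract a flag's value from a captured kube-apiserver command line
# and compare it to the expected value. The sample cmdline is illustrative only.
cmdline="kube-apiserver --profiling=false --secure-port=6443"
# Split the command line on spaces and strip the flag prefix to get its value.
profiling=$(printf '%s\n' "$cmdline" | tr ' ' '\n' | sed -n 's/^--profiling=//p')
[ "$profiling" = "false" ] && echo "1.2.18 pass" || echo "1.2.18 fail"
```

Against a real node, the `cmdline` variable would instead be filled from the audit command shown above.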
-
-### 1.2.19 Ensure that the --audit-log-path argument is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-path parameter to a suitable path and
-file where you would like audit logs to be written, for example,
---audit-log-path=/var/log/apiserver/audit.log
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-path' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.20 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxage parameter to 30
-or as an appropriate number of days, for example,
---audit-log-maxage=30
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxage' is greater or equal to 30
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
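Unlike the equality checks, this check is a numeric threshold ("greater or equal to 30"). A minimal sketch of that comparison, assuming a sample `cmdline` string (illustrative only):

```bash
# Sketch: verify --audit-log-maxage meets the >= 30 threshold.
# The sample cmdline is illustrative only.
cmdline="kube-apiserver --audit-log-maxage=30 --audit-log-maxbackup=10"
# Extract the numeric value of the flag; default to 0 if the flag is absent.
maxage=$(printf '%s\n' "$cmdline" | tr ' ' '\n' | sed -n 's/^--audit-log-maxage=//p')
if [ "${maxage:-0}" -ge 30 ]; then echo "1.2.20 pass"; else echo "1.2.20 fail"; fi
```

The same pattern applies to the `--audit-log-maxbackup` (>= 10) and `--audit-log-maxsize` (>= 100) checks that follow.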
-
-### 1.2.21 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
-value. For example,
---audit-log-maxbackup=10
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxbackup' is greater or equal to 10
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.22 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
-For example, to set it as 100 MB, --audit-log-maxsize=100
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxsize' is greater or equal to 100
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.24 Ensure that the --service-account-lookup argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---service-account-lookup=true
-Alternatively, you can delete the --service-account-lookup parameter from this file so
-that the default takes effect.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
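This check passes either way: the flag may be absent (the default, `true`, then applies) or explicitly set. A minimal sketch of that absent-or-set logic, assuming a sample `cmdline` string (illustrative only; `--service-account-lookup` is omitted, so the default is in effect):

```bash
# Sketch: a flag may be absent (default applies) or explicitly set.
# The check only fails when the flag is explicitly disabled.
# The sample cmdline is illustrative only.
cmdline="kube-apiserver --anonymous-auth=false --secure-port=6443"
result="pass"
case " $cmdline " in
  *" --service-account-lookup=false "*) result="fail" ;;
esac
echo "1.2.24 $result"
```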
-
-### 1.2.25 Ensure that the --request-timeout argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --request-timeout parameter as appropriate,
-if needed. For example,
---request-timeout=300s
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--request-timeout' is not present OR '--request-timeout' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.26 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate and key file parameters.
---etcd-certfile=
---etcd-keyfile=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-certfile' is present AND '--etcd-keyfile' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
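Checks of this shape ("flag X is present on the command line") can be scripted directly from the audit output. A minimal sketch, not part of the benchmark tooling, using a shortened sample command line (on a real node, capture `args` with the audit command above):

```shell
# Sketch of the 1.2.26 check: verify both etcd TLS flags appear on the
# kube-apiserver command line. `args` is a shortened sample; on a node:
#   args=$(/bin/ps -ef | grep kube-apiserver | grep -v grep)
args="kube-apiserver --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key"

result=pass
for flag in --etcd-certfile --etcd-keyfile; do
  case "$args" in
    *"$flag="*) ;;        # flag present with a value
    *) result=fail ;;     # flag missing
  esac
done
echo "$result"
```

The same loop works for any of the presence checks in this section by changing the flag list.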
-### 1.2.27 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the TLS certificate and private key file parameters.
---tls-cert-file=
---tls-private-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.28 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the client certificate authority file.
---client-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.29 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate authority file parameter.
---etcd-cafile=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-cafile' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.30 Ensure that the --encryption-provider-config argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --encryption-provider-config parameter to the path of that file.
-For example, --encryption-provider-config=

-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--encryption-provider-config' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
-TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
-TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
-TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
-
-### 1.2.33 Ensure that encryption providers are appropriately configured (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-In this file, choose aescbc, kms or secretbox as the encryption provider.
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if grep aescbc /var/lib/rancher/rke2/server/cred/encryption-config.json; then echo 0; fi'
-```
-
-**Expected Result**:
-
-```console
-'0' is present
-```
-
-**Returned Value**:
-
-```console
-{"kind":"EncryptionConfiguration","apiVersion":"apiserver.config.k8s.io/v1","resources":[{"resources":["secrets"],"providers":[{"aescbc":{"keys":[{"name":"aescbckey","secret":"TSpBkJhIU0sRx+84IZuBZ1qO+eaRdW31C7QCnF3+n8s="}]}},{"identity":{}}]}]} 0
-```
-
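The 1.2.33 audit can be sketched against an inline sample of the encryption configuration. The JSON below is a trimmed version of the returned value above (key material omitted); on a node the real file is at `/var/lib/rancher/rke2/server/cred/encryption-config.json`:

```shell
# Sketch of the 1.2.33 audit: the benchmark echoes 0 when an approved
# provider (here aescbc) is found in the encryption config.
config='{"kind":"EncryptionConfiguration","apiVersion":"apiserver.config.k8s.io/v1","resources":[{"resources":["secrets"],"providers":[{"aescbc":{"keys":[{"name":"aescbckey"}]}},{"identity":{}}]}]}'

out=$(printf '%s' "$config" | grep -q aescbc && echo 0)
echo "${out:-no approved provider found}"
```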
-## 1.3 Controller Manager
-### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
-for example, --terminated-pod-gc-threshold=10
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--terminated-pod-gc-threshold' is present
-```
-
-**Returned Value**:
-
-```console
-root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
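-The audit command above prints the controller manager's full command line; a small hedged helper can pull out one flag's value for scripted checks. The `cmdline` string below is a shortened stand-in for real `ps -ef` output:
-
-```bash
-# Extract the value of a single --flag=value pair from a process command line.
-# cmdline is a shortened stand-in for actual `ps -ef` output on a node.
-cmdline='kube-controller-manager --profiling=false --terminated-pod-gc-threshold=1000 --secure-port=10257'
-echo "$cmdline" | tr ' ' '\n' | awk -F= '/^--terminated-pod-gc-threshold=/ {print $2}'
-# prints: 1000
-```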
-### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node to set the below parameter.
---use-service-account-credentials=true
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--use-service-account-credentials' is not equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and set the --service-account-private-key-file parameter
-to the private key file for service accounts.
---service-account-private-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
---root-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--root-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
---feature-gates=RotateKubeletServerCertificate=true
-
-### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-## 1.4 Scheduler
-### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-scheduler | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4126 4014 0 23:27 ? 00:00:02 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
-```
-
-### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-scheduler | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4126 4014 0 23:27 ? 00:00:02 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
-```
-
-## 2 Etcd Node Configuration
-### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Follow the etcd service documentation and configure TLS encryption.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
-on the master node and set the below parameters.
---cert-file=
---key-file=
-
-### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
-node and set the below parameter.
---client-cert-auth="true"
-
-### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
-node and either remove the --auto-tls parameter or set it to false.
---auto-tls=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-25-112 ETCD_UNSUPPORTED_ARCH= POD_HASH=ab0b8a2ee7711940d3d951edece075f3 FILE_HASH=068666c5f959fc1023cb1761daaaed212727c04120747d825c78fd7683122e6d NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 HOME=/
-```
-
-### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Follow the etcd service documentation and configure peer TLS encryption as appropriate
-for your etcd cluster.
-Then, edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the
-master node and set the below parameters.
---peer-cert-file=
---peer-key-file=
-
-### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
-node and set the below parameter.
---peer-client-cert-auth=true
-
-### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
-node and either remove the --peer-auto-tls parameter or set it to false.
---peer-auto-tls=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-25-112 ETCD_UNSUPPORTED_ARCH= POD_HASH=ab0b8a2ee7711940d3d951edece075f3 FILE_HASH=068666c5f959fc1023cb1761daaaed212727c04120747d825c78fd7683122e6d NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 HOME=/
-```
-
-### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-[Manual test]
-Follow the etcd documentation and create a dedicated certificate authority setup for the
-etcd service.
-Then, edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the
-master node and set the below parameter.
---trusted-ca-file=
-
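-One way to spot-check that etcd's CA really is distinct is to compare issuer subjects with `openssl`. This is a hedged sketch: on a real node you would point it at `/var/lib/rancher/rke2/server/tls/server-ca.crt` and `/var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt`; here two throwaway CAs are generated so the commands are self-contained:
-
-```bash
-# Generate two throwaway self-signed CAs to stand in for the cluster and etcd CAs.
-openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca1.key -out /tmp/ca1.crt \
-  -days 1 -subj "/CN=cluster-ca" 2>/dev/null
-openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca2.key -out /tmp/ca2.crt \
-  -days 1 -subj "/CN=etcd-ca" 2>/dev/null
-# A unique etcd CA means the issuer subjects differ.
-s1=$(openssl x509 -in /tmp/ca1.crt -noout -subject)
-s2=$(openssl x509 -in /tmp/ca2.crt -noout -subject)
-[ "$s1" != "$s2" ] && echo "distinct CAs"
-```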
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Audit Config:**
-
-```bash
-cat /var/lib/rancher/rke2/server/db/etcd/config
-```
-
-**Expected Result**:
-
-```console
-'ETCD_TRUSTED_CA_FILE' is present OR '{.peer-transport-security.trusted-ca-file}' is equal to '/var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt'
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-25-112 ETCD_UNSUPPORTED_ARCH= POD_HASH=ab0b8a2ee7711940d3d951edece075f3 FILE_HASH=068666c5f959fc1023cb1761daaaed212727c04120747d825c78fd7683122e6d NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 HOME=/
-```
-
-## 3.1 Authentication and Authorization
-### 3.1.1 Client certificate authentication should not be used for users (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
-implemented in place of client certificates.
-
-## 3.2 Logging
-### 3.2.1 Ensure that a minimal audit policy is created (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Create an audit policy file for your cluster.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep | grep -o audit-policy-file
-```
-
-**Expected Result**:
-
-```console
-'audit-policy-file' is equal to 'audit-policy-file'
-```
-
-**Returned Value**:
-
-```console
-audit-policy-file
-```
-
-### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the audit policy provided for the cluster and ensure that it covers
-at least the following areas:
-- Access to Secrets managed by the cluster. Care should be taken to only
- log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
- order to avoid risk of logging sensitive data.
-- Modification of Pod and Deployment objects.
-- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
- For most requests, minimally logging at the Metadata level is recommended
- (the most basic level of logging).
-
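-A minimal policy fragment covering the areas above might look like the following. This is a hedged sketch using the `audit.k8s.io/v1` API; rule order matters, since the first matching rule wins:
-
-```yaml
-apiVersion: audit.k8s.io/v1
-kind: Policy
-rules:
-  # Log only metadata for Secrets, ConfigMaps, and TokenReviews to avoid
-  # recording sensitive payloads.
-  - level: Metadata
-    resources:
-      - group: ""
-        resources: ["secrets", "configmaps"]
-      - group: "authentication.k8s.io"
-        resources: ["tokenreviews"]
-  # Record use of exec, port-forward, and proxy subresources.
-  - level: Metadata
-    resources:
-      - group: ""
-        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
-  # Everything else (including Pod and Deployment modifications) at the
-  # most basic level.
-  - level: Metadata
-```
-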
-## 4.1 Worker Node Configuration Files
-### 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chmod 644 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-
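-"644 or more restrictive" means the file's mode bits must be a subset of `644`. A hedged shell helper to test that numerically:
-
-```bash
-# Return success if octal mode $1 grants no permission beyond octal mode $2.
-perm_ok() {
-  [ $(( 8#$1 & ~8#$2 & 8#777 )) -eq 0 ]
-}
-perm_ok 600 644 && echo "600 is 644 or more restrictive"
-perm_ok 664 644 || echo "664 is too permissive"
-```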
-### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-
-### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive OR '/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig' is not present
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chown root:root /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present OR '/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig' is not present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /var/lib/rancher/rke2/agent/kubelet.kubeconfig
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-'644' is equal to '644'
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /var/lib/rancher/rke2/agent/kubelet.kubeconfig
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the file permissions of the
---client-ca-file:
-chmod 644
-
-**Audit Script:** `check_cafile_permissions.sh`
-
-```bash
-#!/usr/bin/env bash
-
-CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
-CAFILE=/node$CAFILE
-if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
-if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_cafile_permissions.sh
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
-
-### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the ownership of the --client-ca-file.
-chown root:root
-
-**Audit Script:** `check_cafile_ownership.sh`
-
-```bash
-#!/usr/bin/env bash
-
-CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
-CAFILE=/node$CAFILE
-if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
-if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_cafile_ownership.sh
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chmod 644 /etc/rancher/rke2/rke2.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/rancher/rke2/rke2.yaml; then stat -c permissions=%a /etc/rancher/rke2/rke2.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
-
-### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chown root:root /etc/rancher/rke2/rke2.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/rancher/rke2/rke2.yaml; then stat -c %U:%G /etc/rancher/rke2/rke2.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-## 4.2 Kubelet
-### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
-`false`.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
-`--anonymous-auth=false`
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
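-On RKE2 the kubelet is not driven by a kubeadm drop-in unit; flags are typically set via `kubelet-arg` in `/etc/rancher/rke2/config.yaml`. A hedged sketch for a standard RKE2 install:
-
-```yaml
-# /etc/rancher/rke2/config.yaml — extra kubelet flags.
-kubelet-arg:
-  - "anonymous-auth=false"
-  - "authorization-mode=Webhook"
-```
-
-Restart RKE2 afterwards (for example, `systemctl restart rke2-server` or `rke2-agent`, depending on the node role).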
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3727 3667 3 23:26 ? 00:00:11 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-112 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c1cd514-db7b-4692-a1c4-cacb2656161f --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
-using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---authorization-mode=Webhook
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3727 3667 3 23:26 ? 00:00:11 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-112 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c1cd514-db7b-4692-a1c4-cacb2656161f --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
-the location of the client CA file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---client-ca-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3727 3667 3 23:26 ? 00:00:11 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-112 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c1cd514-db7b-4692-a1c4-cacb2656161f --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.4 Ensure that the --read-only-port argument is set to 0 (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---read-only-port=0
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
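On RKE2 the kubelet is not managed through a kubeadm drop-in file; extra kubelet flags are normally passed via the `kubelet-arg` key in the RKE2 configuration file. A minimal sketch (merge with your existing settings and restart the `rke2-server` or `rke2-agent` service afterwards):

```yaml
# /etc/rancher/rke2/config.yaml — sketch, not a complete configuration
kubelet-arg:
  - "read-only-port=0"
```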
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--read-only-port' is equal to '0' OR '--read-only-port' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3727 3667 3 23:26 ? 00:00:11 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-112 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c1cd514-db7b-4692-a1c4-cacb2656161f --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
-value other than 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---streaming-connection-idle-timeout=5m
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
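Where a kubelet config file is used instead of flags, the equivalent setting is a field of the `KubeletConfiguration` object; a minimal sketch, with the value taken from the remediation above:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Any value other than 0 avoids idle connections staying open forever.
streamingConnectionIdleTimeout: 5m
```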
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{.streamingConnectionIdleTimeout}' is present OR '{.streamingConnectionIdleTimeout}' is not present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVXpPVE00TUI0WERUSXpNREl5TmpJek1qVXpPRm9YRFRNek1ESXlNekl6TWpVegpPRm93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFNemt6T0RCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJOUHllSlcwNE9lMUN6clo5cHplN09WWlliMlh4QmNvWUJ5Z3JRcUwKYUdpR29tN2xLNGs2ZW1uaUZQekpiOU9EQ0hDRmYyVUZaVXZDQTQxcUVmNVB3NlNqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnhUdGJTbTVMTnFrditkYXQrCmczalBWWXJjcERBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBN08wV1NNVHo4djgwM3BZK3hFeDR2SUZaVEMraVoKUHVDc082eFRuVVVtb2dJaEFPY1NqQVdkUVJ0UmJvaUhJbTd2RHZuM1czT1JmYlUwaXU2UUhNcXdsTGs4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRpZ0F3SUJBZ0lJRjc3RlZRT1l5RzR3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU16a3pPREFlRncweU16QXlNall5TXpJMU16aGFGdzB5TkRBeQpNall5TXpJMU16aGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUXg5Z1lxUzVCbndwTDIKcUdOeStjNnR0Yk15VmE3ZTdzT0J1UWdaWHBHZ1hNcGxOZHFhVy9lZjNZTjQwTk5RUnl2SWdWeTMzU0ZHSFJ0VgpqaXBpOXRZbG8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVVyNlUvYVNMcm1ONkxiaDZHS1JJa3NoM1M1bEF3Q2dZSUtvWkl6ajBFQXdJRFNBQXcKUlFJaEFPS2paZmxRVU11RnZldlFkYzg3ckxPcnhoNUtyUGhlQUtkY0Y4YWdielFJQWlCcHBmUGNMMFRoZ1g5UAptcCtOZHJqa1hvQU5SRTlEWVdIRUlRbDdubytDdWc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZVRDQ0FSK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFrTVNJd0lBWURWUVFEREJseWEyVXlMV05zCmFXVnVkQzFqWVVBeE5qYzNORFV6T1RNNE1CNFhEVEl6TURJeU5qSXpNalV6T0ZvWERUTXpNREl5TXpJek1qVXoKT0Zvd0pERWlNQ0FHQTFVRUF3d1pjbXRsTWkxamJHbGxiblF0WTJGQU1UWTNOelExTXprek9EQlpNQk1HQnlxRwpTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUFCQlhFZ0x2Z3JLV09KdkZtVnJhNEhyY05YdmNwN3JMN3VFNW1IcFpkCmJmT2xkRDRkVlJ4NjRxak9DeUNpc2Vsczk4WDJLSXlieGNSNkpnbFU2VXRoOU5xalFqQkFNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlN2cFQ5cEl1dVkzb3R1SG9ZcApFaVN5SGRMbVVEQUtCZ2dxaGtqT1BRUURBZ05JQURCRkFpRUF3R2xqWXUxZkJpMHZROFczbWxueXVDNUJqMlBBCm14Sm1uS3BVSG8ydjJBZ0NJRndiblM3ajROUGtCT2hzRjJBeFhEZlZzdExoRWpqbmhPRHlQek1kT01STQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpZblVvZ2U2bHVuNFN1WVVaN3VBQTVGYXV6blBaQzV2WHlpc1R2SVRjOXFvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTWZZR0trdVFaOEtTOXFoamN2bk9yYld6TWxXdTN1N0RnYmtJR1Y2Um9GektaVFhhbWx2MwpuOTJEZU5EVFVFY3J5SUZjdDkwaFJoMGJWWTRxWXZiV0pRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---protect-kernel-defaults=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
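RKE2 exposes this setting as a top-level key in its own configuration file, so no `kubelet-arg` entry is needed; a sketch:

```yaml
# /etc/rancher/rke2/config.yaml — sketch, not a complete configuration
protect-kernel-defaults: true
```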
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--protect-kernel-defaults' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3727 3667 3 23:26 ? 00:00:11 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-112 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c1cd514-db7b-4692-a1c4-cacb2656161f --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove the --make-iptables-util-chains argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{.makeIPTablesUtilChains}' is present OR '{.makeIPTablesUtilChains}' is not present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVXpPVE00TUI0WERUSXpNREl5TmpJek1qVXpPRm9YRFRNek1ESXlNekl6TWpVegpPRm93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFNemt6T0RCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJOUHllSlcwNE9lMUN6clo5cHplN09WWlliMlh4QmNvWUJ5Z3JRcUwKYUdpR29tN2xLNGs2ZW1uaUZQekpiOU9EQ0hDRmYyVUZaVXZDQTQxcUVmNVB3NlNqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnhUdGJTbTVMTnFrditkYXQrCmczalBWWXJjcERBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBN08wV1NNVHo4djgwM3BZK3hFeDR2SUZaVEMraVoKUHVDc082eFRuVVVtb2dJaEFPY1NqQVdkUVJ0UmJvaUhJbTd2RHZuM1czT1JmYlUwaXU2UUhNcXdsTGs4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRpZ0F3SUJBZ0lJRjc3RlZRT1l5RzR3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU16a3pPREFlRncweU16QXlNall5TXpJMU16aGFGdzB5TkRBeQpNall5TXpJMU16aGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUXg5Z1lxUzVCbndwTDIKcUdOeStjNnR0Yk15VmE3ZTdzT0J1UWdaWHBHZ1hNcGxOZHFhVy9lZjNZTjQwTk5RUnl2SWdWeTMzU0ZHSFJ0VgpqaXBpOXRZbG8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVVyNlUvYVNMcm1ONkxiaDZHS1JJa3NoM1M1bEF3Q2dZSUtvWkl6ajBFQXdJRFNBQXcKUlFJaEFPS2paZmxRVU11RnZldlFkYzg3ckxPcnhoNUtyUGhlQUtkY0Y4YWdielFJQWlCcHBmUGNMMFRoZ1g5UAptcCtOZHJqa1hvQU5SRTlEWVdIRUlRbDdubytDdWc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZVRDQ0FSK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFrTVNJd0lBWURWUVFEREJseWEyVXlMV05zCmFXVnVkQzFqWVVBeE5qYzNORFV6T1RNNE1CNFhEVEl6TURJeU5qSXpNalV6T0ZvWERUTXpNREl5TXpJek1qVXoKT0Zvd0pERWlNQ0FHQTFVRUF3d1pjbXRsTWkxamJHbGxiblF0WTJGQU1UWTNOelExTXprek9EQlpNQk1HQnlxRwpTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUFCQlhFZ0x2Z3JLV09KdkZtVnJhNEhyY05YdmNwN3JMN3VFNW1IcFpkCmJmT2xkRDRkVlJ4NjRxak9DeUNpc2Vsczk4WDJLSXlieGNSNkpnbFU2VXRoOU5xalFqQkFNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlN2cFQ5cEl1dVkzb3R1SG9ZcApFaVN5SGRMbVVEQUtCZ2dxaGtqT1BRUURBZ05JQURCRkFpRUF3R2xqWXUxZkJpMHZROFczbWxueXVDNUJqMlBBCm14Sm1uS3BVSG8ydjJBZ0NJRndiblM3ajROUGtCT2hzRjJBeFhEZlZzdExoRWpqbmhPRHlQek1kT01STQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpZblVvZ2U2bHVuNFN1WVVaN3VBQTVGYXV6blBaQzV2WHlpc1R2SVRjOXFvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTWZZR0trdVFaOEtTOXFoamN2bk9yYld6TWxXdTN1N0RnYmtJR1Y2Um9GektaVFhhbWx2MwpuOTJEZU5EVFVFY3J5SUZjdDkwaFJoMGJWWTRxWXZiV0pRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-### 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and remove the --hostname-override argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-### 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---event-qps=0
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
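If the kubelet is managed through a config file, `eventRecordQPS` is the matching field; the value below is illustrative only (0 disables rate limiting so every event is captured, at the cost of a potential event flood):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Illustrative: 0 captures all events; a small positive value rate-limits event creation.
eventRecordQPS: 0
```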
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{.eventRecordQPS}' is present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVXpPVE00TUI0WERUSXpNREl5TmpJek1qVXpPRm9YRFRNek1ESXlNekl6TWpVegpPRm93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFNemt6T0RCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJOUHllSlcwNE9lMUN6clo5cHplN09WWlliMlh4QmNvWUJ5Z3JRcUwKYUdpR29tN2xLNGs2ZW1uaUZQekpiOU9EQ0hDRmYyVUZaVXZDQTQxcUVmNVB3NlNqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnhUdGJTbTVMTnFrditkYXQrCmczalBWWXJjcERBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBN08wV1NNVHo4djgwM3BZK3hFeDR2SUZaVEMraVoKUHVDc082eFRuVVVtb2dJaEFPY1NqQVdkUVJ0UmJvaUhJbTd2RHZuM1czT1JmYlUwaXU2UUhNcXdsTGs4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRpZ0F3SUJBZ0lJRjc3RlZRT1l5RzR3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU16a3pPREFlRncweU16QXlNall5TXpJMU16aGFGdzB5TkRBeQpNall5TXpJMU16aGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUXg5Z1lxUzVCbndwTDIKcUdOeStjNnR0Yk15VmE3ZTdzT0J1UWdaWHBHZ1hNcGxOZHFhVy9lZjNZTjQwTk5RUnl2SWdWeTMzU0ZHSFJ0VgpqaXBpOXRZbG8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVVyNlUvYVNMcm1ONkxiaDZHS1JJa3NoM1M1bEF3Q2dZSUtvWkl6ajBFQXdJRFNBQXcKUlFJaEFPS2paZmxRVU11RnZldlFkYzg3ckxPcnhoNUtyUGhlQUtkY0Y4YWdielFJQWlCcHBmUGNMMFRoZ1g5UAptcCtOZHJqa1hvQU5SRTlEWVdIRUlRbDdubytDdWc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZVRDQ0FSK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFrTVNJd0lBWURWUVFEREJseWEyVXlMV05zCmFXVnVkQzFqWVVBeE5qYzNORFV6T1RNNE1CNFhEVEl6TURJeU5qSXpNalV6T0ZvWERUTXpNREl5TXpJek1qVXoKT0Zvd0pERWlNQ0FHQTFVRUF3d1pjbXRsTWkxamJHbGxiblF0WTJGQU1UWTNOelExTXprek9EQlpNQk1HQnlxRwpTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUFCQlhFZ0x2Z3JLV09KdkZtVnJhNEhyY05YdmNwN3JMN3VFNW1IcFpkCmJmT2xkRDRkVlJ4NjRxak9DeUNpc2Vsczk4WDJLSXlieGNSNkpnbFU2VXRoOU5xalFqQkFNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlN2cFQ5cEl1dVkzb3R1SG9ZcApFaVN5SGRMbVVEQUtCZ2dxaGtqT1BRUURBZ05JQURCRkFpRUF3R2xqWXUxZkJpMHZROFczbWxueXVDNUJqMlBBCm14Sm1uS3BVSG8ydjJBZ0NJRndiblM3ajROUGtCT2hzRjJBeFhEZlZzdExoRWpqbmhPRHlQek1kT01STQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpZblVvZ2U2bHVuNFN1WVVaN3VBQTVGYXV6blBaQzV2WHlpc1R2SVRjOXFvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTWZZR0trdVFaOEtTOXFoamN2bk9yYld6TWxXdTN1N0RnYmtJR1Y2Um9GektaVFhhbWx2MwpuOTJEZU5EVFVFY3J5SUZjdDkwaFJoMGJWWTRxWXZiV0pRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
-of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
-to the location of the corresponding private key file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
---tls-cert-file=
---tls-private-key-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
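Expressed as `kubelet-arg` entries on RKE2, this looks roughly like the sketch below. The paths are the RKE2 defaults visible in the audit output above; RKE2 already sets them, so you normally do not need to configure this by hand:

```yaml
# /etc/rancher/rke2/config.yaml — sketch; RKE2 sets these paths by default
kubelet-arg:
  - "tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt"
  - "tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key"
```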
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3727 3667 3 23:26 ? 00:00:11 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-112 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c1cd514-db7b-4692-a1c4-cacb2656161f --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to add the line `rotateCertificates: true` or
-remove it altogether to use the default value.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
-variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{.rotateCertificates}' is present OR '{.rotateCertificates}' is not present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVXpPVE00TUI0WERUSXpNREl5TmpJek1qVXpPRm9YRFRNek1ESXlNekl6TWpVegpPRm93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFNemt6T0RCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJOUHllSlcwNE9lMUN6clo5cHplN09WWlliMlh4QmNvWUJ5Z3JRcUwKYUdpR29tN2xLNGs2ZW1uaUZQekpiOU9EQ0hDRmYyVUZaVXZDQTQxcUVmNVB3NlNqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnhUdGJTbTVMTnFrditkYXQrCmczalBWWXJjcERBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBN08wV1NNVHo4djgwM3BZK3hFeDR2SUZaVEMraVoKUHVDc082eFRuVVVtb2dJaEFPY1NqQVdkUVJ0UmJvaUhJbTd2RHZuM1czT1JmYlUwaXU2UUhNcXdsTGs4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRpZ0F3SUJBZ0lJRjc3RlZRT1l5RzR3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU16a3pPREFlRncweU16QXlNall5TXpJMU16aGFGdzB5TkRBeQpNall5TXpJMU16aGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUXg5Z1lxUzVCbndwTDIKcUdOeStjNnR0Yk15VmE3ZTdzT0J1UWdaWHBHZ1hNcGxOZHFhVy9lZjNZTjQwTk5RUnl2SWdWeTMzU0ZHSFJ0VgpqaXBpOXRZbG8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVVyNlUvYVNMcm1ONkxiaDZHS1JJa3NoM1M1bEF3Q2dZSUtvWkl6ajBFQXdJRFNBQXcKUlFJaEFPS2paZmxRVU11RnZldlFkYzg3ckxPcnhoNUtyUGhlQUtkY0Y4YWdielFJQWlCcHBmUGNMMFRoZ1g5UAptcCtOZHJqa1hvQU5SRTlEWVdIRUlRbDdubytDdWc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZVRDQ0FSK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFrTVNJd0lBWURWUVFEREJseWEyVXlMV05zCmFXVnVkQzFqWVVBeE5qYzNORFV6T1RNNE1CNFhEVEl6TURJeU5qSXpNalV6T0ZvWERUTXpNREl5TXpJek1qVXoKT0Zvd0pERWlNQ0FHQTFVRUF3d1pjbXRsTWkxamJHbGxiblF0WTJGQU1UWTNOelExTXprek9EQlpNQk1HQnlxRwpTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUFCQlhFZ0x2Z3JLV09KdkZtVnJhNEhyY05YdmNwN3JMN3VFNW1IcFpkCmJmT2xkRDRkVlJ4NjRxak9DeUNpc2Vsczk4WDJLSXlieGNSNkpnbFU2VXRoOU5xalFqQkFNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlN2cFQ5cEl1dVkzb3R1SG9ZcApFaVN5SGRMbVVEQUtCZ2dxaGtqT1BRUURBZ05JQURCRkFpRUF3R2xqWXUxZkJpMHZROFczbWxueXVDNUJqMlBBCm14Sm1uS3BVSG8ydjJBZ0NJRndiblM3ajROUGtCT2hzRjJBeFhEZlZzdExoRWpqbmhPRHlQek1kT01STQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpZblVvZ2U2bHVuNFN1WVVaN3VBQTVGYXV6blBaQzV2WHlpc1R2SVRjOXFvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTWZZR0trdVFaOEtTOXFoamN2bk9yYld6TWxXdTN1N0RnYmtJR1Y2Um9GektaVFhhbWx2MwpuOTJEZU5EVFVFY3J5SUZjdDkwaFJoMGJWWTRxWXZiV0pRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
---feature-gates=RotateKubeletServerCertificate=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
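On RKE2 the feature gate would be passed through `kubelet-arg` rather than a kubeadm drop-in; a sketch:

```yaml
# /etc/rancher/rke2/config.yaml — sketch, not a complete configuration
kubelet-arg:
  - "feature-gates=RotateKubeletServerCertificate=true"
```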
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{.featureGates.RotateKubeletServerCertificate}' is present OR '{.featureGates.RotateKubeletServerCertificate}' is not present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVXpPVE00TUI0WERUSXpNREl5TmpJek1qVXpPRm9YRFRNek1ESXlNekl6TWpVegpPRm93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFNemt6T0RCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJOUHllSlcwNE9lMUN6clo5cHplN09WWlliMlh4QmNvWUJ5Z3JRcUwKYUdpR29tN2xLNGs2ZW1uaUZQekpiOU9EQ0hDRmYyVUZaVXZDQTQxcUVmNVB3NlNqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnhUdGJTbTVMTnFrditkYXQrCmczalBWWXJjcERBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBN08wV1NNVHo4djgwM3BZK3hFeDR2SUZaVEMraVoKUHVDc082eFRuVVVtb2dJaEFPY1NqQVdkUVJ0UmJvaUhJbTd2RHZuM1czT1JmYlUwaXU2UUhNcXdsTGs4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRpZ0F3SUJBZ0lJRjc3RlZRT1l5RzR3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU16a3pPREFlRncweU16QXlNall5TXpJMU16aGFGdzB5TkRBeQpNall5TXpJMU16aGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUXg5Z1lxUzVCbndwTDIKcUdOeStjNnR0Yk15VmE3ZTdzT0J1UWdaWHBHZ1hNcGxOZHFhVy9lZjNZTjQwTk5RUnl2SWdWeTMzU0ZHSFJ0VgpqaXBpOXRZbG8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVVyNlUvYVNMcm1ONkxiaDZHS1JJa3NoM1M1bEF3Q2dZSUtvWkl6ajBFQXdJRFNBQXcKUlFJaEFPS2paZmxRVU11RnZldlFkYzg3ckxPcnhoNUtyUGhlQUtkY0Y4YWdielFJQWlCcHBmUGNMMFRoZ1g5UAptcCtOZHJqa1hvQU5SRTlEWVdIRUlRbDdubytDdWc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZVRDQ0FSK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFrTVNJd0lBWURWUVFEREJseWEyVXlMV05zCmFXVnVkQzFqWVVBeE5qYzNORFV6T1RNNE1CNFhEVEl6TURJeU5qSXpNalV6T0ZvWERUTXpNREl5TXpJek1qVXoKT0Zvd0pERWlNQ0FHQTFVRUF3d1pjbXRsTWkxamJHbGxiblF0WTJGQU1UWTNOelExTXprek9EQlpNQk1HQnlxRwpTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUFCQlhFZ0x2Z3JLV09KdkZtVnJhNEhyY05YdmNwN3JMN3VFNW1IcFpkCmJmT2xkRDRkVlJ4NjRxak9DeUNpc2Vsczk4WDJLSXlieGNSNkpnbFU2VXRoOU5xalFqQkFNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlN2cFQ5cEl1dVkzb3R1SG9ZcApFaVN5SGRMbVVEQUtCZ2dxaGtqT1BRUURBZ05JQURCRkFpRUF3R2xqWXUxZkJpMHZROFczbWxueXVDNUJqMlBBCm14Sm1uS3BVSG8ydjJBZ0NJRndiblM3ajROUGtCT2hzRjJBeFhEZlZzdExoRWpqbmhPRHlQek1kT01STQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpZblVvZ2U2bHVuNFN1WVVaN3VBQTVGYXV6blBaQzV2WHlpc1R2SVRjOXFvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTWZZR0trdVFaOEtTOXFoamN2bk9yYld6TWxXdTN1N0RnYmtJR1Y2Um9GektaVFhhbWx2MwpuOTJEZU5EVFVFY3J5SUZjdDkwaFJoMGJWWTRxWXZiV0pRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-or to a subset of these values.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --tls-cipher-suites parameter as follows, or to a subset of these values.
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
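As an RKE2 `kubelet-arg` entry this becomes a single comma-separated value; the sketch below uses a subset of the ciphers listed in the remediation above:

```yaml
# /etc/rancher/rke2/config.yaml — sketch; cipher list is a subset of the remediation's list
kubelet-arg:
  - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
```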
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{range .tlsCipherSuites[:]}{}{','}{end}' is present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVXpPVE00TUI0WERUSXpNREl5TmpJek1qVXpPRm9YRFRNek1ESXlNekl6TWpVegpPRm93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFNemt6T0RCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJOUHllSlcwNE9lMUN6clo5cHplN09WWlliMlh4QmNvWUJ5Z3JRcUwKYUdpR29tN2xLNGs2ZW1uaUZQekpiOU9EQ0hDRmYyVUZaVXZDQTQxcUVmNVB3NlNqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnhUdGJTbTVMTnFrditkYXQrCmczalBWWXJjcERBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBN08wV1NNVHo4djgwM3BZK3hFeDR2SUZaVEMraVoKUHVDc082eFRuVVVtb2dJaEFPY1NqQVdkUVJ0UmJvaUhJbTd2RHZuM1czT1JmYlUwaXU2UUhNcXdsTGs4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRpZ0F3SUJBZ0lJRjc3RlZRT1l5RzR3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU16a3pPREFlRncweU16QXlNall5TXpJMU16aGFGdzB5TkRBeQpNall5TXpJMU16aGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUXg5Z1lxUzVCbndwTDIKcUdOeStjNnR0Yk15VmE3ZTdzT0J1UWdaWHBHZ1hNcGxOZHFhVy9lZjNZTjQwTk5RUnl2SWdWeTMzU0ZHSFJ0VgpqaXBpOXRZbG8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVVyNlUvYVNMcm1ONkxiaDZHS1JJa3NoM1M1bEF3Q2dZSUtvWkl6ajBFQXdJRFNBQXcKUlFJaEFPS2paZmxRVU11RnZldlFkYzg3ckxPcnhoNUtyUGhlQUtkY0Y4YWdielFJQWlCcHBmUGNMMFRoZ1g5UAptcCtOZHJqa1hvQU5SRTlEWVdIRUlRbDdubytDdWc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZVRDQ0FSK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFrTVNJd0lBWURWUVFEREJseWEyVXlMV05zCmFXVnVkQzFqWVVBeE5qYzNORFV6T1RNNE1CNFhEVEl6TURJeU5qSXpNalV6T0ZvWERUTXpNREl5TXpJek1qVXoKT0Zvd0pERWlNQ0FHQTFVRUF3d1pjbXRsTWkxamJHbGxiblF0WTJGQU1UWTNOelExTXprek9EQlpNQk1HQnlxRwpTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUFCQlhFZ0x2Z3JLV09KdkZtVnJhNEhyY05YdmNwN3JMN3VFNW1IcFpkCmJmT2xkRDRkVlJ4NjRxak9DeUNpc2Vsczk4WDJLSXlieGNSNkpnbFU2VXRoOU5xalFqQkFNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlN2cFQ5cEl1dVkzb3R1SG9ZcApFaVN5SGRMbVVEQUtCZ2dxaGtqT1BRUURBZ05JQURCRkFpRUF3R2xqWXUxZkJpMHZROFczbWxueXVDNUJqMlBBCm14Sm1uS3BVSG8ydjJBZ0NJRndiblM3ajROUGtCT2hzRjJBeFhEZlZzdExoRWpqbmhPRHlQek1kT01STQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpZblVvZ2U2bHVuNFN1WVVaN3VBQTVGYXV6blBaQzV2WHlpc1R2SVRjOXFvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTWZZR0trdVFaOEtTOXFoamN2bk9yYld6TWxXdTN1N0RnYmtJR1Y2Um9GektaVFhhbWx2MwpuOTJEZU5EVFVFY3J5SUZjdDkwaFJoMGJWWTRxWXZiV0pRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-## 5.1 RBAC and Service Accounts
-### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
-if they need this role or if they could use a role with fewer privileges.
-Where possible, first bind users to a lower privileged role and then remove the
-clusterrolebinding to the cluster-admin role:
-kubectl delete clusterrolebinding [name]
-
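As a concrete illustration of the triage step, the filtering amounts to keeping only the bindings whose roleRef is `cluster-admin`. The sample data below is invented and stands in for the NAME/ROLE columns of a real `kubectl get clusterrolebindings` listing:

```shell
# Sketch for 5.1.1: keep only bindings that grant cluster-admin.
# "bindings" is invented sample data standing in for kubectl output.
bindings='cluster-admin ClusterRole/cluster-admin
helm-operator ClusterRole/cluster-admin
view-only ClusterRole/view'

# Print the names of bindings that grant cluster-admin and should be reviewed.
echo "$bindings" | awk '$2 == "ClusterRole/cluster-admin" {print $1}'
```

Each name this prints is a candidate for rebinding to a lower-privileged role and then removal with `kubectl delete clusterrolebinding [name]`, as the remediation describes.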
-### 5.1.2 Minimize access to secrets (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove get, list and watch access to Secret objects in the cluster.
-
-### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible replace any use of wildcards in clusterroles and roles with specific
-objects or actions.
-
-### 5.1.4 Minimize access to create pods (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove create access to pod objects in the cluster.
-
-### 5.1.5 Ensure that default service accounts are not actively used. (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Create explicit service accounts wherever a Kubernetes workload requires specific access
-to the Kubernetes API server.
-Modify the configuration of each default service account to include this value:
-automountServiceAccountToken: false
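The remediated default service account can be sketched as a manifest; a minimal example, assuming the core/v1 field layout and using `default` as an illustrative namespace:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default   # repeat for each namespace in the cluster
automountServiceAccountToken: false
```

Equivalently, `kubectl patch serviceaccount default -p '{"automountServiceAccountToken": false}'` can be run against each namespace.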
-
-**Audit Script:** `check_for_default_sa.sh`
-
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l)
-if [[ ${count_sa} -gt 0 ]]; then
- echo "false"
- exit
-fi
-
-for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name")
-do
- for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[].kind=="ServiceAccount" and .subjects[].name=="default") or (.subjects[].kind=="Group" and .subjects[].name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"')
- do
- read kind name <<<$(IFS=","; echo $result)
- resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[] != "podsecuritypolicies")' | wc -l)
- if [[ ${resource_count} -gt 0 ]]; then
- echo "false"
- exit
- fi
- done
-done
-
-
-echo "true"
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_default_sa.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-Error from server (Forbidden): serviceaccounts is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "serviceaccounts" in API group "" at the cluster scope Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "calico-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-fleet-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-impersonation-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cis-operator-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "default" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-node-lease" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-public" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "local" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "tigera-operator" true
-```
-
-### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Modify the definition of pods and service accounts which do not need to mount service
-account tokens to disable it.
-
-### 5.1.7 Avoid use of system:masters group (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Remove the system:masters group from all users in the cluster.
-
-### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove the impersonate, bind and escalate rights from subjects.
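Roles carrying these verbs can be flagged with a filter like the one below. This is a sketch only: the JSON is a hypothetical stand-in for `kubectl get clusterroles -o json`, and `jq` is assumed available.

```bash
# Hypothetical sample of `kubectl get clusterroles -o json` output.
sample='{"items":[
  {"metadata":{"name":"escalator"},
   "rules":[{"apiGroups":["rbac.authorization.k8s.io"],"resources":["clusterroles"],"verbs":["bind","escalate"]}]},
  {"metadata":{"name":"viewer"},
   "rules":[{"apiGroups":[""],"resources":["pods"],"verbs":["get","list"]}]}
]}'

# Print roles whose rules grant bind, impersonate, or escalate.
echo "$sample" | jq -r '.items[]
  | select([.rules[]? | .verbs[]?] | any(. == "bind" or . == "impersonate" or . == "escalate"))
  | .metadata.name'
```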
-
-## 5.2 Pod Security Standards
-### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that either Pod Security Admission or an external policy control system is in place
-for every namespace which contains user workloads.
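With Pod Security Admission as the control mechanism, this is done by labeling each user-workload namespace. A minimal sketch, using the upstream `pod-security.kubernetes.io` label keys; the namespace name is a placeholder and `restricted` is an illustrative choice of level:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app            # hypothetical namespace name
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
```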
-
-### 5.2.2 Minimize the admission of privileged containers (Manual)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of privileged containers.
-
-**Audit:**
-
-```bash
-kubectl get psp global-restricted-psp -o json | jq -r '.spec.runAsUser.rule'
-```
-
-**Expected Result**:
-
-```console
-'MustRunAsNonRoot' is present
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp"
-```
-
-### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostPID` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostIPC` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostNetwork` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.7 Minimize the admission of root containers (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
-or `MustRunAs` with a range of UIDs that does not include 0 is set.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with the `NET_RAW` capability.
-
-**Audit:**
-
-```bash
-kubectl get psp global-restricted-psp -o json | jq -r .spec.requiredDropCapabilities[]
-```
-
-**Expected Result**:
-
-```console
-'ALL' is present
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp"
-```
-
-### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that `allowedCapabilities` is not present in policies for the cluster unless
-it is set to an empty array.
-
-### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the use of capabilities in applications running on your cluster. Where a namespace
-contains applications which do not require any Linux capabilities to operate, consider adding
-a PSP which forbids the admission of containers which do not drop all capabilities.
-
-### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
-
-### 5.2.12 Minimize the admission of HostPath volumes (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `hostPath` volumes.
-
-### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers which use `hostPort` sections.
-
-## 5.3 Network Policies and CNI
-### 5.3.1 Ensure that the CNI in use supports Network Policies (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If the CNI plugin in use does not support network policies, consideration should be given to
-making use of a different plugin, or finding an alternate mechanism for restricting traffic
-in the Kubernetes cluster.
-
-**Audit:**
-
-```bash
-kubectl get pods --all-namespaces --selector='k8s-app in (calico-node, canal, cilium)' -o name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=1
-```
-
-### 5.3.2 Ensure that all Namespaces have Network Policies defined (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create NetworkPolicy objects as you need them.
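A common starting point is a default-deny policy per namespace, with more specific allow policies layered on top. A minimal sketch (the namespace name is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-app        # hypothetical namespace
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```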
-
-**Audit Script:** `check_for_rke2_network_policies.sh`
-
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-for namespace in kube-system kube-public default; do
- policy_count=$(/var/lib/rancher/rke2/bin/kubectl get networkpolicy -n ${namespace} -o json | jq -r '.items | length')
- if [ ${policy_count} -eq 0 ]; then
- echo "false"
- exit
- fi
-done
-
-echo "true"
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_rke2_network_policies.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is present
-```
-
-**Returned Value**:
-
-```console
-true
-```
-
-## 5.4 Secrets Management
-### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If possible, rewrite application code to read Secrets from mounted secret files, rather than
-from environment variables.
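The mounted-file pattern can be sketched as below; all names are placeholders. The Secret is exposed under `/etc/creds` instead of through `env`/`envFrom`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app                 # hypothetical pod
spec:
  containers:
    - name: app
      image: example/app:1  # hypothetical image
      volumeMounts:
        - name: creds
          mountPath: /etc/creds
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: app-creds   # hypothetical Secret
```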
-
-### 5.4.2 Consider external secret storage (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Refer to the Secrets management options offered by your cloud provider or a third-party
-secrets management solution.
-
-## 5.5 Extensible Admission Control
-### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and setup image provenance.
-
-## 5.7 General Policies
-### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create namespaces for objects in your deployment as you need
-them.
-
-### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
-An example is as below:
-securityContext:
-  seccompProfile:
-    type: RuntimeDefault
-
-### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
-suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
-Containers.
-
-### 5.7.4 The default namespace should not be used (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
-resources and that all new resources are created in a specific namespace.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.24-k8s-v1.24.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.24-k8s-v1.24.md
new file mode 100644
index 00000000000..f14207e929d
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.24-k8s-v1.24.md
@@ -0,0 +1,3202 @@
+---
+title: RKE2 Self-Assessment Guide - CIS Benchmark v1.24 - K8s v1.24
+---
+
+
+
+
+
+This document is a companion to the [RKE2 hardening guide](rke2-hardening-guide.md), which provides prescriptive guidance on hardening RKE2 clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
+
+This guide corresponds to the following versions of Rancher, the CIS Benchmark, and Kubernetes:
+
+| Rancher Version | CIS Benchmark Version | Kubernetes Version |
+|-----------------|-----------------------|--------------------|
+| Rancher v2.7    | Benchmark v1.24       | Kubernetes v1.24   |
+
+This guide walks through the various controls and provides updated example commands to audit compliance in Rancher-created clusters. Because RKE2 runs the Kubernetes control plane components as static pods managed by the kubelet rather than as standalone services, many of the control verification checks in the CIS Kubernetes Benchmark do not apply and will return a result of `Not Applicable`.
+
+This document is intended for Rancher operators, security teams, auditors, and decision makers.
+
+For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of CIS Kubernetes Benchmark v1.24. You can download the benchmark, after creating a free account, from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
+
+## Testing Methodology
+
+RKE2 launches the control plane components as static pods, managed by the kubelet, and uses containerd as the container runtime. Configuration is defined by arguments passed to the containers at initialization time or through a configuration file.
+
+Where control audits differ from the original CIS benchmark, audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE2 nodes. The commands also make use of [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/), both of which are required for testing and evaluating the results.
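Before running the audits below, it can help to confirm the required tools are on the PATH. A small sketch (the tool list is illustrative; adjust for your installation):

```bash
# Report any missing prerequisite; prints nothing when all are present.
check_tools() {
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
  done
}
check_tools kubectl jq stat
```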
+
+:::note
+
+This guide only covers `automated` (previously called `scored`) tests.
+
+:::
+
+### Controls
+
+## 1.1 Control Plane Node Configuration Files
+### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the
+control plane node.
+For example, chmod 644 /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 644 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; then stat -c %a /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'600' is present
+```
+
+**Returned Value**:
+
+```console
+644
+```
+
+### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'permissions' is equal to '600'
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 644 /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'644' is equal to '644'
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600
+
+**Audit:**
+
+```bash
+ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600 permissions=644
+```
+
+### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root
+
+**Audit:**
+
+```bash
+ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
+find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root root:root
+```
+
+### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above). For example,
+chmod 700 /var/lib/etcd
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/db/etcd
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 700, expected 700 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=700
+```
+
+### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above).
+For example, chown etcd:etcd /var/lib/etcd
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/db/etcd
+```
+
+**Expected Result**:
+
+```console
+'etcd:etcd' is present
+```
+
+**Returned Value**:
+
+```console
+etcd:etcd
+```
+
+### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/admin.conf
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/cred/admin.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /etc/kubernetes/admin.conf
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/cred/admin.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 scheduler
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root scheduler
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 controllermanager
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/cred/controller.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root /var/lib/rancher/rke2/server/cred/controller.kubeconfig
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/cred/controller.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown -R root:root /var/lib/rancher/rke2/server/tls/
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/tls
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod -R 600 /var/lib/rancher/rke2/server/tls/*.crt
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/tls/*.crt
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644
+```
+
+### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod -R 600 /var/lib/rancher/rke2/server/tls/*.key
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/tls/*.key
+```
+
+**Expected Result**:
+
+```console
+'permissions' is equal to '600'
+```
+
+**Returned Value**:
+
+```console
+permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600
+```
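"600 or more restrictive" means the mode may set no bit outside owner read/write: in octal, `mode & 0177` must be zero (no owner-execute bit, no group or other bits). One way to script that test over a glob of key files, shown here against scratch files rather than `/var/lib/rancher/rke2/server/tls/*.key`:

```shell
# Scratch files standing in for the TLS key files.
dir=$(mktemp -d)
touch "$dir/a.key" "$dir/b.key"
chmod 600 "$dir/a.key"
chmod 400 "$dir/b.key"   # 400 is more restrictive than 600, so it passes too

fail=0
for f in "$dir"/*.key; do
  mode=$(stat -c '%a' "$f")
  # The leading 0 makes the shell read the digits as octal; 0177 masks
  # owner-execute plus all group/other permission bits.
  if [ $(( 0$mode & 0177 )) -ne 0 ]; then
    echo "too permissive: $f ($mode)"
    fail=1
  fi
done
[ "$fail" -eq 0 ] && echo "all key files are 600 or more restrictive"

rm -rf "$dir"
```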
+
+## 1.2 API Server
+### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--anonymous-auth=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
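The audit greps the full `kube-apiserver` command line out of `ps`; pulling a single flag's value out of that long line is then a small `grep -o` pipeline. The sketch below runs against a shortened sample string rather than a live process:

```shell
# Shortened stand-in for the ps output shown above.
cmdline='kube-apiserver --allow-privileged=true --anonymous-auth=false --authorization-mode=Node,RBAC'

# Extract just the value of --anonymous-auth; prints "false".
# The bare "--" tells grep the pattern that follows is not an option.
printf '%s\n' "$cmdline" | grep -o -- '--anonymous-auth=[^ ]*' | cut -d= -f2
```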
+
+### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and configure alternate mechanisms for authentication. Then,
+edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and remove the --token-auth-file= parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--token-auth-file' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and remove the `DenyServiceExternalIPs`
+from enabled admission plugins.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
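The expected result here is a membership test on a comma-separated list: `DenyServiceExternalIPs` must not appear in the `--enable-admission-plugins` value (or the flag must be absent entirely). A word-boundary-safe way to test that against the value shown in the output above:

```shell
# Value of --enable-admission-plugins taken from the command line above.
plugins='NodeRestriction,PodSecurityPolicy'

# grep -w matches whole comma-separated entries, so a plugin name that is
# a substring of another entry cannot produce a false positive.
if printf '%s\n' "$plugins" | grep -qw 'DenyServiceExternalIPs'; then
  echo 'check 1.2.3: fail'
else
  echo 'check 1.2.3: pass'
fi
```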
+
+### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and remove the --kubelet-https parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--kubelet-https' is present OR '--kubelet-https' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the
+apiserver and kubelets. Then, edit API server pod specification file
+/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml on the control plane node and set the
+kubelet client certificate and key parameters as below.
+--kubelet-client-certificate=
+--kubelet-client-key=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
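Check 1.2.5's expected result is a conjunction: both `--kubelet-client-certificate` and `--kubelet-client-key` must appear on the command line. That can be scripted as two greps over the same line; the sample below is abbreviated from the output above, with illustrative paths.

```shell
# Abbreviated stand-in for the kube-apiserver command line (paths illustrative).
cmdline='kube-apiserver --kubelet-client-certificate=/path/client.crt --kubelet-client-key=/path/client.key'

# Both flags must be present for the check to pass.
if printf '%s\n' "$cmdline" | grep -q -- '--kubelet-client-certificate=' &&
   printf '%s\n' "$cmdline" | grep -q -- '--kubelet-client-key='; then
  echo 'check 1.2.5: pass'
else
  echo 'check 1.2.5: fail'
fi
```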
+
+### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between
+the apiserver and kubelets. Then, edit the API server pod specification file
+/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml on the control plane node and set the
+--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
+--kubelet-certificate-authority=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--kubelet-certificate-authority' is present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
+One such example could be as below.
+--authorization-mode=RBAC
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' does not have 'AlwaysAllow'
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to a value that includes Node,
+for example `--authorization-mode=Node,RBAC`.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' has 'Node'
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
+for example `--authorization-mode=Node,RBAC`.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' has 'RBAC'
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
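+On RKE2, the pod specification file `/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml` is regenerated by the server on startup, so in practice flag changes are usually made through the RKE2 configuration file instead. A minimal sketch, assuming RKE2's `kube-apiserver-arg` passthrough (the `Node,RBAC` value mirrors the default seen in the audit output above):
+
+```yaml
+# /etc/rancher/rke2/config.yaml — sketch; RKE2 passes these flags through to kube-apiserver
+kube-apiserver-arg:
+  - "authorization-mode=Node,RBAC"
+```
+
+Restart the `rke2-server` service for the change to take effect, then re-run the audit command to confirm the flag value.
+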
+### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and set the desired limits in a configuration file.
+Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+and set the following parameters:
+--enable-admission-plugins=...,EventRateLimit,...
+`--admission-control-config-file=<path/to/configuration/file>`
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'EventRateLimit'
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
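+The admission control configuration referenced in the remediation can be sketched as two small files. The file names and limit values here are illustrative; the `AdmissionConfiguration` and `EventRateLimit` `Configuration` kinds follow the upstream Kubernetes admission controller documentation:
+
+```yaml
+# admission-config.yaml (illustrative path) — wires the EventRateLimit plugin to its config
+apiVersion: apiserver.config.k8s.io/v1
+kind: AdmissionConfiguration
+plugins:
+  - name: EventRateLimit
+    path: eventconfig.yaml
+---
+# eventconfig.yaml (separate file, referenced above) — example server-wide event rate limits
+apiVersion: eventratelimit.admission.k8s.io/v1alpha1
+kind: Configuration
+limits:
+  - type: Server
+    qps: 50
+    burst: 100
+```
+
+Pass the first file via `--admission-control-config-file` alongside `--enable-admission-plugins=...,EventRateLimit,...`.
+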
+### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
+value that does not include AlwaysAdmit.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to include
+AlwaysPullImages, for example:
+--enable-admission-plugins=...,AlwaysPullImages,...
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'AlwaysPullImages'
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
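+Because RKE2 regenerates the kube-apiserver pod manifest, one way to add this plugin is through the RKE2 configuration file. A sketch, assuming the `kube-apiserver-arg` passthrough, that extends the plugin list visible in the audit output above (`NodeRestriction,PodSecurityPolicy`):
+
+```yaml
+# /etc/rancher/rke2/config.yaml — sketch; appends AlwaysPullImages to the audited plugin list
+kube-apiserver-arg:
+  - "enable-admission-plugins=NodeRestriction,PodSecurityPolicy,AlwaysPullImages"
+```
+
+Note that AlwaysPullImages forces every pod start to contact the image registry, which is why this check is Manual rather than Automated; weigh the credential-protection benefit against the added registry dependency before enabling it.
+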
+### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to include
+SecurityContextDeny, unless PodSecurityPolicy is already in place, for example:
+--enable-admission-plugins=...,SecurityContextDeny,...
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and create ServiceAccount objects as per your environment.
+Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
+value that does not include ServiceAccount.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --disable-admission-plugins parameter to
+ensure it does not include NamespaceLifecycle.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.16 Ensure that the admission control plugin NodeRestriction is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
+Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to a
+value that includes NodeRestriction.
+--enable-admission-plugins=...,NodeRestriction,...
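
Note that RKE2 regenerates the static pod manifest, so edits made directly to `kube-apiserver.yaml` may not persist. A more durable approach is to pass the flag through the RKE2 server configuration file via the `kube-apiserver-arg` key. The sketch below writes the fragment to a temporary file so it is safe to run anywhere; on a real node the file would be `/etc/rancher/rke2/config.yaml`, applied on the next `rke2-server` restart (verify the key against the docs for your RKE2 version):

```bash
# Sketch, assuming RKE2's kube-apiserver-arg config key; writing to a
# temp file here instead of /etc/rancher/rke2/config.yaml so the
# example has no side effects.
conf=$(mktemp)
cat <<'EOF' > "$conf"
kube-apiserver-arg:
  - "enable-admission-plugins=NodeRestriction"
EOF
cat "$conf"
```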
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'NodeRestriction'
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.17 Ensure that the --secure-port argument is not set to 0 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and either remove the --secure-port parameter or
+set it to a different (non-zero) desired port.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--secure-port' is greater than 0 OR '--secure-port' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.18 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
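
The audit command above dumps the entire API server command line; checking a single flag, as the benchmark's "`--profiling` is equal to `false`" test does, amounts to extracting one `--flag=value` pair from it. A minimal sketch (the `flag_value` helper is ours for illustration, not part of the benchmark tooling):

```bash
# flag_value FLAG CMDLINE -- print the value of --FLAG= from a command line.
flag_value() {
  printf '%s\n' "$2" | tr ' ' '\n' | sed -n "s/^--$1=//p"
}

# Abbreviated sample command line for illustration.
cmdline='kube-apiserver --profiling=false --secure-port=6443'
flag_value profiling "$cmdline"    # prints: false
```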
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.19 Ensure that the --audit-log-path argument is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-path parameter to a suitable path and
+file where you would like audit logs to be written, for example,
+--audit-log-path=/var/log/apiserver/audit.log
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-path' is present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.20 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxage parameter to 30
+or as an appropriate number of days, for example,
+--audit-log-maxage=30
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxage' is greater or equal to 30
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
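
The audit for numeric checks like this one boils down to pulling the flag's value out of the `ps` output and comparing it against a threshold. A minimal sketch of that comparison in POSIX shell, run against a sample command line rather than a live process (the `line` string and the threshold `30` here are illustrative):

```shell
# Sample kube-apiserver command line; on a real node this would come from:
#   /bin/ps -ef | grep kube-apiserver | grep -v grep
line='kube-apiserver --audit-log-maxage=30 --audit-log-maxbackup=10'

# Extract the number after --audit-log-maxage= and compare it to 30
value=$(printf '%s\n' "$line" | grep -o -- '--audit-log-maxage=[0-9]*' | cut -d= -f2)
if [ -n "$value" ] && [ "$value" -ge 30 ]; then
  echo "PASS: --audit-log-maxage=$value"
else
  echo "FAIL: --audit-log-maxage missing or below 30"
fi
```

The same pattern applies to the `--audit-log-maxbackup` and `--audit-log-maxsize` checks below, with the flag name and threshold substituted.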
+
+### 1.2.21 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
+value. For example,
+--audit-log-maxbackup=10
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxbackup' is greater or equal to 10
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.22 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
+For example, to set it as 100 MB, --audit-log-maxsize=100
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxsize' is greater or equal to 100
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.24 Ensure that the --service-account-lookup argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--service-account-lookup=true
+Alternatively, you can delete the --service-account-lookup parameter from this file so
+that the default takes effect.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-lookup' is not present OR '--service-account-lookup' is present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
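
This check passes whether the flag is absent (the upstream default is `true`) or explicitly set; the only failing state is an explicit `--service-account-lookup=false`. A hedged sketch of that three-way logic against a sample command line (the string is illustrative, not taken from a live node):

```shell
# Sample command line without the flag; absence means the default applies
line='kube-apiserver --anonymous-auth=false --profiling=false'

case "$line" in
  *--service-account-lookup=false*) echo "FAIL: lookup explicitly disabled" ;;
  *--service-account-lookup=true*)  echo "PASS: lookup explicitly enabled" ;;
  *)                                echo "PASS: flag absent, default (true) applies" ;;
esac
```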
+
+### 1.2.25 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --service-account-key-file parameter
+to the public key file for service accounts. For example,
+--service-account-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true root 1017917 2489 83 16:16 ? 00:00:00 kubectl get --server=https://localhost:6443/ --client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --raw=/readyz root 1017945 2489 75 16:16 ? 00:00:00 kubectl get --server=https://localhost:6443/ --client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --raw=/livez
+```
+
+### 1.2.26 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate and key file parameters.
+--etcd-certfile=
+--etcd-keyfile=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--etcd-certfile' is present AND '--etcd-keyfile' is present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local 
--service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
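
Checks 1.2.26 and 1.2.27 only assert that the relevant flag pairs appear on the command line. A minimal presence test over a sample command line (the paths are illustrative, copied from the style of the audit output above):

```shell
# Sample command line containing both etcd TLS flags
line='kube-apiserver --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key'

status=PASS
for flag in --etcd-certfile --etcd-keyfile; do
  case "$line" in
    *"$flag="*) echo "$flag present" ;;
    *)          echo "$flag MISSING"; status=FAIL ;;
  esac
done
echo "$status"
```

For 1.2.27, substitute `--tls-cert-file` and `--tls-private-key-file` in the `for` list.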
+
+### 1.2.27 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the TLS certificate and private key file parameters.
+--tls-cert-file=
+--tls-private-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--tls-cert-file' is present AND '--tls-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.28 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the client certificate authority file.
+--client-ca-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--client-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.29 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate authority file parameter.
+--etcd-cafile=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--etcd-cafile' is present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.30 Ensure that the --encryption-provider-config argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --encryption-provider-config parameter to the path of that file.
+For example, --encryption-provider-config=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--encryption-provider-config' is present
+```
+
+**Returned Value**:
+
+```console
+root 2548 2489 10 Sep11 ? 02:10:01 kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction,PodSecurityPolicy --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
+TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
+TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
+TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
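
On RKE2 the apiserver manifest is generated by the server, so in practice extra flags like this are supplied through the server configuration rather than by editing the manifest by hand. A minimal sketch, assuming the standard `/etc/rancher/rke2/config.yaml` location and the `kube-apiserver-arg` option (verify both against your RKE2 version's documentation; the cipher list below is illustrative):

```yaml
# /etc/rancher/rke2/config.yaml -- illustrative fragment, not a complete config
kube-apiserver-arg:
  - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
```

The RKE2 server must be restarted for changes in this file to be regenerated into the static pod manifest.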
+
+### 1.2.33 Ensure that encryption providers are appropriately configured (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+In this file, choose aescbc, kms or secretbox as the encryption provider.
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if grep aescbc /var/lib/rancher/rke2/server/cred/encryption-config.json; then echo 0; fi'
+```
+
+**Expected Result**:
+
+```console
+'0' is present
+```
+
+**Returned Value**:
+
+```console
+{"kind":"EncryptionConfiguration","apiVersion":"apiserver.config.k8s.io/v1","resources":[{"resources":["secrets"],"providers":[{"aescbc":{"keys":[{"name":"aescbckey","secret":"akmMjUAq94YvsUftJpiA3b+9SClu0ESPBeckAI7KZBY="}]}},{"identity":{}}]}]} 0
+```
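
The audit above can be reproduced against any encryption configuration file. A minimal sketch using a sample file (the `/tmp` path and the redacted key material are illustrative, not from a real cluster; on a real node you would point the `grep` at the actual config path):

```shell
# Write a sample EncryptionConfiguration shaped like the returned value above
# (key material redacted deliberately -- never commit real keys).
cat > /tmp/encryption-config-sample.json <<'EOF'
{"kind":"EncryptionConfiguration","apiVersion":"apiserver.config.k8s.io/v1","resources":[{"resources":["secrets"],"providers":[{"aescbc":{"keys":[{"name":"aescbckey","secret":"REDACTED"}]}},{"identity":{}}]}]}
EOF

# Same shape as the benchmark's audit: print 0 when an aescbc provider is configured.
if grep -q aescbc /tmp/encryption-config-sample.json; then echo 0; fi
# prints: 0
```

Note that provider order matters in an EncryptionConfiguration: the first provider in the list is used to encrypt new writes, so `identity` should not come before `aescbc`.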
+
+## 1.3 Controller Manager
+### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
+for example, --terminated-pod-gc-threshold=10
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--terminated-pod-gc-threshold' is present
+```
+
+**Returned Value**:
+
+```console
+root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
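
With command lines this long, it is easier to isolate a single flag than to scan the whole line. A small sketch of that technique, run here against an abbreviated captured command line (a stand-in for live `ps -ef` output, which you would pipe through the same filter on a real node):

```shell
# Abbreviated command line standing in for live `ps -ef` output.
cmdline='kube-controller-manager --terminated-pod-gc-threshold=1000 --profiling=false --bind-address=127.0.0.1'

# Split on spaces and keep only the flag of interest.
echo "$cmdline" | tr ' ' '\n' | grep -- '--terminated-pod-gc-threshold'
# prints: --terminated-pod-gc-threshold=1000
```

The `--` after `grep` marks the end of options, so the pattern beginning with `--` is not mistaken for a grep flag.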
+
+### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node to set the below parameter.
+--use-service-account-credentials=true
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--use-service-account-credentials' is not equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node and set the --service-account-private-key-file parameter
+to the private key file for service accounts.
+--service-account-private-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
+--root-ca-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--root-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
+--feature-gates=RotateKubeletServerCertificate=true
+
+### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node and ensure the correct value for the --bind-address parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2743 2649 2 Sep11 ? 00:28:36 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+## 1.4 Scheduler
+### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Scheduler pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml file
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-scheduler | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 2707 2593 0 Sep11 ? 00:06:20 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
+```
+
+### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Scheduler pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
+on the control plane node and ensure the correct value for the --bind-address parameter
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-scheduler | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2707 2593 0 Sep11 ? 00:06:20 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
+```
+
+## 2 Etcd Node Configuration
+### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Follow the etcd service documentation and configure TLS encryption.
+Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
+on the master node and set the below parameters.
+--cert-file=
+--key-file=
+
+### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
+node and set the below parameter.
+--client-cert-auth="true"
+
+### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
+node and either remove the --auto-tls parameter or set it to false.
+--auto-tls=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
+```
+
+**Returned Value**:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-10-113 ETCD_UNSUPPORTED_ARCH= NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 POD_HASH=560b915d9afb672c20c2c1e8664bdf8f FILE_HASH=166aebdce42ff62fcdd2cefc9e7ed8f7c5b562d219ca6afec8f73adc654f65e7 HOME=/
+```
+
+### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Follow the etcd service documentation and configure peer TLS encryption as appropriate
+for your etcd cluster.
+Then, edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the
+master node and set the below parameters.
+--peer-cert-file=
+--peer-key-file=
+
+### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
+node and set the below parameter.
+--peer-client-cert-auth=true
+
+### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
+node and either remove the --peer-auto-tls parameter or set it to false.
+--peer-auto-tls=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
+```
+
+**Returned Value**:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-10-113 ETCD_UNSUPPORTED_ARCH= NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 POD_HASH=560b915d9afb672c20c2c1e8664bdf8f FILE_HASH=166aebdce42ff62fcdd2cefc9e7ed8f7c5b562d219ca6afec8f73adc654f65e7 HOME=/
+```
+
+### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+[Manual test]
+Follow the etcd documentation and create a dedicated certificate authority setup for the
+etcd service.
+Then, edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the
+master node and set the below parameter.
+--trusted-ca-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Audit Config:**
+
+```bash
+cat /var/lib/rancher/rke2/server/db/etcd/config
+```
+
+**Expected Result**:
+
+```console
+'ETCD_TRUSTED_CA_FILE' is present OR '{.peer-transport-security.trusted-ca-file}' is equal to '/var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt'
+```
+
+**Returned Value**:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-10-113 ETCD_UNSUPPORTED_ARCH= NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 POD_HASH=560b915d9afb672c20c2c1e8664bdf8f FILE_HASH=166aebdce42ff62fcdd2cefc9e7ed8f7c5b562d219ca6afec8f73adc654f65e7 HOME=/
+```
+
+## 3.1 Authentication and Authorization
+### 3.1.1 Client certificate authentication should not be used for users (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
+implemented in place of client certificates.
+
+## 3.2 Logging
+### 3.2.1 Ensure that a minimal audit policy is created (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Create an audit policy file for your cluster.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep | grep -o audit-policy-file
+```
+
+**Expected Result**:
+
+```console
+'audit-policy-file' is equal to 'audit-policy-file'
+```
+
+**Returned Value**:
+
+```console
+audit-policy-file
+```
+
+### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the audit policy provided for the cluster and ensure that it covers
+at least the following areas,
+- Access to Secrets managed by the cluster. Care should be taken to only
+ log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
+ order to avoid risk of logging sensitive data.
+- Modification of Pod and Deployment objects.
+- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
+For most requests, minimally logging at the Metadata level is recommended
+(the most basic level of logging).
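+
+A minimal audit policy sketch covering these areas (illustrative only; review it against your own workloads before use):
+
+```yaml
+apiVersion: audit.k8s.io/v1
+kind: Policy
+rules:
+  # Secrets, ConfigMaps and TokenReviews at Metadata only, so request
+  # bodies containing sensitive data are never logged.
+  - level: Metadata
+    resources:
+      - group: ""
+        resources: ["secrets", "configmaps"]
+      - group: "authentication.k8s.io"
+        resources: ["tokenreviews"]
+  # Record use of the exec, port-forward and proxy subresources.
+  - level: Metadata
+    resources:
+      - group: ""
+        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
+  # Baseline: everything else at Metadata, the most basic level.
+  - level: Metadata
+```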
+
+## 4.1 Worker Node Configuration Files
+### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+
+### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+
+### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
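+
+Re-running the audit below after remediation should then report `permissions=600`. The mechanics can be sketched on a scratch file (on a real worker node the target is the kubeproxy kubeconfig path above):
+
+```bash
+# Demonstration on a temporary file, not the real kubeconfig.
+f=$(mktemp)
+chmod 600 "$f"
+stat -c permissions=%a "$f"   # prints permissions=600
+rm -f "$f"
+```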
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example, chown root:root /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+'600' is present
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the following command to modify the file permissions of the file
+referenced by --client-ca-file:
+chmod 600
+
+**Audit Script:** `check_cafile_permissions.sh`
+
+```bash
+#!/usr/bin/env bash
+
+CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
+CAFILE=/node$CAFILE
+if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
+if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_cafile_permissions.sh
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
+
+### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the following command to modify the ownership of the file
+referenced by --client-ca-file:
+chown root:root
+
+**Audit Script:** `check_cafile_ownership.sh`
+
+```bash
+#!/usr/bin/env bash
+
+CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
+CAFILE=/node$CAFILE
+if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
+if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_cafile_ownership.sh
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chmod 600 /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chown root:root /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+## 4.2 Kubelet
+### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
+`false`.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+`--anonymous-auth=false`
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
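+
+When the kubelet is driven by a config file, the first option in the remediation corresponds to the following KubeletConfiguration fragment (field names per the upstream kubelet config API; the file location varies by distribution):
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+authentication:
+  anonymous:
+    enabled: false
+```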
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 2291 2246 4 Sep11 ? 00:50:01 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-10-113 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=6685035f-32be-4d1b-a06d-7ea5f42467f5 --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
+using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_AUTHZ_ARGS variable.
+--authorization-mode=Webhook
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' does not have 'AlwaysAllow'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 2291 2246 4 Sep11 ? 00:50:01 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-10-113 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=6685035f-32be-4d1b-a06d-7ea5f42467f5 --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
+the location of the client CA file.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_AUTHZ_ARGS variable.
+--client-ca-file=
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'--client-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 2291 2246 4 Sep11 ? 00:50:01 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-10-113 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=6685035f-32be-4d1b-a06d-7ea5f42467f5 --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+### 4.2.4 Verify that the --read-only-port argument is set to 0 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+--read-only-port=0
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'--read-only-port' is equal to '0' OR '--read-only-port' is present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 2291 2246 4 Sep11 ? 00:50:01 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-10-113 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=6685035f-32be-4d1b-a06d-7ea5f42467f5 --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
+value other than 0.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+--streaming-connection-idle-timeout=5m
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'{.streamingConnectionIdleTimeout}' is present OR '{.streamingConnectionIdleTimeout}' is not present
+```
+
+**Returned Value**:
+
+```console
+apiVersion: v1 clusters: - cluster: server: https://127.0.0.1:6443 certificate-authority: /var/lib/rancher/rke2/agent/server-ca.crt name: local contexts: - context: cluster: local namespace: default user: user name: Default current-context: Default kind: Config preferences: {} users: - name: user user: client-certificate: /var/lib/rancher/rke2/agent/client-kubelet.crt client-key: /var/lib/rancher/rke2/agent/client-kubelet.key
+```
+
+### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+--protect-kernel-defaults=true
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'--protect-kernel-defaults' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 2291 2246 4 Sep11 ? 00:50:01 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-10-113 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=6685035f-32be-4d1b-a06d-7ea5f42467f5 --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove the --make-iptables-util-chains argument from the
+KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'{.makeIPTablesUtilChains}' is present OR '{.makeIPTablesUtilChains}' is not present
+```
+
+**Returned Value**:
+
+```console
+apiVersion: v1 clusters: - cluster: server: https://127.0.0.1:6443 certificate-authority: /var/lib/rancher/rke2/agent/server-ca.crt name: local contexts: - context: cluster: local namespace: default user: user name: Default current-context: Default kind: Config preferences: {} users: - name: user user: client-certificate: /var/lib/rancher/rke2/agent/client-kubelet.crt client-key: /var/lib/rancher/rke2/agent/client-kubelet.key
+```
+
+### 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and remove the --hostname-override argument from the
+KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+### 4.2.9 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
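+
+In a kubelet config file, the relevant field is `eventRecordQPS`; the value below is purely illustrative, as the appropriate level depends on your environment:
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+eventRecordQPS: 5
+```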
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'{.eventRecordQPS}' is present
+```
+
+**Returned Value**:
+
+```console
+apiVersion: v1 clusters: - cluster: server: https://127.0.0.1:6443 certificate-authority: /var/lib/rancher/rke2/agent/server-ca.crt name: local contexts: - context: cluster: local namespace: default user: user name: Default current-context: Default kind: Config preferences: {} users: - name: user user: client-certificate: /var/lib/rancher/rke2/agent/client-kubelet.crt client-key: /var/lib/rancher/rke2/agent/client-kubelet.key
+```
+
+### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
+of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
+to the location of the corresponding private key file.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
+--tls-cert-file=
+--tls-private-key-file=
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
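For the config-file route, a minimal sketch using the RKE2 serving-certificate paths that appear in the audited command line:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
tlsCertFile: /var/lib/rancher/rke2/agent/serving-kubelet.crt
tlsPrivateKeyFile: /var/lib/rancher/rke2/agent/serving-kubelet.key
```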
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'--tls-cert-file' is present AND '--tls-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 2291 2246 4 Sep11 ? 00:50:01 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-10-113 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=6685035f-32be-4d1b-a06d-7ea5f42467f5 --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
remove it altogether to use the default value.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
+variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
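In config-file form, the explicit setting is a single line, sketched here:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
rotateCertificates: true
```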
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'{.rotateCertificates}' is present OR '{.rotateCertificates}' is not present
+```
+
+**Returned Value**:
+
+```console
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
    certificate-authority: /var/lib/rancher/rke2/agent/server-ca.crt
  name: local
contexts:
- context:
    cluster: local
    namespace: default
    user: user
  name: Default
current-context: Default
kind: Config
preferences: {}
users:
- name: user
  user:
    client-certificate: /var/lib/rancher/rke2/agent/client-kubelet.crt
    client-key: /var/lib/rancher/rke2/agent/client-kubelet.key
+```
+
+### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
+--feature-gates=RotateKubeletServerCertificate=true
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'{.featureGates.RotateKubeletServerCertificate}' is present OR '{.featureGates.RotateKubeletServerCertificate}' is not present
+```
+
+**Returned Value**:
+
+```console
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
    certificate-authority: /var/lib/rancher/rke2/agent/server-ca.crt
  name: local
contexts:
- context:
    cluster: local
    namespace: default
    user: user
  name: Default
current-context: Default
kind: Config
preferences: {}
users:
- name: user
  user:
    client-certificate: /var/lib/rancher/rke2/agent/client-kubelet.crt
    client-key: /var/lib/rancher/rke2/agent/client-kubelet.key
+```
+
+### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
+TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+or to a subset of these values.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the --tls-cipher-suites parameter as follows, or to a subset of these values.
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
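A config-file sketch carrying a subset of the suites listed above:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
tlsCipherSuites:
  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
```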
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'{range .tlsCipherSuites[:]}{}{','}{end}' is present
+```
+
+**Returned Value**:
+
+```console
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
    certificate-authority: /var/lib/rancher/rke2/agent/server-ca.crt
  name: local
contexts:
- context:
    cluster: local
    namespace: default
    user: user
  name: Default
current-context: Default
kind: Config
preferences: {}
users:
- name: user
  user:
    client-certificate: /var/lib/rancher/rke2/agent/client-kubelet.crt
    client-key: /var/lib/rancher/rke2/agent/client-kubelet.key
+```
+
+## 5.1 RBAC and Service Accounts
+### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
+if they need this role or if they could use a role with fewer privileges.
+Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role:
+kubectl delete clusterrolebinding [name]
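The identification step can be done with a custom-columns listing. Because binding names vary per cluster, the snippet below runs the same filter over hypothetical sample output:

```bash
# Live form (assumption: a kubeconfig with cluster access):
#   kubectl get clusterrolebindings -o custom-columns='NAME:.metadata.name,ROLE:.roleRef.name'
# The same filter, applied to stand-in sample output:
sample='ca-binding     cluster-admin
view-binding   view'
printf '%s\n' "$sample" | awk '$2 == "cluster-admin" {print $1}'   # prints ca-binding
```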
+
+### 5.1.2 Minimize access to secrets (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove get, list and watch access to Secret objects in the cluster.
+
+### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible replace any use of wildcards in clusterroles and roles with specific
+objects or actions.
+
+### 5.1.4 Minimize access to create pods (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove create access to pod objects in the cluster.
+
+### 5.1.5 Ensure that default service accounts are not actively used. (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Create explicit service accounts wherever a Kubernetes workload requires specific access
+to the Kubernetes API server.
+Modify the configuration of each default service account to include this value
+automountServiceAccountToken: false
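The default-service-account change described above, sketched for a single namespace:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default   # apply the same change in every namespace
automountServiceAccountToken: false
```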
+
+**Audit Script:** `check_for_default_sa.sh`
+
+```bash
+#!/bin/bash
+
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l)
+if [[ ${count_sa} -gt 0 ]]; then
+ echo "false"
+ exit
+fi
+
+for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name")
+do
+ for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[]?.kind=="ServiceAccount" and .subjects[]?.name=="default") or (.subjects[]?.kind=="Group" and .subjects[]?.name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"')
+ do
+ read kind name <<<$(IFS=","; echo $result)
+ resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[]? != "podsecuritypolicies")' | wc -l)
+ if [[ ${resource_count} -gt 0 ]]; then
+ echo "false"
+ exit
+ fi
+ done
+done
+
+
+echo "true"
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_default_sa.sh
+```
+
+**Expected Result**:
+
+```console
+'true' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+true
+```
+
+### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Modify the definition of pods and service accounts which do not need to mount service
+account tokens to disable it.
+
+### 5.1.7 Avoid use of system:masters group (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Remove the system:masters group from all users in the cluster.
+
+### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove the impersonate, bind and escalate rights from subjects.
+
+## 5.2 Pod Security Standards
+### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that either Pod Security Admission or an external policy control system is in place
+for every namespace which contains user workloads.
+
+### 5.2.2 Minimize the admission of privileged containers (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of privileged containers.
+
+**Audit:**
+
+```bash
+kubectl get psp global-restricted-psp && kubectl get psp global-restricted-psp -o json | jq -r ".spec.runAsUser.rule" || kubectl get psp restricted-noroot-psp && kubectl get psp restricted-noroot-psp -o json | jq -r ".spec.runAsUser.rule"
+```
+
+**Expected Result**:
+
+```console
+'MustRunAsNonRoot' is equal to 'MustRunAsNonRoot'
+```
+
+**Returned Value**:
+
+```console
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Error from server (NotFound): podsecuritypolicies.policy "global-restricted-psp" not found
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
NAME                    PRIV    CAPS   SELINUX    RUNASUSER          FSGROUP     SUPGROUP    READONLYROOTFS   VOLUMES
restricted-noroot-psp   false          RunAsAny   MustRunAsNonRoot   MustRunAs   MustRunAs   false            configMap,emptyDir,projected,secret,downwardAPI,csi,persistentVolumeClaim
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
MustRunAsNonRoot
+```
+
+### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostPID` containers.
+
+**Audit:**
+
+```bash
+kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result**:
+
+```console
+'count' is greater than 0
+```
+
+**Returned Value**:
+
+```console
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
--count=5
+```
+
+### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostIPC` containers.
+
+**Audit:**
+
+```bash
+kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result**:
+
+```console
+'count' is greater than 0
+```
+
+**Returned Value**:
+
+```console
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
--count=5
+```
+
+### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostNetwork` containers.
+
+**Audit:**
+
+```bash
+kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result**:
+
+```console
+'count' is greater than 0
+```
+
+**Returned Value**:
+
+```console
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
--count=2
+```
+
+### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+**Audit:**
+
+```bash
+kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result**:
+
+```console
+'count' is greater than 0
+```
+
+**Returned Value**:
+
+```console
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
--count=4
+```
+
+### 5.2.7 Minimize the admission of root containers (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs` with the range of UIDs not including 0, is set.
+
+**Audit:**
+
+```bash
+kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result**:
+
+```console
+'count' is greater than 0
+```
+
+**Returned Value**:
+
+```console
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
--count=4
+```
+
+### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+**Audit:**
+
+```bash
+kubectl get psp global-restricted-psp && kubectl get psp global-restricted-psp -o json | jq -r .spec.requiredDropCapabilities[] || kubectl get psp restricted-noroot-psp && kubectl get psp restricted-noroot-psp -o json | jq -r .spec.requiredDropCapabilities[]
+```
+
+**Expected Result**:
+
+```console
+'ALL' is equal to 'ALL'
+```
+
+**Returned Value**:
+
+```console
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Error from server (NotFound): podsecuritypolicies.policy "global-restricted-psp" not found
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
NAME                    PRIV    CAPS   SELINUX    RUNASUSER          FSGROUP     SUPGROUP    READONLYROOTFS   VOLUMES
restricted-noroot-psp   false          RunAsAny   MustRunAsNonRoot   MustRunAs   MustRunAs   false            configMap,emptyDir,projected,secret,downwardAPI,csi,persistentVolumeClaim
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
ALL
+```
+
+### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
+
+### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
+
+### 5.2.12 Minimize the admission of HostPath volumes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `hostPath` volumes.
+
+### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers which use `hostPort` sections.
+
+## 5.3 Network Policies and CNI
+### 5.3.1 Ensure that the CNI in use supports Network Policies (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If the CNI plugin in use does not support network policies, consideration should be given to
+making use of a different plugin, or finding an alternate mechanism for restricting traffic
+in the Kubernetes cluster.
+
+**Audit:**
+
+```bash
+kubectl get pods --all-namespaces --selector='k8s-app in (calico-node, canal, cilium)' -o name | wc -l | xargs -I {} echo '--count={}'
+```
+
+**Expected Result**:
+
+```console
+'count' is greater than 0
+```
+
+**Returned Value**:
+
+```console
+--count=1
+```
+
+### 5.3.2 Ensure that all Namespaces have Network Policies defined (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and create NetworkPolicy objects as you need them.
+
+**Audit Script:** `check_for_rke2_network_policies.sh`
+
+```bash
+#!/bin/bash
+
+set -eE
+
+handle_error() {
+ echo "false"
+}
+
+trap 'handle_error' ERR
+
+for namespace in kube-system kube-public default; do
+ policy_count=$(/var/lib/rancher/rke2/bin/kubectl get networkpolicy -n ${namespace} -o json | jq -r '.items | length')
+ if [ ${policy_count} -eq 0 ]; then
+ echo "false"
+ exit
+ fi
+done
+
+echo "true"
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_rke2_network_policies.sh
+```
+
+**Expected Result**:
+
+```console
+'true' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+true
+```
+
+## 5.4 Secrets Management
+### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If possible, rewrite application code to read Secrets from mounted secret files, rather than
+from environment variables.
+
+### 5.4.2 Consider external secret storage (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Refer to the Secrets management options offered by your cloud provider or a third-party
+secrets management solution.
+
+## 5.5 Extensible Admission Control
+### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and setup image provenance.
+
+## 5.7 General Policies
+### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create namespaces for objects in your deployment as you need
+them.
+
+### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
+An example is as below:
+ securityContext:
+ seccompProfile:
+ type: RuntimeDefault
+
+### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
+suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
+Containers.
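As a sketch, a container-level `securityContext` with commonly recommended restrictions (these particular values are illustrative, not mandated by this control):

```yaml
securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]
```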
+
+### 5.7.4 The default namespace should not be used (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
+resources and that all new resources are created in a specific namespace.
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md
new file mode 100644
index 00000000000..a51c617eea8
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md
@@ -0,0 +1,2967 @@
+---
title: RKE2 Self-Assessment Guide - CIS Benchmark v1.7 - K8s v1.25/v1.26/v1.27
+---
+
+
+
+
+
This document is a companion to the [RKE2 hardening guide](rke2-hardening-guide.md), which provides prescriptive guidance for hardening RKE2 clusters that run in production and are managed by Rancher. This benchmark guide helps you evaluate the security level of a hardened cluster against each control in the CIS Kubernetes Benchmark.
+
This guide corresponds to the following versions of Rancher, the CIS Benchmark, and Kubernetes:

| Rancher Version | CIS Benchmark Version | Kubernetes Version |
|-----------------|-----------------------|--------------------|
| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25/v1.26/v1.27 |
+
This guide walks through the various controls and provides updated example commands to audit compliance in Rancher-created clusters. Because Rancher and RKE2 install the Kubernetes services as containers, many of the control verification checks in the CIS Kubernetes Benchmark do not apply. These checks return a result of `Not Applicable`.

This document is intended for Rancher operators, security teams, auditors, and decision makers.

For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7. You can download the benchmark, after creating a free account, from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
+
## Testing Methodology

RKE2 launches the control plane components as static pods, managed by the kubelet, and uses containerd as the container runtime. Configuration is defined by arguments passed to the containers at initialization time or via configuration files.

Where a control's audit differs from the original CIS benchmark, Rancher-specific audit commands are provided for testing. Running the tests requires command-line access to all RKE2 node hosts. The commands also rely on [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/), which are needed to run the tests and evaluate their results.
+
+:::note
+
This guide covers only `automated` (previously called `scored`) tests.
+
+:::
+
+### Controls
+
+## 1.1 Control Plane Node Configuration Files
+### 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the
+control plane node.
+For example, chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; then find /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml -name '*etcd*' | xargs stat -c permissions=%a; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600
+
+**Audit:**
+
+```bash
ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
permissions=600
permissions=644
+```
+
+### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root
+
+**Audit:**
+
+```bash
ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
root:root
root:root
+```
+
+### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above). For example,
+chmod 700 /var/lib/etcd
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/db/etcd
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 700, expected 700 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=700
+```
+
+### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
+from the command 'ps -ef | grep etcd'.
+Run the below command (based on the etcd data directory found above).
+For example, chown etcd:etcd /var/lib/etcd
+
+### 1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600 /etc/kubernetes/admin.conf
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/cred/admin.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example, chown root:root /var/lib/rancher/rke2/server/cred/admin.kubeconfig
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/cred/admin.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+'600' is present
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod 600 /var/lib/rancher/rke2/server/cred/controller.kubeconfig
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/server/cred/controller.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/server/cred/controller.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+'600' is present
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown root:root /var/lib/rancher/rke2/server/cred/controller.kubeconfig
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/cred/controller.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chown -R root:root /var/lib/rancher/rke2/server/tls/
+
+**Audit:**
+
+```bash
+stat -c %U:%G /var/lib/rancher/rke2/server/tls
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod -R 600 /var/lib/rancher/rke2/server/tls/*.crt
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/tls/*.crt
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644
+```
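+
+The warning covers every certificate under `server/tls`, so the remediation is a glob-wide `chmod`. A sketch on a scratch directory (certificate names are illustrative):
+
+```bash
+d=$(mktemp -d)
+touch "$d/serving.crt" "$d/client-ca.crt"   # illustrative certificate names
+chmod 644 "$d"/*.crt                        # reproduce the warned-about state
+chmod 600 "$d"/*.crt                        # the remediation from above
+stat -c permissions=%a "$d"/*.crt           # prints permissions=600 for each file
+rm -rf "$d"
+```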
+
+### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on the control plane node.
+For example,
+chmod -R 600 /var/lib/rancher/rke2/server/tls/*.key
+
+**Audit:**
+
+```bash
+stat -c permissions=%a /var/lib/rancher/rke2/server/tls/*.key
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600
+```
+
+## 1.2 API Server
+### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--anonymous-auth=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and configure alternate mechanisms for authentication. Then,
+edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and remove the --token-auth-file= parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--token-auth-file' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
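+
+The pass/fail logic behind these process-line audits is substring matching on the flags in the `ps` output. A sketch of that evaluation against a sample command line rather than a live process (the sample flags are illustrative):
+
+```bash
+# Sample kube-apiserver command line; a real audit reads it from ps -ef.
+cmdline="kube-apiserver --anonymous-auth=false --authorization-mode=Node,RBAC"
+if printf '%s' "$cmdline" | grep -q -- '--token-auth-file'; then
+  echo "FAIL: --token-auth-file is present"
+else
+  echo "PASS: --token-auth-file is not present"
+fi
+```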
+
+### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and remove `DenyServiceExternalIPs`
+from the enabled admission plugins.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the
+apiserver and kubelets. Then, edit API server pod specification file
+/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml on the control plane node and set the
+kubelet client certificate and key parameters as below.
+--kubelet-client-certificate=
+--kubelet-client-key=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between
+the apiserver and kubelets. Then, edit the API server pod specification file
+/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml on the control plane node and set the
+--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
+--kubelet-certificate-authority=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--kubelet-certificate-authority' is present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
+One such example could be as below.
+--authorization-mode=RBAC
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' does not have 'AlwaysAllow'
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
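+
+Checks 1.2.6 and 1.2.7 both inspect the comma-separated `--authorization-mode` value: `AlwaysAllow` must be absent and `Node` must be present. A sketch of that evaluation on a sample value:
+
+```bash
+mode="Node,RBAC"   # sample value; a real audit reads it from the live flag
+case ",$mode," in
+  *,AlwaysAllow,*) echo "FAIL: AlwaysAllow is enabled" ;;
+  *,Node,*)        echo "PASS: Node is included and AlwaysAllow is not" ;;
+  *)               echo "FAIL: Node is missing" ;;
+esac
+```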
+
+### 1.2.7 Ensure that the --authorization-mode argument includes Node (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --authorization-mode parameter to a value that includes Node.
+--authorization-mode=Node,RBAC
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' has 'Node'
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file `/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml`
+on the control plane node and set the `--authorization-mode` parameter to a value that includes `RBAC`,
+for example `--authorization-mode=Node,RBAC`.
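+
+On RKE2, a common alternative to editing the generated static pod manifest (which RKE2 may regenerate) is to pass the flag through the server configuration file. A minimal sketch, assuming the default `/etc/rancher/rke2/config.yaml` location:
+
+```yaml
+# /etc/rancher/rke2/config.yaml
+# Restart the rke2-server service for the change to take effect.
+kube-apiserver-arg:
+  - "authorization-mode=Node,RBAC"
+```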
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' has 'RBAC'
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and set the desired limits in a configuration file.
+Then, edit the API server pod specification file `/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml`
+and set the following parameters:
+`--enable-admission-plugins=...,EventRateLimit,...`
+`--admission-control-config-file=`
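+
+A minimal sketch of the two configuration files, following the upstream Kubernetes admission configuration format (the file names and the limit values here are illustrative assumptions, not Rancher defaults):
+
+```yaml
+# admission-config.yaml, the file passed to --admission-control-config-file
+apiVersion: apiserver.config.k8s.io/v1
+kind: AdmissionConfiguration
+plugins:
+  - name: EventRateLimit
+    path: eventconfig.yaml
+---
+# eventconfig.yaml: limit Event writes to 50 qps with a burst of 100 per API server
+apiVersion: eventratelimit.admission.k8s.io/v1alpha1
+kind: Configuration
+limits:
+  - type: Server
+    qps: 50
+    burst: 100
+```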
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'EventRateLimit'
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file `/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml`
+on the control plane node and either remove the `--enable-admission-plugins` parameter, or set it to a
+value that does not include `AlwaysAdmit`.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file `/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml`
+on the control plane node and set the `--enable-admission-plugins` parameter to include
+`AlwaysPullImages`:
+`--enable-admission-plugins=...,AlwaysPullImages,...`
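+
+On RKE2, the plugin list can also be extended through the server configuration file instead of the pod manifest; any plugins already enabled (such as `NodeRestriction`) should be kept in the list. A sketch, assuming the default config path:
+
+```yaml
+# /etc/rancher/rke2/config.yaml
+# The value replaces the existing flag, so include previously enabled plugins.
+kube-apiserver-arg:
+  - "enable-admission-plugins=NodeRestriction,AlwaysPullImages"
+```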
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'AlwaysPullImages'
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Edit the API server pod specification file `/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml`
+on the control plane node and set the `--enable-admission-plugins` parameter to include
+`SecurityContextDeny`, unless `PodSecurityPolicy` is already in place:
+`--enable-admission-plugins=...,SecurityContextDeny,...`
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.13 Ensure that the admission control plugin ServiceAccount is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the documentation and create ServiceAccount objects as per your environment.
+Then, edit the API server pod specification file `/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml`
+on the control plane node and ensure that the `--disable-admission-plugins` parameter is set to a
+value that does not include `ServiceAccount`.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+root 1018768 2419 99 16:17 ? 00:00:00 kubectl get --server=https://localhost:6443/ --client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --raw=/readyz
+```
+
+### 1.2.14 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --disable-admission-plugins parameter to
+ensure it does not include NamespaceLifecycle.
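
The pass condition for this item ("absent, or present without `NamespaceLifecycle`") can be sketched as a small shell test. The `cmdline` value below is a shortened, hypothetical sample, not the full audit output:

```shell
# Shortened sample of a kube-apiserver command line (hypothetical excerpt)
cmdline='kube-apiserver --enable-admission-plugins=NodeRestriction --secure-port=6443'

# Pass if --disable-admission-plugins is absent, or present without NamespaceLifecycle
if printf '%s' "$cmdline" | grep -q -- '--disable-admission-plugins=[^ ]*NamespaceLifecycle'; then
  result=fail
else
  result=pass
fi
echo "$result"
```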
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.15 Ensure that the admission control plugin NodeRestriction is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
+Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --enable-admission-plugins parameter to a
+value that includes NodeRestriction.
+--enable-admission-plugins=...,NodeRestriction,...
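
On RKE2, the pod manifest above is regenerated by the rke2-server process, so apiserver flags are normally expressed through `kube-apiserver-arg` in `/etc/rancher/rke2/config.yaml` rather than by editing the manifest directly. A minimal sketch, writing to a temporary file here purely for illustration:

```shell
# Stand-in for /etc/rancher/rke2/config.yaml (temporary file for illustration)
cfg=$(mktemp)

cat >> "$cfg" <<'EOF'
kube-apiserver-arg:
  - "enable-admission-plugins=NodeRestriction"
EOF

# Confirm the argument was recorded; rke2-server applies it on restart
grep 'NodeRestriction' "$cfg"
```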
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'NodeRestriction'
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.16 Ensure that the --secure-port argument is not set to 0 - Note: This recommendation is obsolete and will be deleted per the consensus process (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and either remove the --secure-port parameter or
+set it to a different (non-zero) desired port.
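
The pass condition ("greater than 0 OR not present") can be expressed directly in shell. The `cmdline` value below is a shortened, hypothetical sample:

```shell
# Shortened sample of a kube-apiserver command line (hypothetical excerpt)
cmdline='kube-apiserver --secure-port=6443 --profiling=false'

# Extract the port if the flag is present; an empty result means "not set"
port=$(printf '%s\n' "$cmdline" | grep -oE -- '--secure-port=[0-9]+' | cut -d= -f2)

if [ -z "$port" ] || [ "$port" -gt 0 ]; then
  result=pass
else
  result=fail
fi
echo "$result secure-port=${port:-unset}"
```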
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--secure-port' is greater than 0 OR '--secure-port' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.17 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--profiling=false
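
A sketch of the value check, run against a shortened, hypothetical sample of the command line:

```shell
# Shortened sample of a kube-apiserver command line (hypothetical excerpt)
cmdline='kube-apiserver --profiling=false --secure-port=6443'

# The check requires the flag to be present and equal to "false"
profiling=$(printf '%s\n' "$cmdline" | grep -oE -- '--profiling=[a-z]+' | cut -d= -f2)
echo "$profiling"
```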
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.18 Ensure that the --audit-log-path argument is set (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-path parameter to a suitable path and
+file where you would like audit logs to be written, for example,
+--audit-log-path=/var/log/apiserver/audit.log
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-path' is present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxage parameter to 30
+or as an appropriate number of days, for example,
+--audit-log-maxage=30
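
A sketch of the numeric comparison behind this check, run against a shortened, hypothetical sample of the command line:

```shell
# Shortened sample of a kube-apiserver command line (hypothetical excerpt)
cmdline='kube-apiserver --audit-log-maxage=30 --audit-log-path=/var/log/audit.log'

maxage=$(printf '%s\n' "$cmdline" | grep -oE -- '--audit-log-maxage=[0-9]+' | cut -d= -f2)

# Pass when audit log retention is at least 30 days
if [ "$maxage" -ge 30 ]; then result=pass; else result=fail; fi
echo "$result"
```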
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxage' is greater or equal to 30
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
+value. For example,
+--audit-log-maxbackup=10
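+
+The remediation above sets one flag; verifying it boils down to reading that value out of the API server's command line. As a sketch, the comparison can be scripted with standard shell tools. The command line here is a hypothetical, abbreviated stand-in for the real `ps` output:
+
+```bash
+# Hypothetical, abbreviated kube-apiserver command line for illustration.
+cmdline='kube-apiserver --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100'
+
+# Extract the value that follows --audit-log-maxbackup= .
+value=$(printf '%s\n' "$cmdline" | grep -o -- '--audit-log-maxbackup=[0-9]*' | cut -d= -f2)
+
+if [ -n "$value" ] && [ "$value" -ge 10 ]; then
+  echo "pass: --audit-log-maxbackup=$value"
+else
+  echo "fail: --audit-log-maxbackup is missing or below 10"
+fi
+```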
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-log-maxbackup' is greater or equal to 10
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
+For example, to set it as 100 MB, --audit-log-maxsize=100
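+
+Note that on RKE2 the kube-apiserver manifest is regenerated by the rke2 server process, so flag changes are normally made through the server configuration rather than by editing the manifest directly. A sketch, assuming the standard `/etc/rancher/rke2/config.yaml` location and the `kube-apiserver-arg` key:
+
+```yaml
+# /etc/rancher/rke2/config.yaml (sketch; restart the rke2-server service to apply)
+kube-apiserver-arg:
+  - "audit-log-maxsize=100"
+```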
+
+### 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the parameter below as appropriate, if needed.
+For example, --request-timeout=300s
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--request-timeout' is not present OR '--request-timeout' is present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.23 Ensure that the --service-account-lookup argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the parameter below.
+--service-account-lookup=true
+Alternatively, you can delete the --service-account-lookup parameter from this file so
+that the default takes effect.
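+
+The pass condition is lenient: the flag may be absent (the Kubernetes default is `true`) or set explicitly; the case to catch is an explicit `--service-account-lookup=false`. A sketch of that logic against a hypothetical, abbreviated command line:
+
+```bash
+# Hypothetical, abbreviated command line; the flag is absent here.
+cmdline='kube-apiserver --anonymous-auth=false --profiling=false'
+
+case "$cmdline" in
+  *--service-account-lookup=false*) result="fail" ;;  # explicitly disabled
+  *)                                result="pass" ;;  # absent or =true
+esac
+echo "service-account-lookup check: $result"
+```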
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-lookup' is not present OR '--service-account-lookup' is present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.24 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --service-account-key-file parameter
+to the public key file for service accounts. For example,
+--service-account-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.25 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate and key file parameters.
+--etcd-certfile=
+--etcd-keyfile=
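+
+Beyond checking that both flags are present, it can be worth confirming that the configured certificate and key actually belong together: they match when both carry the same public key. A minimal sketch using standard `openssl` subcommands; it generates a throwaway pair for demonstration rather than touching real cluster files.
+
+```bash
+# Demo only: create a throwaway key and self-signed certificate.
+# In a real audit, point these paths at the files named by the
+# --etcd-certfile and --etcd-keyfile flags instead.
+tmpdir=$(mktemp -d)
+openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=etcd-client-demo" \
+  -keyout "$tmpdir/client.key" -out "$tmpdir/client.crt" 2>/dev/null
+
+# Compare the public key embedded in the cert with the one derived from the key.
+cert_pub=$(openssl x509 -in "$tmpdir/client.crt" -noout -pubkey)
+key_pub=$(openssl pkey -in "$tmpdir/client.key" -pubout 2>/dev/null)
+
+if [ "$cert_pub" = "$key_pub" ]; then
+  echo "certificate and key match"
+else
+  echo "certificate and key do NOT match"
+fi
+rm -rf "$tmpdir"
+```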
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--etcd-certfile' is present AND '--etcd-keyfile' is present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.26 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the TLS certificate and private key file parameters.
+--tls-cert-file=
+--tls-private-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--tls-cert-file' is present AND '--tls-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.27 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the client certificate authority file.
+--client-ca-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--client-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.28 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the etcd certificate authority file parameter.
+--etcd-cafile=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--etcd-cafile' is present
+```
+
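+As a quick manual cross-check of the expected result, the flag can be pulled out of the audit output with `grep -o`. This is a sketch: the process line below is an illustrative sample, not output from a real node.
+
+```bash
+# Illustrative process line; on a node, use the output of the audit command above
+cmdline='kube-apiserver --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --secure-port=6443'
+# Print only the --etcd-cafile flag and its value
+echo "$cmdline" | grep -o '\--etcd-cafile=[^ ]*'
+```
+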
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.29 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the --encryption-provider-config parameter to the path of that file.
+For example, --encryption-provider-config=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--encryption-provider-config' is present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.2.30 Ensure that encryption providers are appropriately configured (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an EncryptionConfig file.
+In this file, choose aescbc, kms or secretbox as the encryption provider.
+
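+A minimal EncryptionConfiguration sketch using the `aescbc` provider is shown below. All values are placeholders for illustration; RKE2 itself generates and manages this file (at `/var/lib/rancher/rke2/server/cred/encryption-config.json` in the audit output above), so this is only what such a file looks like, not something to install by hand.
+
+```yaml
+# Hypothetical EncryptionConfiguration; the key name and secret are placeholders
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets
+    providers:
+      - aescbc:
+          keys:
+            - name: key1
+              secret: <base64-encoded 32-byte key>  # placeholder
+      - identity: {}
+```
+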
+**Audit:**
+
+```bash
+ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep kube-apiserver | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
+if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
+```
+
+**Expected Result**:
+
+```console
+'provider' is present
+```
+
+### 1.2.31 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
+TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
+TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
+TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
+TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
+TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
+Not Applicable.
+
+## 1.3 Controller Manager
+### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
+for example, --terminated-pod-gc-threshold=10
+
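+On RKE2 you do not normally edit the static pod manifest by hand, since RKE2 regenerates it; extra flags can instead be passed through `/etc/rancher/rke2/config.yaml`. The sketch below assumes that mechanism, and the threshold value is an example, not a recommendation:
+
+```yaml
+# /etc/rancher/rke2/config.yaml (sketch; restart the rke2-server service to apply)
+kube-controller-manager-arg:
+  - "terminated-pod-gc-threshold=10"
+```
+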
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--terminated-pod-gc-threshold' is present
+```
+
+**Returned Value**:
+
+```console
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node to set the below parameter.
+--use-service-account-credentials=true
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--use-service-account-credentials' is not equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node and set the --service-account-private-key-file parameter
+to the private key file for service accounts.
+--service-account-private-key-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--service-account-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
+--root-ca-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--root-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
+--feature-gates=RotateKubeletServerCertificate=true
+
+### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
+on the control plane node and ensure the correct value for the --bind-address parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-controller-manager | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+## 1.4 Scheduler
+### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Scheduler pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml file
+on the control plane node and set the below parameter.
+--profiling=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-scheduler | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+root 2645 2538 0 Sep11 ? 00:05:26 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
+```
+
+### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the Scheduler pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
+on the control plane node and ensure the correct value for the --bind-address parameter.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-scheduler | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
+```
+
+**Returned Value**:
+
+```console
+root 2645 2538 0 Sep11 ? 00:05:26 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
+```
+
+## 2 Etcd Node Configuration
+### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Follow the etcd service documentation and configure TLS encryption.
+Then, edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml
+on the master node and set the below parameters.
+--cert-file=
+--key-file=
+Not Applicable.
+
+### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
+node and set the below parameter.
+--client-cert-auth="true"
+Not Applicable.
+
+### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
+node and either remove the --auto-tls parameter or set it to false.
+ --auto-tls=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
+```
+
+**Returned Value**:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-14-226 ETCD_UNSUPPORTED_ARCH= FILE_HASH=e9ca6f328e70dd3c17ba78c302ee32927a4961198e95d1d948114a5d7e350d99 NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 POD_HASH=aa1658a59ab75324ef67c786d08307e3 HOME=/
+```
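
The intent of this check is that neither `--auto-tls` nor the `ETCD_AUTO_TLS` environment variable be set. A minimal sketch of scanning a captured environment string for the variable, using an abbreviated stand-in for the returned value above:

```shell
# Abbreviated sample of the etcd process environment from the audit above;
# on a node you would scan the real output of the ps/grep audit command.
env_line='PATH=/usr/local/sbin:/usr/local/bin ETCD_UNSUPPORTED_ARCH= HOME=/'

if printf '%s\n' "$env_line" | grep -q 'ETCD_AUTO_TLS'; then
  echo "ETCD_AUTO_TLS set"
else
  echo "ETCD_AUTO_TLS not present"
fi
```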
+
+### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Follow the etcd service documentation and configure peer TLS encryption as appropriate
+for your etcd cluster.
+Then, edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the
+master node and set the below parameters.
--peer-cert-file=
+--peer-key-file=
+Not Applicable.
+
+### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
+node and set the below parameter.
+--peer-client-cert-auth=true
+Not Applicable.
+
+### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
+node and either remove the --peer-auto-tls parameter or set it to false.
+--peer-auto-tls=false
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
+```
+
+**Returned Value**:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-14-226 ETCD_UNSUPPORTED_ARCH= FILE_HASH=e9ca6f328e70dd3c17ba78c302ee32927a4961198e95d1d948114a5d7e350d99 NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 POD_HASH=aa1658a59ab75324ef67c786d08307e3 HOME=/
+```
+
+### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+[Manual test]
+Follow the etcd documentation and create a dedicated certificate authority setup for the
+etcd service.
+Then, edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the
+master node and set the below parameter.
+--trusted-ca-file=
+
+**Audit:**
+
+```bash
+/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
+```
+
+**Audit Config:**
+
+```bash
+cat /var/lib/rancher/rke2/server/db/etcd/config
+```
+
+**Expected Result**:
+
+```console
+'ETCD_TRUSTED_CA_FILE' is present OR '{.peer-transport-security.trusted-ca-file}' is equal to '/var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt'
+```
+
+**Returned Value**:
+
+```console
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-14-226 ETCD_UNSUPPORTED_ARCH= FILE_HASH=e9ca6f328e70dd3c17ba78c302ee32927a4961198e95d1d948114a5d7e350d99 NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 POD_HASH=aa1658a59ab75324ef67c786d08307e3 HOME=/
+```
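
The pass result above hinges on the `{.peer-transport-security.trusted-ca-file}` value in the etcd config. A sketch of pulling that value out with `awk`, using an inline stand-in for the real file (`/var/lib/rancher/rke2/server/db/etcd/config`, which only exists on a server node):

```shell
# Inline stand-in for the etcd config read by the Audit Config step above.
config='peer-transport-security:
  trusted-ca-file: /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt'

# Extract the value after the "trusted-ca-file:" key.
ca_file=$(printf '%s\n' "$config" | awk '/trusted-ca-file:/ {print $2}')
echo "$ca_file"   # /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt
```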
+
+## 3.1 Authentication and Authorization
+### 3.1.1 Client certificate authentication should not be used for users (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
+implemented in place of client certificates.
+
+### 3.1.2 Service account token authentication should not be used for users (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
+in place of service account tokens.
+
+### 3.1.3 Bootstrap token authentication should not be used for users (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
+in place of bootstrap tokens.
+
+## 3.2 Logging
+### 3.2.1 Ensure that a minimal audit policy is created (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Create an audit policy file for your cluster.
+
+**Audit:**
+
+```bash
+/bin/ps -ef | grep kube-apiserver | grep -v grep
+```
+
+**Expected Result**:
+
+```console
+'--audit-policy-file' is present
+```
+
+**Returned Value**:
+
+```console
+root 2489 2419 8 Sep11 ? 01:41:54 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --enable-bootstrap-token-auth=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group 
--requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 2652 2539 2 Sep11 ? 00:24:53 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.nochain.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.nochain.crt 
--cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,tokencleaner,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.current.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
+```
+
+### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas:
+- Access to Secrets managed by the cluster. Care should be taken to only
+ log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
+ order to avoid risk of logging sensitive data.
+- Modification of Pod and Deployment objects.
+- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
+For most requests, minimally logging at the Metadata level is recommended
+(the most basic level of logging).
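
The coverage areas above can be sketched as a policy file. The rule set and the `/tmp` output path here are illustrative only, not an RKE2 default; tailor both before applying them to a real cluster.

```shell
# Write an example audit policy covering the areas listed above.
cat > /tmp/audit-policy-example.yaml <<'EOF'
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Secrets, ConfigMaps, and TokenReviews at Metadata only, so request
  # bodies (which may hold sensitive data) are never recorded.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
      - group: "authentication.k8s.io"
        resources: ["tokenreviews"]
  # Record modifications of Pod and Deployment objects.
  - level: RequestResponse
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: ""
        resources: ["pods"]
      - group: "apps"
        resources: ["deployments"]
  # Log use of exec, port-forward, and proxy subresources.
  - level: Metadata
    resources:
      - group: ""
        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
  # Everything else at Metadata, the most basic level of logging.
  - level: Metadata
EOF

grep -c 'level:' /tmp/audit-policy-example.yaml   # 4
```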
+
+## 4.1 Worker Node Configuration Files
+### 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
Run the below command (based on the file location on your system) on each worker node.
+For example, chmod 600 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Not Applicable.
+
+### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
Run the below command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Not applicable.
+
+### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
Run the below command (based on the file location on your system) on each worker node.
+For example, chown root:root /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 600 /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
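
The failing 644 state above and its chmod remediation can be rehearsed on a throwaway file, using the same `stat` invocation the audit uses. On a node, substitute `/var/lib/rancher/rke2/agent/kubelet.kubeconfig` for the temporary file.

```shell
# Rehearse the remediation on a temporary file.
f=$(mktemp)
chmod 644 "$f"                        # reproduce the failing state
stat -c 'before: permissions=%a' "$f"
chmod 600 "$f"                        # apply the remediation
stat -c 'after: permissions=%a' "$f"
rm -f "$f"
```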
+
+### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
Run the below command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
Run the following command to modify the file permissions of the file
referenced by --client-ca-file: chmod 600
+
+**Audit Script:** `check_cafile_permissions.sh`
+
+```bash
+#!/usr/bin/env bash
+
+CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
+CAFILE=/node$CAFILE
+if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
+if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_cafile_permissions.sh
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 600, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=600
+```
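
The audit script above derives the CA path from the kubelet command line. The same extraction pipeline can be exercised on a captured sample line (here, an excerpt of the kubelet audit output in this report):

```shell
# Run the audit script's flag-extraction pipeline against a sample ps line.
sample='root 2236 2075 4 Sep11 ? 00:47:51 kubelet --anonymous-auth=false --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --read-only-port=0'

# Split on the literal flag, then take the first whitespace-separated field.
cafile=$(printf '%s\n' "$sample" | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
echo "$cafile"   # /var/lib/rancher/rke2/agent/client-ca.crt
```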
+
+### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the following command to modify the ownership of the --client-ca-file.
+chown root:root
+
+**Audit Script:** `check_cafile_ownership.sh`
+
+```bash
+#!/usr/bin/env bash
+
+CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
+CAFILE=/node$CAFILE
+if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
+if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_cafile_ownership.sh
+```
+
+**Expected Result**:
+
+```console
+'root:root' is equal to 'root:root'
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+### 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)
+
+
+**Result:** fail
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chmod 600 /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+permissions has permissions 644, expected 600 or more restrictive
+```
+
+**Returned Value**:
+
+```console
+permissions=644
+```
+
+### 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the following command (using the config file location identified in the Audit step)
+chown root:root /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+
+**Audit:**
+
+```bash
+/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present
+```
+
+**Returned Value**:
+
+```console
+root:root
+```
+
+## 4.2 Kubelet
+### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
+`false`.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+`--anonymous-auth=false`
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'--anonymous-auth' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 2236 2075 4 Sep11 ? 00:47:51 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-14-226 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=1adcc1f7-66f1-4503-b9bc-dfb6e808f27b --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
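
As the audit shows, RKE2 passes `--anonymous-auth=false` on the kubelet command line. For clusters driven by a kubelet config file instead, the equivalent settings look roughly like the sketch below; the `/tmp` path and the snippet are illustrative, not an RKE2 default.

```shell
# Example kubelet config-file form of the CIS 4.2.1/4.2.2 settings.
cat > /tmp/kubelet-config-example.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false        # CIS 4.2.1: no anonymous requests
  webhook:
    enabled: true         # matches --authentication-token-webhook=true
authorization:
  mode: Webhook           # CIS 4.2.2: never AlwaysAllow
EOF

grep -A2 'anonymous:' /tmp/kubelet-config-example.yaml
```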
+
+### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
+using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_AUTHZ_ARGS variable.
+--authorization-mode=Webhook
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'--authorization-mode' does not have 'AlwaysAllow'
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 2236 2075 4 Sep11 ? 00:47:51 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-14-226 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=1adcc1f7-66f1-4503-b9bc-dfb6e808f27b --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
+the location of the client CA file.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_AUTHZ_ARGS variable.
+--client-ca-file=
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'--client-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 2236 2075 4 Sep11 ? 00:47:51 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-14-226 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=1adcc1f7-66f1-4503-b9bc-dfb6e808f27b --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+### 4.2.4 Verify that the --read-only-port argument is set to 0 (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+--read-only-port=0
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'--read-only-port' is equal to '0' OR '--read-only-port' is not present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 2236 2075 4 Sep11 ? 00:47:51 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-14-226 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=1adcc1f7-66f1-4503-b9bc-dfb6e808f27b --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
+value other than 0.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+--streaming-connection-idle-timeout=5m
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'{.streamingConnectionIdleTimeout}' is present OR '{.streamingConnectionIdleTimeout}' is not present
+```
+
+**Returned Value**:
+
+```console
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
    certificate-authority: /var/lib/rancher/rke2/agent/server-ca.crt
  name: local
contexts:
- context:
    cluster: local
    namespace: default
    user: user
  name: Default
current-context: Default
kind: Config
preferences: {}
users:
- name: user
  user:
    client-certificate: /var/lib/rancher/rke2/agent/client-kubelet.crt
    client-key: /var/lib/rancher/rke2/agent/client-kubelet.key
+```
+
+### 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove the --make-iptables-util-chains argument from the
+KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'{.makeIPTablesUtilChains}' is present OR '{.makeIPTablesUtilChains}' is not present
+```
+
+**Returned Value**:
+
+```console
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
    certificate-authority: /var/lib/rancher/rke2/agent/server-ca.crt
  name: local
contexts:
- context:
    cluster: local
    namespace: default
    user: user
  name: Default
current-context: Default
kind: Config
preferences: {}
users:
- name: user
  user:
    client-certificate: /var/lib/rancher/rke2/agent/client-kubelet.crt
    client-key: /var/lib/rancher/rke2/agent/client-kubelet.key
+```
+
+### 4.2.7 Ensure that the --hostname-override argument is not set (Manual)
+
+
+**Result:** Not Applicable
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and remove the --hostname-override argument from the
+KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+Not Applicable.
+
+### 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'{.eventRecordQPS}' is present OR '{.eventRecordQPS}' is not present
+```
+
+**Returned Value**:
+
+```console
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
    certificate-authority: /var/lib/rancher/rke2/agent/server-ca.crt
  name: local
contexts:
- context:
    cluster: local
    namespace: default
    user: user
  name: Default
current-context: Default
kind: Config
preferences: {}
users:
- name: user
  user:
    client-certificate: /var/lib/rancher/rke2/agent/client-kubelet.crt
    client-key: /var/lib/rancher/rke2/agent/client-kubelet.key
+```
+
+### 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
+of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
+to the location of the corresponding private key file.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
+--tls-cert-file=
+--tls-private-key-file=
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'--tls-cert-file' is present AND '--tls-private-key-file' is present
+```
+
+**Returned Value**:
+
+```console
+UID PID PPID C STIME TTY TIME CMD root 2236 2075 4 Sep11 ? 00:47:51 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-14-226 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=1adcc1f7-66f1-4503-b9bc-dfb6e808f27b --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
+```
+
+### 4.2.10 Ensure that the --rotate-certificates argument is not set to false (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
+remove it altogether to use the default value.
+If using command line arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
+variable.
+Based on your system, restart the kubelet service. For example,
+systemctl daemon-reload
+systemctl restart kubelet.service
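+For reference, the config-file form of this setting is a single `KubeletConfiguration` field (a sketch; the config file location depends on how the kubelet is launched on your distribution):
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+# true is also the default when the field is omitted
+rotateCertificates: true
+```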
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'{.rotateCertificates}' is present OR '{.rotateCertificates}' is not present
+```
+
+**Returned Value**:
+
+```console
+apiVersion: v1 clusters: - cluster: server: https://127.0.0.1:6443 certificate-authority: /var/lib/rancher/rke2/agent/server-ca.crt name: local contexts: - context: cluster: local namespace: default user: user name: Default current-context: Default kind: Config preferences: {} users: - name: user user: client-certificate: /var/lib/rancher/rke2/agent/client-kubelet.crt client-key: /var/lib/rancher/rke2/agent/client-kubelet.key
+```
+
+### 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
+--feature-gates=RotateKubeletServerCertificate=true
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'{.featureGates.RotateKubeletServerCertificate}' is present OR '{.featureGates.RotateKubeletServerCertificate}' is not present
+```
+
+**Returned Value**:
+
+```console
+apiVersion: v1 clusters: - cluster: server: https://127.0.0.1:6443 certificate-authority: /var/lib/rancher/rke2/agent/server-ca.crt name: local contexts: - context: cluster: local namespace: default user: user name: Default current-context: Default kind: Config preferences: {} users: - name: user user: client-certificate: /var/lib/rancher/rke2/agent/client-kubelet.crt client-key: /var/lib/rancher/rke2/agent/client-kubelet.key
+```
+
+### 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
+TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+or to a subset of these values.
+If using executable arguments, edit the kubelet service file
+/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
+set the --tls-cipher-suites parameter as follows, or to a subset of these values.
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+Based on your system, restart the kubelet service. For example:
+systemctl daemon-reload
+systemctl restart kubelet.service
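+In config-file form, the same restriction is expressed with the `tlsCipherSuites` field of `KubeletConfiguration`. A sketch using a subset of the approved suites listed above:
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+# Any subset of the approved suites from the remediation above is acceptable
+tlsCipherSuites:
+  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+  - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+```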
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'{range .tlsCipherSuites[:]}{}{','}{end}' is present
+```
+
+**Returned Value**:
+
+```console
+apiVersion: v1 clusters: - cluster: server: https://127.0.0.1:6443 certificate-authority: /var/lib/rancher/rke2/agent/server-ca.crt name: local contexts: - context: cluster: local namespace: default user: user name: Default current-context: Default kind: Config preferences: {} users: - name: user user: client-certificate: /var/lib/rancher/rke2/agent/client-kubelet.crt client-key: /var/lib/rancher/rke2/agent/client-kubelet.key
+```
+
+### 4.2.13 Ensure that a limit is set on pod PIDs (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Decide on an appropriate level for this parameter and set it,
+either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting.
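+In config-file form this is the `podPidsLimit` field of `KubeletConfiguration`. A sketch (the limit value is only an example; choose one appropriate to your workloads):
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+# Example value -- size this to the process counts your pods legitimately need
+podPidsLimit: 4096
+```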
+
+**Audit:**
+
+```bash
+/bin/ps -fC kubelet
+```
+
+**Audit Config:**
+
+```bash
+/bin/cat /var/lib/rancher/rke2/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'{.podPidsLimit}' is present
+```
+
+**Returned Value**:
+
+```console
+apiVersion: v1 clusters: - cluster: server: https://127.0.0.1:6443 certificate-authority: /var/lib/rancher/rke2/agent/server-ca.crt name: local contexts: - context: cluster: local namespace: default user: user name: Default current-context: Default kind: Config preferences: {} users: - name: user user: client-certificate: /var/lib/rancher/rke2/agent/client-kubelet.crt client-key: /var/lib/rancher/rke2/agent/client-kubelet.key
+```
+
+## 5.1 RBAC and Service Accounts
+### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
+if they need this role or if they could use a role with fewer privileges.
+Where possible, first bind users to a lower privileged role and then remove the
+clusterrolebinding to the cluster-admin role:
+kubectl delete clusterrolebinding [name]
+
+### 5.1.2 Minimize access to secrets (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove get, list and watch access to Secret objects in the cluster.
+
+### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible replace any use of wildcards in clusterroles and roles with specific
+objects or actions.
+
+### 5.1.4 Minimize access to create pods (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove create access to pod objects in the cluster.
+
+### 5.1.5 Ensure that default service accounts are not actively used. (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Create explicit service accounts wherever a Kubernetes workload requires specific access
+to the Kubernetes API server.
+Modify the configuration of each default service account to include this value:
+`automountServiceAccountToken: false`
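+A sketch of a patched default service account (the namespace name is an example):
+
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: default
+  namespace: my-namespace   # example namespace
+# Prevent pods from automatically mounting this account's API token
+automountServiceAccountToken: false
+```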
+
+### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Modify the definition of pods and service accounts which do not need to mount service
+account tokens to disable it.
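+The same field is available per pod. A minimal sketch (the pod name and image are illustrative only):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: no-token-pod        # hypothetical name
+spec:
+  # This pod does not call the Kubernetes API, so skip the token mount
+  automountServiceAccountToken: false
+  containers:
+    - name: app
+      image: nginx          # example image
+```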
+
+### 5.1.7 Avoid use of system:masters group (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Remove the system:masters group from all users in the cluster.
+
+### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove the impersonate, bind and escalate rights from subjects.
+
+### 5.1.9 Minimize access to create persistent volumes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove create access to PersistentVolume objects in the cluster.
+
+### 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the proxy sub-resource of node objects.
+
+### 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the approval sub-resource of certificatesigningrequest objects.
+
+### 5.1.12 Minimize access to webhook configuration objects (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.
+
+### 5.1.13 Minimize access to the service account token creation (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Where possible, remove access to the token sub-resource of serviceaccount objects.
+
+## 5.2 Pod Security Standards
+### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that either Pod Security Admission or an external policy control system is in place
+for every namespace which contains user workloads.
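+With built-in Pod Security Admission, this is done by labeling each user-workload namespace. A sketch (the namespace name is an example; pick the level your workloads can tolerate):
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: my-workloads        # example namespace
+  labels:
+    # Enforce the "restricted" Pod Security Standard and warn on violations
+    pod-security.kubernetes.io/enforce: restricted
+    pod-security.kubernetes.io/warn: restricted
+```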
+
+### 5.2.2 Minimize the admission of privileged containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of privileged containers.
+
+### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostPID` containers.
+
+### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostIPC` containers.
+
+### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of `hostNetwork` containers.
+
+### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
+
+### 5.2.7 Minimize the admission of root containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
+or `MustRunAs` with the range of UIDs not including 0, is set.
+
+### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with the `NET_RAW` capability.
+
+### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that `allowedCapabilities` is not present in policies for the cluster unless
+it is set to an empty array.
+
+### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Review the use of capabilities in applications running on your cluster. Where a namespace
+contains applications which do not require any Linux capabilities to operate, consider adding
+a PSP which forbids the admission of containers which do not drop all capabilities.
+
+### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
+
+### 5.2.12 Minimize the admission of HostPath volumes (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `hostPath` volumes.
+
+### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers which use `hostPort` sections.
+
+## 5.3 Network Policies and CNI
+### 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If the CNI plugin in use does not support network policies, consideration should be given to
+making use of a different plugin, or finding an alternate mechanism for restricting traffic
+in the Kubernetes cluster.
+
+### 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create NetworkPolicy objects as you need them.
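+A common starting point is a default-deny ingress policy per namespace, which workload-specific allow rules can then override. A sketch (the policy and namespace names are examples):
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress  # example policy name
+  namespace: my-namespace     # example namespace
+spec:
+  # Empty podSelector matches every pod in the namespace
+  podSelector: {}
+  policyTypes:
+    - Ingress                 # no ingress rules listed, so all ingress is denied
+```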
+
+## 5.4 Secrets Management
+### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+If possible, rewrite application code to read Secrets from mounted secret files, rather than
+from environment variables.
+
+### 5.4.2 Consider external secret storage (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Refer to the Secrets management options offered by your cloud provider or a third-party
+secrets management solution.
+
+## 5.5 Extensible Admission Control
+### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and setup image provenance.
+
+## 5.7 General Policies
+### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the documentation and create namespaces for objects in your deployment as you need
+them.
+
+### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
+An example is as below:
+ securityContext:
+ seccompProfile:
+ type: RuntimeDefault
+
+### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
+suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
+Containers.
+
+### 5.7.4 The default namespace should not be used (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
+resources and that all new resources are created in a specific namespace.
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.7-k8s-v1.25.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.7-k8s-v1.25.md
deleted file mode 100644
index 252accbb79d..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.7-k8s-v1.25.md
+++ /dev/null
@@ -1,3196 +0,0 @@
----
-title: RKE2 Self-Assessment Guide - CIS Benchmark v1.7 - K8s v1.25
----
-
-This document is a companion to the [RKE2 Hardening Guide](../../../../pages-for-subheaders/rke2-hardening-guide.md), which provides prescriptive guidance on how to harden RKE2 clusters that are running in production and managed by Rancher. This benchmark guide helps you evaluate the security of a hardened cluster against each control in the CIS Kubernetes Benchmark.
-
-This guide corresponds to the following versions of Rancher, CIS Benchmarks, and Kubernetes:
-
-| Rancher Version | CIS Benchmark Version | Kubernetes Version |
-|-----------------|-----------------------|--------------------|
-| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 |
-
-This guide walks through the various controls and provide updated example commands to audit compliance in Rancher created clusters. Because Rancher and RKE2 install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. These checks will return a result of `Not Applicable`.
-
-This document is for Rancher operators, security teams, auditors and decision makers.
-
-For more information about each control, including detailed descriptions and remediations for failing tests, refer to the corresponding section of the CIS Kubernetes Benchmark v1.7. You can download the benchmark, after creating a free account, at [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
-
-## Testing Methodology
-
-RKE2 launches control plane components as static pods, managed by the kubelet, and uses containerd as the container runtime. Configuration is defined by arguments passed to the container at the time of initialization or via configuration file.
-
-Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the command line on the hosts of all RKE2 nodes. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in the testing and evaluation of test results.
-
-:::note
-
-This guide only covers `automated` (previously called `scored`) tests.
-
-:::
-
-### Controls
-
-## 1.1 Master Node Configuration Files
-### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the
-control plane node.
-For example, chmod 644 /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644 /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; then stat -c %a /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'644' is equal to '644'
-```
-
-**Returned Value**:
-
-```console
-644
-```
-
-### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644 /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'permissions' is equal to '644'
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; then stat -c permissions=%a /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'644' is equal to '644'
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; then stat -c %U:%G /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 644
-
-**Audit:**
-
-```bash
-ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600 permissions=644
-```
-
-### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root
-
-**Audit:**
-
-```bash
-ps -fC ${kubeletbin:-kubelet} | grep -- --cni-conf-dir || echo "/etc/cni/net.d" | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root root:root
-```
-
-### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above). For example,
-chmod 700 /var/lib/etcd
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/db/etcd
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 700, expected 700 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=700
-```
-
-### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the command 'ps -ef | grep etcd'.
-Run the below command (based on the etcd data directory found above).
-For example, chown etcd:etcd /var/lib/etcd
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/rke2/server/db/etcd
-```
-
-**Expected Result**:
-
-```console
-'etcd:etcd' is present
-```
-
-**Returned Value**:
-
-```console
-etcd:etcd
-```
-
-### 1.1.13 Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chmod 600 /etc/kubernetes/admin.conf
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/cred/admin.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example, chown root:root /etc/kubernetes/admin.conf
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/rke2/server/cred/admin.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 scheduler
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root scheduler
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod 644 controllermanager
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/cred/controller.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown root:root /var/lib/rancher/rke2/server/cred/controller.kubeconfig
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/rke2/server/cred/controller.kubeconfig
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chown -R root:root /var/lib/rancher/rke2/server/tls/
-
-**Audit:**
-
-```bash
-stat -c %U:%G /var/lib/rancher/rke2/server/tls
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 644 /var/lib/rancher/rke2/server/tls/*.crt
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/tls/*.crt
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644 permissions=644
-```
-
-### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on the control plane node.
-For example,
-chmod -R 600 /var/lib/rancher/rke2/server/tls/*.key
-
-**Audit:**
-
-```bash
-stat -c permissions=%a /var/lib/rancher/rke2/server/tls/*.key
-```
-
-**Expected Result**:
-
-```console
-'permissions' is equal to '600'
-```
-
-**Returned Value**:
-
-```console
-permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600 permissions=600
-```
-
-## 1.2 API Server
-### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---anonymous-auth=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and configure alternate mechanisms for authentication. Then,
-edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and remove the --token-auth-file= parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--token-auth-file' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.3 Ensure that the --DenyServiceExternalIPs is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and remove `DenyServiceExternalIPs`
-from the enabled admission plugins.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'DenyServiceExternalIPs' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and remove the --kubelet-https parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-https' is present OR '--kubelet-https' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the
-apiserver and kubelets. Then, edit the API server pod specification file
-/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml on the control plane node and set the
-kubelet client certificate and key parameters as below.
---kubelet-client-certificate=
---kubelet-client-key=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between
-the apiserver and kubelets. Then, edit the API server pod specification file
-/var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml on the control plane node and set the
---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
---kubelet-certificate-authority=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--kubelet-certificate-authority' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value other than AlwaysAllow,
-for example:
---authorization-mode=RBAC
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes Node,
-for example `--authorization-mode=Node,RBAC`.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'Node'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
-for example `--authorization-mode=Node,RBAC`.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' has 'RBAC'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and set the desired limits in a configuration file.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-and set the following parameters:
---enable-admission-plugins=...,EventRateLimit,...
---admission-control-config-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'EventRateLimit'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
-value that does not include AlwaysAdmit.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-AlwaysPullImages, for example:
---enable-admission-plugins=...,AlwaysPullImages,...
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'AlwaysPullImages'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to include
-SecurityContextDeny, unless PodSecurityPolicy is already in place, for example:
---enable-admission-plugins=...,SecurityContextDeny,...
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:05 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create ServiceAccount objects as per your environment.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
-value that does not include ServiceAccount.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and ensure that the --disable-admission-plugins parameter
-does not include NamespaceLifecycle.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true root 15378 3910 99 23:32 ? 00:00:00 kubectl get --server=https://localhost:6443/ --client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --raw=/readyz
-```
-
-### 1.2.16 Ensure that the admission control plugin NodeRestriction is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure the NodeRestriction plug-in on kubelets.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --enable-admission-plugins parameter to a
-value that includes NodeRestriction, for example:
---enable-admission-plugins=...,NodeRestriction,...
-
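-The same remediation can be expressed through RKE2's own configuration file instead of editing the static pod manifest directly. A minimal sketch, assuming the server is managed via `/etc/rancher/rke2/config.yaml` (note that RKE2 already enables NodeRestriction by default, as the returned value below shows):
-
-```yaml
-# /etc/rancher/rke2/config.yaml (hypothetical excerpt)
-# Passed through to kube-apiserver on the next rke2-server restart.
-kube-apiserver-arg:
-  - "enable-admission-plugins=NodeRestriction"
-```
-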
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--enable-admission-plugins' has 'NodeRestriction'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.17 Ensure that the --secure-port argument is not set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and either remove the --secure-port parameter or
-set it to a different (non-zero) desired port.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--secure-port' is greater than 0 OR '--secure-port' is not present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.18 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the following parameter:
---profiling=false
-
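-A minimal sketch of the equivalent setting in RKE2's configuration file, assuming flags are managed via `/etc/rancher/rke2/config.yaml` rather than the pod manifest (the returned value below shows RKE2 already sets --profiling=false by default):
-
-```yaml
-# /etc/rancher/rke2/config.yaml (hypothetical excerpt)
-kube-apiserver-arg:
-  - "profiling=false"
-```
-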
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.19 Ensure that the --audit-log-path argument is set (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-path parameter to a suitable path and
-file where you would like audit logs to be written, for example,
---audit-log-path=/var/log/apiserver/audit.log
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-path' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.20 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxage parameter to 30,
-or to an appropriate number of days, for example,
---audit-log-maxage=30
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxage' is greater or equal to 30
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.21 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
-value. For example,
---audit-log-maxbackup=10
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxbackup' is greater or equal to 10
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.22 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
-For example, to set it as 100 MB, --audit-log-maxsize=100
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-log-maxsize' is greater or equal to 100
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.24 Ensure that the --service-account-lookup argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---service-account-lookup=true
-Alternatively, you can delete the --service-account-lookup parameter from this file so
-that the default takes effect.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-lookup' is not present OR '--service-account-lookup' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.25 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --service-account-key-file parameter
-to the public key file for service accounts. For example,
---service-account-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.26 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate and key file parameters.
---etcd-certfile=
---etcd-keyfile=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-certfile' is present AND '--etcd-keyfile' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.27 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the TLS certificate and private key file parameters.
---tls-cert-file=
---tls-private-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.28 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the client certificate authority file.
---client-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.29 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the etcd certificate authority file parameter.
---etcd-cafile=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-cafile' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.30 Ensure that the --encryption-provider-config argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-Then, edit the API server pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
-on the control plane node and set the --encryption-provider-config parameter to the path of that file.
-For example, --encryption-provider-config=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--encryption-provider-config' is present
-```
-
-**Returned Value**:
-
-```console
-root 3980 3910 19 23:26 ? 00:01:06 kube-apiserver --admission-control-config-file=/etc/rancher/rke2/rke2-pss.yaml --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml --audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --admission-control-config-file=/etc/rancher/rke2/config/rancher-psact.yaml --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,rke2 --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --cert-dir=/var/lib/rancher/rke2/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/rke2/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/rke2/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/rke2/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/rke2/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/rke2/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/rke2/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/rke2/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 
--service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-private-key-file=/var/lib/rancher/rke2/server/tls/serving-kube-apiserver.key root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false 
--controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.2.32 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the control plane node and set the below parameter.
---tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
-TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
-TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
-TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
-TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
-TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
-
-### 1.2.33 Ensure that encryption providers are appropriately configured (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-In this file, choose aescbc, kms, or secretbox as the encryption provider.
-
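-As a sketch of what such a file can look like (the key name and secret below are placeholders, not real values; on RKE2 the generated file lives at /var/lib/rancher/rke2/server/cred/encryption-config.json, as shown in the returned value below), an `aescbc` provider entry resembles:
-
-```yaml
-apiVersion: apiserver.config.k8s.io/v1
-kind: EncryptionConfiguration
-resources:
-  - resources:
-      - secrets
-    providers:
-      - aescbc:
-          keys:
-            - name: key1
-              secret: <base64-encoded-32-byte-key>   # placeholder, never commit a real key
-      - identity: {}                                 # fallback for reading unencrypted data
-```
-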
-**Audit:**
-
-```bash
-/bin/sh -c 'if grep aescbc /var/lib/rancher/rke2/server/cred/encryption-config.json; then echo 0; fi'
-```
-
-**Expected Result**:
-
-```console
-'0' is present
-```
-
-**Returned Value**:
-
-```console
-{"kind":"EncryptionConfiguration","apiVersion":"apiserver.config.k8s.io/v1","resources":[{"resources":["secrets"],"providers":[{"aescbc":{"keys":[{"name":"aescbckey","secret":"TSpBkJhIU0sRx+84IZuBZ1qO+eaRdW31C7QCnF3+n8s="}]}},{"identity":{}}]}]} 0
-```
-
-## 1.3 Controller Manager
-### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and set the --terminated-pod-gc-threshold parameter to an appropriate threshold,
-for example, --terminated-pod-gc-threshold=10
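-
-Note that RKE2 regenerates the static pod manifests under /var/lib/rancher/rke2/agent/pod-manifests, so in practice extra flags are usually supplied through the RKE2 server configuration file instead. A minimal sketch (the `kube-controller-manager-arg` key is the standard RKE2 flag pass-through; the threshold value shown matches the audited value below and is illustrative):
-
-```yaml
-# /etc/rancher/rke2/config.yaml
-kube-controller-manager-arg:
-  - "terminated-pod-gc-threshold=1000"
-```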
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--terminated-pod-gc-threshold' is present
-```
-
-**Returned Value**:
-
-```console
-root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node to set the below parameter.
---use-service-account-credentials=true
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--use-service-account-credentials' is not equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and set the --service-account-private-key-file parameter
-to the private key file for service accounts.
---service-account-private-key-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
---root-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--root-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
---feature-gates=RotateKubeletServerCertificate=true
-
-### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4128 4029 2 23:27 ? 00:00:06 kube-controller-manager --flex-volume-plugin-dir=/var/lib/kubelet/volumeplugins --terminated-pod-gc-threshold=1000 --permit-port-sharing=true --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-controller-manager --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/rke2/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/rke2/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/rke2/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/rke2/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/rke2/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/rke2/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true
-```
-
-## 1.4 Scheduler
-### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml file
-on the control plane node and set the below parameter.
---profiling=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-scheduler | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--profiling' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4126 4014 0 23:27 ? 00:00:02 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
-```
-
-### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
-on the control plane node and ensure the correct value for the --bind-address parameter.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-scheduler | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4126 4014 0 23:27 ? 00:00:02 kube-scheduler --permit-port-sharing=true --authentication-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/rke2/server/tls/kube-scheduler --kubeconfig=/var/lib/rancher/rke2/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259
-```
-
-## 2 Etcd Node Configuration
-### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Follow the etcd service documentation and configure TLS encryption.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
-on the master node and set the below parameters.
---cert-file=
---key-file=
-
-### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
-node and set the below parameter.
---client-cert-auth="true"
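-
-As with the other control plane components, RKE2 manages the etcd static pod, so additional etcd flags are normally passed via the server configuration file rather than by editing the manifest directly. A minimal sketch (the `etcd-arg` key is the RKE2 pass-through for etcd flags):
-
-```yaml
-# /etc/rancher/rke2/config.yaml
-etcd-arg:
-  - "client-cert-auth=true"
-```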
-
-### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
-node and either remove the --auto-tls parameter or set it to false.
---auto-tls=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'ETCD_AUTO_TLS' is not present OR 'ETCD_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-25-112 ETCD_UNSUPPORTED_ARCH= POD_HASH=ab0b8a2ee7711940d3d951edece075f3 FILE_HASH=068666c5f959fc1023cb1761daaaed212727c04120747d825c78fd7683122e6d NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 HOME=/
-```
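The audit above is a simple presence test on the etcd process environment. The sketch below shows the same test against a sample environment dump (the dump string is made up, standing in for the live `ps` output):

```bash
# Demo of the "variable is not present" check used by this audit;
# env_dump is a sample stand-in for the etcd process environment.
env_dump='PATH=/usr/bin HOSTNAME=node1 HOME=/'
if printf '%s\n' "$env_dump" | grep -q 'ETCD_AUTO_TLS'; then
  echo "ETCD_AUTO_TLS is present"
else
  echo "ETCD_AUTO_TLS is not present"
fi
```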
-
-### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Follow the etcd service documentation and configure peer TLS encryption as appropriate
-for your etcd cluster.
-Then, edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the
-master node and set the below parameters.
---peer-cert-file=
---peer-key-file=
-
-### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
-node and set the below parameter.
---peer-client-cert-auth=true
-
-### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the master
-node and either remove the --peer-auto-tls parameter or set it to false.
---peer-auto-tls=false
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'ETCD_PEER_AUTO_TLS' is not present OR 'ETCD_PEER_AUTO_TLS' is present
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-25-112 ETCD_UNSUPPORTED_ARCH= POD_HASH=ab0b8a2ee7711940d3d951edece075f3 FILE_HASH=068666c5f959fc1023cb1761daaaed212727c04120747d825c78fd7683122e6d NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 HOME=/
-```
-
-### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-[Manual test]
-Follow the etcd documentation and create a dedicated certificate authority setup for the
-etcd service.
-Then, edit the etcd pod specification file /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml on the
-master node and set the below parameter.
---trusted-ca-file=
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Audit Config:**
-
-```bash
-cat /var/lib/rancher/rke2/server/db/etcd/config
-```
-
-**Expected Result**:
-
-```console
-'ETCD_TRUSTED_CA_FILE' is present OR '{.peer-transport-security.trusted-ca-file}' is equal to '/var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt'
-```
-
-**Returned Value**:
-
-```console
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=ip-172-31-25-112 ETCD_UNSUPPORTED_ARCH= POD_HASH=ab0b8a2ee7711940d3d951edece075f3 FILE_HASH=068666c5f959fc1023cb1761daaaed212727c04120747d825c78fd7683122e6d NO_PROXY=.svc,.cluster.local,10.42.0.0/16,10.43.0.0/16 HOME=/
-```
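The config audit above evaluates a JSONPath-style expression against the etcd config; the same value can be pulled out with a one-line `awk`. A sketch under stated assumptions: the heredoc stands in for `/var/lib/rancher/rke2/server/db/etcd/config`, reproducing only the fragment the check cares about.

```bash
# Sketch: read trusted-ca-file the way check 2.7's config audit does.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
peer-transport-security:
  trusted-ca-file: /var/lib/rancher/rke2/server/tls/etcd/peer-ca.crt
EOF
awk '$1 == "trusted-ca-file:" {print $2}' "$cfg"
rm -f "$cfg"
```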
-
-## 3.1 Authentication and Authorization
-### 3.1.1 Client certificate authentication should not be used for users (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
-implemented in place of client certificates.
-
-## 3.2 Logging
-### 3.2.1 Ensure that a minimal audit policy is created (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Create an audit policy file for your cluster.
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep | grep -o audit-policy-file
-```
-
-**Expected Result**:
-
-```console
-'audit-policy-file' is equal to 'audit-policy-file'
-```
-
-**Returned Value**:
-
-```console
-audit-policy-file
-```
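The audit above reduces the whole kube-apiserver command line to the bare token `audit-policy-file` with `grep -o`, which is why the expected result compares the token to itself. The same idea on a sample command line (the policy path here is hypothetical):

```bash
# Demo of the grep -o presence check; cmdline is a sample stand-in
# for the real ps output, with a hypothetical policy path.
cmdline='kube-apiserver --audit-policy-file=/etc/rancher/rke2/audit-policy.yaml'
printf '%s\n' "$cmdline" | grep -o 'audit-policy-file'
```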
-
-### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the audit policy provided for the cluster and ensure that it covers
-at least the following areas:
-- Access to Secrets managed by the cluster. Care should be taken to only
- log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
- order to avoid risk of logging sensitive data.
-- Modification of Pod and Deployment objects.
-- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
- For most requests, minimally logging at the Metadata level is recommended
- (the most basic level of logging).
-
-## 4.1 Worker Node Configuration Files
-### 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chmod 644 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-
-### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-
-### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 644, expected 644 or more restrictive OR '/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig' is not present
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
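The permission audits in this section all follow the same `test -e` plus `stat -c permissions=%a` pattern. Here it is against a scratch file instead of the real kubeconfig, so it can be tried on any Linux host:

```bash
# Same stat pattern as the audit above, on a scratch file.
f=$(mktemp)
chmod 644 "$f"
if test -e "$f"; then stat -c permissions=%a "$f"; fi
rm -f "$f"
```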
-
-### 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chown root:root /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubeproxy.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present OR '/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig' is not present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
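The companion ownership audits use `stat -c %U:%G`, which prints the owning user and group. On a scratch file the output reflects whoever runs the command, so it only reads `root:root` when run as root:

```bash
# Demo of the ownership check: %U:%G prints owner and group of a
# scratch file owned by the current user.
f=$(mktemp)
stat -c %U:%G "$f"
rm -f "$f"
```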
-
-### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /var/lib/rancher/rke2/agent/kubelet.kubeconfig
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c permissions=%a /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-'644' is equal to '644'
-```
-
-**Returned Value**:
-
-```console
-permissions=644
-```
-
-### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /var/lib/rancher/rke2/agent/kubelet.kubeconfig
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/rancher/rke2/agent/kubelet.kubeconfig; then stat -c %U:%G /var/lib/rancher/rke2/agent/kubelet.kubeconfig; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the file permissions of the
---client-ca-file:
-chmod 644
-
-**Audit Script:** `check_cafile_permissions.sh`
-
-```bash
-#!/usr/bin/env bash
-
-CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
-CAFILE=/node$CAFILE
-if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
-if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_cafile_permissions.sh
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
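The audit script above isolates the `--client-ca-file` path from the kubelet command line with two `awk` passes: the first splits on the flag name, the second keeps only the first whitespace-delimited word after it. The same extraction on a shortened sample command line:

```bash
# Demo of the two-pass awk extraction from the audit script; cmdline
# is a sample stand-in for the real ps output.
cmdline='kubelet --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --read-only-port=0'
printf '%s\n' "$cmdline" | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}'
```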
-
-### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the ownership of the --client-ca-file.
-chown root:root
-
-**Audit Script:** `check_cafile_ownership.sh`
-
-```bash
-#!/usr/bin/env bash
-
-CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
-CAFILE=/node$CAFILE
-if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
-if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_cafile_ownership.sh
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chmod 644 /etc/rancher/rke2/rke2.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/rancher/rke2/rke2.yaml; then stat -c permissions=%a /etc/rancher/rke2/rke2.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-permissions has permissions 600, expected 644 or more restrictive
-```
-
-**Returned Value**:
-
-```console
-permissions=600
-```
-
-### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chown root:root /etc/rancher/rke2/rke2.yaml
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/rancher/rke2/rke2.yaml; then stat -c %U:%G /etc/rancher/rke2/rke2.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is present
-```
-
-**Returned Value**:
-
-```console
-root:root
-```
-
-## 4.2 Kubelet
-### 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
-`false`.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
-`--anonymous-auth=false`
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--anonymous-auth' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3727 3667 3 23:26 ? 00:00:11 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-112 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c1cd514-db7b-4692-a1c4-cacb2656161f --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
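When reading long `ps -fC kubelet` output like the value above, `grep -o` can pull out just the flag under review. A sketch with a shortened sample standing in for the real command line:

```bash
# Extract one flag from a kubelet-style command line; cmdline is a
# sample stand-in for the ps -fC kubelet output shown above.
cmdline='kubelet --anonymous-auth=false --authorization-mode=Webhook --read-only-port=0'
printf '%s\n' "$cmdline" | grep -o -- '--anonymous-auth=[^ ]*'
```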
-
-### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
-using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---authorization-mode=Webhook
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--authorization-mode' does not have 'AlwaysAllow'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3727 3667 3 23:26 ? 00:00:11 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-112 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c1cd514-db7b-4692-a1c4-cacb2656161f --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
-the location of the client CA file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_AUTHZ_ARGS variable.
---client-ca-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3727 3667 3 23:26 ? 00:00:11 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-112 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c1cd514-db7b-4692-a1c4-cacb2656161f --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.4 Ensure that the --read-only-port argument is set to 0 (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---read-only-port=0
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--read-only-port' is equal to '0' OR '--read-only-port' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3727 3667 3 23:26 ? 00:00:11 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-112 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c1cd514-db7b-4692-a1c4-cacb2656161f --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
-value other than 0.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---streaming-connection-idle-timeout=5m
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{.streamingConnectionIdleTimeout}' is present OR '{.streamingConnectionIdleTimeout}' is not present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVXpPVE00TUI0WERUSXpNREl5TmpJek1qVXpPRm9YRFRNek1ESXlNekl6TWpVegpPRm93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFNemt6T0RCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJOUHllSlcwNE9lMUN6clo5cHplN09WWlliMlh4QmNvWUJ5Z3JRcUwKYUdpR29tN2xLNGs2ZW1uaUZQekpiOU9EQ0hDRmYyVUZaVXZDQTQxcUVmNVB3NlNqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnhUdGJTbTVMTnFrditkYXQrCmczalBWWXJjcERBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBN08wV1NNVHo4djgwM3BZK3hFeDR2SUZaVEMraVoKUHVDc082eFRuVVVtb2dJaEFPY1NqQVdkUVJ0UmJvaUhJbTd2RHZuM1czT1JmYlUwaXU2UUhNcXdsTGs4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: 
-LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRpZ0F3SUJBZ0lJRjc3RlZRT1l5RzR3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU16a3pPREFlRncweU16QXlNall5TXpJMU16aGFGdzB5TkRBeQpNall5TXpJMU16aGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUXg5Z1lxUzVCbndwTDIKcUdOeStjNnR0Yk15VmE3ZTdzT0J1UWdaWHBHZ1hNcGxOZHFhVy9lZjNZTjQwTk5RUnl2SWdWeTMzU0ZHSFJ0VgpqaXBpOXRZbG8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVVyNlUvYVNMcm1ONkxiaDZHS1JJa3NoM1M1bEF3Q2dZSUtvWkl6ajBFQXdJRFNBQXcKUlFJaEFPS2paZmxRVU11RnZldlFkYzg3ckxPcnhoNUtyUGhlQUtkY0Y4YWdielFJQWlCcHBmUGNMMFRoZ1g5UAptcCtOZHJqa1hvQU5SRTlEWVdIRUlRbDdubytDdWc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZVRDQ0FSK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFrTVNJd0lBWURWUVFEREJseWEyVXlMV05zCmFXVnVkQzFqWVVBeE5qYzNORFV6T1RNNE1CNFhEVEl6TURJeU5qSXpNalV6T0ZvWERUTXpNREl5TXpJek1qVXoKT0Zvd0pERWlNQ0FHQTFVRUF3d1pjbXRsTWkxamJHbGxiblF0WTJGQU1UWTNOelExTXprek9EQlpNQk1HQnlxRwpTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUFCQlhFZ0x2Z3JLV09KdkZtVnJhNEhyY05YdmNwN3JMN3VFNW1IcFpkCmJmT2xkRDRkVlJ4NjRxak9DeUNpc2Vsczk4WDJLSXlieGNSNkpnbFU2VXRoOU5xalFqQkFNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlN2cFQ5cEl1dVkzb3R1SG9ZcApFaVN5SGRMbVVEQUtCZ2dxaGtqT1BRUURBZ05JQURCRkFpRUF3R2xqWXUxZkJpMHZROFczbWxueXVDNUJqMlBBCm14Sm1uS3BVSG8ydjJBZ0NJRndiblM3ajROUGtCT2hzRjJBeFhEZlZzdExoRWpqbmhPRHlQek1kT01STQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpZblVvZ2U2bHVuNFN1WVVaN3VBQTVGYXV6blBaQzV2WHlpc1R2SVRjOXFvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTWZZR0trdVFaOEtTOXFoamN2bk9yYld6TWxXdTN1N0RnYmtJR1Y2Um9GektaVFhhbWx2MwpuOTJEZU5EVFVFY3J5SUZjdDkwaFJoMGJWWTRxWXZiV0pRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
---protect-kernel-defaults=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--protect-kernel-defaults' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3727 3667 3 23:26 ? 00:00:11 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-112 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c1cd514-db7b-4692-a1c4-cacb2656161f --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove the --make-iptables-util-chains argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{.makeIPTablesUtilChains}' is present OR '{.makeIPTablesUtilChains}' is not present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVXpPVE00TUI0WERUSXpNREl5TmpJek1qVXpPRm9YRFRNek1ESXlNekl6TWpVegpPRm93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFNemt6T0RCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJOUHllSlcwNE9lMUN6clo5cHplN09WWlliMlh4QmNvWUJ5Z3JRcUwKYUdpR29tN2xLNGs2ZW1uaUZQekpiOU9EQ0hDRmYyVUZaVXZDQTQxcUVmNVB3NlNqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnhUdGJTbTVMTnFrditkYXQrCmczalBWWXJjcERBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBN08wV1NNVHo4djgwM3BZK3hFeDR2SUZaVEMraVoKUHVDc082eFRuVVVtb2dJaEFPY1NqQVdkUVJ0UmJvaUhJbTd2RHZuM1czT1JmYlUwaXU2UUhNcXdsTGs4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: 
-LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRpZ0F3SUJBZ0lJRjc3RlZRT1l5RzR3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU16a3pPREFlRncweU16QXlNall5TXpJMU16aGFGdzB5TkRBeQpNall5TXpJMU16aGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUXg5Z1lxUzVCbndwTDIKcUdOeStjNnR0Yk15VmE3ZTdzT0J1UWdaWHBHZ1hNcGxOZHFhVy9lZjNZTjQwTk5RUnl2SWdWeTMzU0ZHSFJ0VgpqaXBpOXRZbG8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVVyNlUvYVNMcm1ONkxiaDZHS1JJa3NoM1M1bEF3Q2dZSUtvWkl6ajBFQXdJRFNBQXcKUlFJaEFPS2paZmxRVU11RnZldlFkYzg3ckxPcnhoNUtyUGhlQUtkY0Y4YWdielFJQWlCcHBmUGNMMFRoZ1g5UAptcCtOZHJqa1hvQU5SRTlEWVdIRUlRbDdubytDdWc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZVRDQ0FSK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFrTVNJd0lBWURWUVFEREJseWEyVXlMV05zCmFXVnVkQzFqWVVBeE5qYzNORFV6T1RNNE1CNFhEVEl6TURJeU5qSXpNalV6T0ZvWERUTXpNREl5TXpJek1qVXoKT0Zvd0pERWlNQ0FHQTFVRUF3d1pjbXRsTWkxamJHbGxiblF0WTJGQU1UWTNOelExTXprek9EQlpNQk1HQnlxRwpTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUFCQlhFZ0x2Z3JLV09KdkZtVnJhNEhyY05YdmNwN3JMN3VFNW1IcFpkCmJmT2xkRDRkVlJ4NjRxak9DeUNpc2Vsczk4WDJLSXlieGNSNkpnbFU2VXRoOU5xalFqQkFNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlN2cFQ5cEl1dVkzb3R1SG9ZcApFaVN5SGRMbVVEQUtCZ2dxaGtqT1BRUURBZ05JQURCRkFpRUF3R2xqWXUxZkJpMHZROFczbWxueXVDNUJqMlBBCm14Sm1uS3BVSG8ydjJBZ0NJRndiblM3ajROUGtCT2hzRjJBeFhEZlZzdExoRWpqbmhPRHlQek1kT01STQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpZblVvZ2U2bHVuNFN1WVVaN3VBQTVGYXV6blBaQzV2WHlpc1R2SVRjOXFvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTWZZR0trdVFaOEtTOXFoamN2bk9yYld6TWxXdTN1N0RnYmtJR1Y2Um9GektaVFhhbWx2MwpuOTJEZU5EVFVFY3J5SUZjdDkwaFJoMGJWWTRxWXZiV0pRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-### 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
-
-
-**Result:** Not Applicable
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and remove the --hostname-override argument from the
-KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-### 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{.eventRecordQPS}' is present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVXpPVE00TUI0WERUSXpNREl5TmpJek1qVXpPRm9YRFRNek1ESXlNekl6TWpVegpPRm93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFNemt6T0RCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJOUHllSlcwNE9lMUN6clo5cHplN09WWlliMlh4QmNvWUJ5Z3JRcUwKYUdpR29tN2xLNGs2ZW1uaUZQekpiOU9EQ0hDRmYyVUZaVXZDQTQxcUVmNVB3NlNqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnhUdGJTbTVMTnFrditkYXQrCmczalBWWXJjcERBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBN08wV1NNVHo4djgwM3BZK3hFeDR2SUZaVEMraVoKUHVDc082eFRuVVVtb2dJaEFPY1NqQVdkUVJ0UmJvaUhJbTd2RHZuM1czT1JmYlUwaXU2UUhNcXdsTGs4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: 
-LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRpZ0F3SUJBZ0lJRjc3RlZRT1l5RzR3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU16a3pPREFlRncweU16QXlNall5TXpJMU16aGFGdzB5TkRBeQpNall5TXpJMU16aGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUXg5Z1lxUzVCbndwTDIKcUdOeStjNnR0Yk15VmE3ZTdzT0J1UWdaWHBHZ1hNcGxOZHFhVy9lZjNZTjQwTk5RUnl2SWdWeTMzU0ZHSFJ0VgpqaXBpOXRZbG8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVVyNlUvYVNMcm1ONkxiaDZHS1JJa3NoM1M1bEF3Q2dZSUtvWkl6ajBFQXdJRFNBQXcKUlFJaEFPS2paZmxRVU11RnZldlFkYzg3ckxPcnhoNUtyUGhlQUtkY0Y4YWdielFJQWlCcHBmUGNMMFRoZ1g5UAptcCtOZHJqa1hvQU5SRTlEWVdIRUlRbDdubytDdWc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZVRDQ0FSK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFrTVNJd0lBWURWUVFEREJseWEyVXlMV05zCmFXVnVkQzFqWVVBeE5qYzNORFV6T1RNNE1CNFhEVEl6TURJeU5qSXpNalV6T0ZvWERUTXpNREl5TXpJek1qVXoKT0Zvd0pERWlNQ0FHQTFVRUF3d1pjbXRsTWkxamJHbGxiblF0WTJGQU1UWTNOelExTXprek9EQlpNQk1HQnlxRwpTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUFCQlhFZ0x2Z3JLV09KdkZtVnJhNEhyY05YdmNwN3JMN3VFNW1IcFpkCmJmT2xkRDRkVlJ4NjRxak9DeUNpc2Vsczk4WDJLSXlieGNSNkpnbFU2VXRoOU5xalFqQkFNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlN2cFQ5cEl1dVkzb3R1SG9ZcApFaVN5SGRMbVVEQUtCZ2dxaGtqT1BRUURBZ05JQURCRkFpRUF3R2xqWXUxZkJpMHZROFczbWxueXVDNUJqMlBBCm14Sm1uS3BVSG8ydjJBZ0NJRndiblM3ajROUGtCT2hzRjJBeFhEZlZzdExoRWpqbmhPRHlQek1kT01STQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpZblVvZ2U2bHVuNFN1WVVaN3VBQTVGYXV6blBaQzV2WHlpc1R2SVRjOXFvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTWZZR0trdVFaOEtTOXFoamN2bk9yYld6TWxXdTN1N0RnYmtJR1Y2Um9GektaVFhhbWx2MwpuOTJEZU5EVFVFY3J5SUZjdDkwaFJoMGJWWTRxWXZiV0pRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
-of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
-to the location of the corresponding private key file.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
---tls-cert-file=
---tls-private-key-file=
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
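-
-On RKE2 these flags are supplied by the distribution itself, but as an illustrative sketch only (the config file path is an assumption; the certificate locations match this node's audit output), the equivalent Kubelet config file entries would be:
-
-```yaml
-# Hypothetical kubelet config file fragment; certificate paths taken from
-# the kubelet command line shown in the returned value for this check.
-tlsCertFile: /var/lib/rancher/rke2/agent/serving-kubelet.crt
-tlsPrivateKeyFile: /var/lib/rancher/rke2/agent/serving-kubelet.key
-```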
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-UID PID PPID C STIME TTY TIME CMD root 3727 3667 3 23:26 ? 00:00:11 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/rke2/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=ip-172-31-25-112 --kubeconfig=/var/lib/rancher/rke2/agent/kubelet.kubeconfig --log-file=/var/lib/rancher/rke2/agent/logs/kubelet.log --log-file-max-size=50 --logtostderr=false --node-labels=cattle.io/os=linux,rke.cattle.io/machine=5c1cd514-db7b-4692-a1c4-cacb2656161f --pod-infra-container-image=index.docker.io/rancher/pause:3.6 --pod-manifest-path=/var/lib/rancher/rke2/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --stderrthreshold=FATAL --tls-cert-file=/var/lib/rancher/rke2/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/rke2/agent/serving-kubelet.key
-```
-
-### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `rotateCertificates` to `true`, or
-remove it altogether to use the default value.
-If using command line arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
-variable.
-Based on your system, restart the kubelet service. For example,
-systemctl daemon-reload
-systemctl restart kubelet.service
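-
-As a sketch (client certificate rotation is also the upstream default, so this entry is only needed if it was previously disabled), the corresponding Kubelet config file entry would be:
-
-```yaml
-# Illustrative kubelet config file fragment.
-rotateCertificates: true
-```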
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{.rotateCertificates}' is present OR '{.rotateCertificates}' is not present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVXpPVE00TUI0WERUSXpNREl5TmpJek1qVXpPRm9YRFRNek1ESXlNekl6TWpVegpPRm93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFNemt6T0RCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJOUHllSlcwNE9lMUN6clo5cHplN09WWlliMlh4QmNvWUJ5Z3JRcUwKYUdpR29tN2xLNGs2ZW1uaUZQekpiOU9EQ0hDRmYyVUZaVXZDQTQxcUVmNVB3NlNqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnhUdGJTbTVMTnFrditkYXQrCmczalBWWXJjcERBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBN08wV1NNVHo4djgwM3BZK3hFeDR2SUZaVEMraVoKUHVDc082eFRuVVVtb2dJaEFPY1NqQVdkUVJ0UmJvaUhJbTd2RHZuM1czT1JmYlUwaXU2UUhNcXdsTGs4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRpZ0F3SUJBZ0lJRjc3RlZRT1l5RzR3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU16a3pPREFlRncweU16QXlNall5TXpJMU16aGFGdzB5TkRBeQpNall5TXpJMU16aGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUXg5Z1lxUzVCbndwTDIKcUdOeStjNnR0Yk15VmE3ZTdzT0J1UWdaWHBHZ1hNcGxOZHFhVy9lZjNZTjQwTk5RUnl2SWdWeTMzU0ZHSFJ0VgpqaXBpOXRZbG8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVVyNlUvYVNMcm1ONkxiaDZHS1JJa3NoM1M1bEF3Q2dZSUtvWkl6ajBFQXdJRFNBQXcKUlFJaEFPS2paZmxRVU11RnZldlFkYzg3ckxPcnhoNUtyUGhlQUtkY0Y4YWdielFJQWlCcHBmUGNMMFRoZ1g5UAptcCtOZHJqa1hvQU5SRTlEWVdIRUlRbDdubytDdWc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZVRDQ0FSK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFrTVNJd0lBWURWUVFEREJseWEyVXlMV05zCmFXVnVkQzFqWVVBeE5qYzNORFV6T1RNNE1CNFhEVEl6TURJeU5qSXpNalV6T0ZvWERUTXpNREl5TXpJek1qVXoKT0Zvd0pERWlNQ0FHQTFVRUF3d1pjbXRsTWkxamJHbGxiblF0WTJGQU1UWTNOelExTXprek9EQlpNQk1HQnlxRwpTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUFCQlhFZ0x2Z3JLV09KdkZtVnJhNEhyY05YdmNwN3JMN3VFNW1IcFpkCmJmT2xkRDRkVlJ4NjRxak9DeUNpc2Vsczk4WDJLSXlieGNSNkpnbFU2VXRoOU5xalFqQkFNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlN2cFQ5cEl1dVkzb3R1SG9ZcApFaVN5SGRMbVVEQUtCZ2dxaGtqT1BRUURBZ05JQURCRkFpRUF3R2xqWXUxZkJpMHZROFczbWxueXVDNUJqMlBBCm14Sm1uS3BVSG8ydjJBZ0NJRndiblM3ajROUGtCT2hzRjJBeFhEZlZzdExoRWpqbmhPRHlQek1kT01STQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpZblVvZ2U2bHVuNFN1WVVaN3VBQTVGYXV6blBaQzV2WHlpc1R2SVRjOXFvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTWZZR0trdVFaOEtTOXFoamN2bk9yYld6TWxXdTN1N0RnYmtJR1Y2Um9GektaVFhhbWx2MwpuOTJEZU5EVFVFY3J5SUZjdDkwaFJoMGJWWTRxWXZiV0pRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
-
-
-**Result:** pass
-
-**Remediation:**
-Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
---feature-gates=RotateKubeletServerCertificate=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
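-
-For a kubeadm-style node, a hypothetical drop-in illustrating this remediation might look like the following (the variable name follows the 10-kubeadm.conf convention referenced above; this is a sketch, not RKE2 configuration):
-
-```ini
-# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (illustrative fragment)
-[Service]
-Environment="KUBELET_CERTIFICATE_ARGS=--feature-gates=RotateKubeletServerCertificate=true"
-```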
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{.featureGates.RotateKubeletServerCertificate}' is present OR '{.featureGates.RotateKubeletServerCertificate}' is not present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVXpPVE00TUI0WERUSXpNREl5TmpJek1qVXpPRm9YRFRNek1ESXlNekl6TWpVegpPRm93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFNemt6T0RCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJOUHllSlcwNE9lMUN6clo5cHplN09WWlliMlh4QmNvWUJ5Z3JRcUwKYUdpR29tN2xLNGs2ZW1uaUZQekpiOU9EQ0hDRmYyVUZaVXZDQTQxcUVmNVB3NlNqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnhUdGJTbTVMTnFrditkYXQrCmczalBWWXJjcERBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBN08wV1NNVHo4djgwM3BZK3hFeDR2SUZaVEMraVoKUHVDc082eFRuVVVtb2dJaEFPY1NqQVdkUVJ0UmJvaUhJbTd2RHZuM1czT1JmYlUwaXU2UUhNcXdsTGs4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRpZ0F3SUJBZ0lJRjc3RlZRT1l5RzR3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU16a3pPREFlRncweU16QXlNall5TXpJMU16aGFGdzB5TkRBeQpNall5TXpJMU16aGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUXg5Z1lxUzVCbndwTDIKcUdOeStjNnR0Yk15VmE3ZTdzT0J1UWdaWHBHZ1hNcGxOZHFhVy9lZjNZTjQwTk5RUnl2SWdWeTMzU0ZHSFJ0VgpqaXBpOXRZbG8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVVyNlUvYVNMcm1ONkxiaDZHS1JJa3NoM1M1bEF3Q2dZSUtvWkl6ajBFQXdJRFNBQXcKUlFJaEFPS2paZmxRVU11RnZldlFkYzg3ckxPcnhoNUtyUGhlQUtkY0Y4YWdielFJQWlCcHBmUGNMMFRoZ1g5UAptcCtOZHJqa1hvQU5SRTlEWVdIRUlRbDdubytDdWc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZVRDQ0FSK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFrTVNJd0lBWURWUVFEREJseWEyVXlMV05zCmFXVnVkQzFqWVVBeE5qYzNORFV6T1RNNE1CNFhEVEl6TURJeU5qSXpNalV6T0ZvWERUTXpNREl5TXpJek1qVXoKT0Zvd0pERWlNQ0FHQTFVRUF3d1pjbXRsTWkxamJHbGxiblF0WTJGQU1UWTNOelExTXprek9EQlpNQk1HQnlxRwpTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUFCQlhFZ0x2Z3JLV09KdkZtVnJhNEhyY05YdmNwN3JMN3VFNW1IcFpkCmJmT2xkRDRkVlJ4NjRxak9DeUNpc2Vsczk4WDJLSXlieGNSNkpnbFU2VXRoOU5xalFqQkFNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlN2cFQ5cEl1dVkzb3R1SG9ZcApFaVN5SGRMbVVEQUtCZ2dxaGtqT1BRUURBZ05JQURCRkFpRUF3R2xqWXUxZkJpMHZROFczbWxueXVDNUJqMlBBCm14Sm1uS3BVSG8ydjJBZ0NJRndiblM3ajROUGtCT2hzRjJBeFhEZlZzdExoRWpqbmhPRHlQek1kT01STQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpZblVvZ2U2bHVuNFN1WVVaN3VBQTVGYXV6blBaQzV2WHlpc1R2SVRjOXFvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTWZZR0trdVFaOEtTOXFoamN2bk9yYld6TWxXdTN1N0RnYmtJR1Y2Um9GektaVFhhbWx2MwpuOTJEZU5EVFVFY3J5SUZjdDkwaFJoMGJWWTRxWXZiV0pRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-or to a subset of these values.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --tls-cipher-suites parameter as follows, or to a subset of these values.
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
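-
-As an illustrative sketch, the corresponding Kubelet config file entry (the field name matches the `.tlsCipherSuites` path used in the expected result below; any subset of the suites listed in the remediation is acceptable) would be:
-
-```yaml
-# Illustrative kubelet config file fragment.
-tlsCipherSuites:
-  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
-  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
-  - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
-  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
-```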
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```bash
-/bin/cat /etc/rancher/rke2/rke2.yaml
-```
-
-**Expected Result**:
-
-```console
-'{range .tlsCipherSuites[:]}{}{','}{end}' is present
-```
-
-**Returned Value**:
-
-```console
-apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWtNU0l3SUFZRFZRUUREQmx5YTJVeUxYTmwKY25abGNpMWpZVUF4TmpjM05EVXpPVE00TUI0WERUSXpNREl5TmpJek1qVXpPRm9YRFRNek1ESXlNekl6TWpVegpPRm93SkRFaU1DQUdBMVVFQXd3WmNtdGxNaTF6WlhKMlpYSXRZMkZBTVRZM056UTFNemt6T0RCWk1CTUdCeXFHClNNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJOUHllSlcwNE9lMUN6clo5cHplN09WWlliMlh4QmNvWUJ5Z3JRcUwKYUdpR29tN2xLNGs2ZW1uaUZQekpiOU9EQ0hDRmYyVUZaVXZDQTQxcUVmNVB3NlNqUWpCQU1BNEdBMVVkRHdFQgovd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01CMEdBMVVkRGdRV0JCUnhUdGJTbTVMTnFrditkYXQrCmczalBWWXJjcERBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBN08wV1NNVHo4djgwM3BZK3hFeDR2SUZaVEMraVoKUHVDc082eFRuVVVtb2dJaEFPY1NqQVdkUVJ0UmJvaUhJbTd2RHZuM1czT1JmYlUwaXU2UUhNcXdsTGs4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRpZ0F3SUJBZ0lJRjc3RlZRT1l5RzR3Q2dZSUtvWkl6ajBFQXdJd0pERWlNQ0FHQTFVRUF3d1oKY210bE1pMWpiR2xsYm5RdFkyRkFNVFkzTnpRMU16a3pPREFlRncweU16QXlNall5TXpJMU16aGFGdzB5TkRBeQpNall5TXpJMU16aGFNREF4RnpBVkJnTlZCQW9URG5ONWMzUmxiVHB0WVhOMFpYSnpNUlV3RXdZRFZRUURFd3h6CmVYTjBaVzA2WVdSdGFXNHdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CQndOQ0FBUXg5Z1lxUzVCbndwTDIKcUdOeStjNnR0Yk15VmE3ZTdzT0J1UWdaWHBHZ1hNcGxOZHFhVy9lZjNZTjQwTk5RUnl2SWdWeTMzU0ZHSFJ0VgpqaXBpOXRZbG8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3Ckh3WURWUjBqQkJnd0ZvQVVyNlUvYVNMcm1ONkxiaDZHS1JJa3NoM1M1bEF3Q2dZSUtvWkl6ajBFQXdJRFNBQXcKUlFJaEFPS2paZmxRVU11RnZldlFkYzg3ckxPcnhoNUtyUGhlQUtkY0Y4YWdielFJQWlCcHBmUGNMMFRoZ1g5UAptcCtOZHJqa1hvQU5SRTlEWVdIRUlRbDdubytDdWc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZVRDQ0FSK2dBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFrTVNJd0lBWURWUVFEREJseWEyVXlMV05zCmFXVnVkQzFqWVVBeE5qYzNORFV6T1RNNE1CNFhEVEl6TURJeU5qSXpNalV6T0ZvWERUTXpNREl5TXpJek1qVXoKT0Zvd0pERWlNQ0FHQTFVRUF3d1pjbXRsTWkxamJHbGxiblF0WTJGQU1UWTNOelExTXprek9EQlpNQk1HQnlxRwpTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUFCQlhFZ0x2Z3JLV09KdkZtVnJhNEhyY05YdmNwN3JMN3VFNW1IcFpkCmJmT2xkRDRkVlJ4NjRxak9DeUNpc2Vsczk4WDJLSXlieGNSNkpnbFU2VXRoOU5xalFqQkFNQTRHQTFVZER3RUIKL3dRRUF3SUNwREFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlN2cFQ5cEl1dVkzb3R1SG9ZcApFaVN5SGRMbVVEQUtCZ2dxaGtqT1BRUURBZ05JQURCRkFpRUF3R2xqWXUxZkJpMHZROFczbWxueXVDNUJqMlBBCm14Sm1uS3BVSG8ydjJBZ0NJRndiblM3ajROUGtCT2hzRjJBeFhEZlZzdExoRWpqbmhPRHlQek1kT01STQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpZblVvZ2U2bHVuNFN1WVVaN3VBQTVGYXV6blBaQzV2WHlpc1R2SVRjOXFvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFTWZZR0trdVFaOEtTOXFoamN2bk9yYld6TWxXdTN1N0RnYmtJR1Y2Um9GektaVFhhbWx2MwpuOTJEZU5EVFVFY3J5SUZjdDkwaFJoMGJWWTRxWXZiV0pRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
-```
-
-## 5.1 RBAC and Service Accounts
-### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
-if they need this role or if they could use a role with fewer privileges.
-Where possible, first bind users to a lower privileged role and then remove the
-clusterrolebinding to the cluster-admin role :
-kubectl delete clusterrolebinding [name]
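-
-A hedged starting point for the identification step, in the same `kubectl`/`jq` idiom as the audit commands elsewhere in this report (assumes cluster access; review the output manually before deleting anything):
-
-```bash
-# List every ClusterRoleBinding that grants cluster-admin, with its subjects.
-kubectl get clusterrolebindings -o json \
-  | jq -r '.items[] | select(.roleRef.name=="cluster-admin") | "\(.metadata.name): \(.subjects)"'
-```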
-
-### 5.1.2 Minimize access to secrets (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove get, list and watch access to Secret objects in the cluster.
-
-### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible replace any use of wildcards in clusterroles and roles with specific
-objects or actions.
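-
-For illustration only (the role name, namespace, and rules are hypothetical), a Role scoped to specific resources and verbs rather than wildcards might look like:
-
-```yaml
-apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  name: pod-reader        # illustrative name
-  namespace: default
-rules:
-  - apiGroups: [""]
-    resources: ["pods"]              # specific resource instead of "*"
-    verbs: ["get", "list", "watch"]  # specific verbs instead of "*"
-```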
-
-### 5.1.4 Minimize access to create pods (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove create access to pod objects in the cluster.
-
-### 5.1.5 Ensure that default service accounts are not actively used. (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Create explicit service accounts wherever a Kubernetes workload requires specific access
-to the Kubernetes API server.
-Modify the configuration of each default service account to include this value
-automountServiceAccountToken: false
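-
-As a sketch, the modified default service account for a given namespace (shown here for `default`; repeat per namespace) would be:
-
-```yaml
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: default
-  namespace: default
-automountServiceAccountToken: false
-```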
-
-**Audit Script:** `check_for_default_sa.sh`
-
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l)
-if [[ ${count_sa} -gt 0 ]]; then
- echo "false"
- exit
-fi
-
-for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name")
-do
- for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[].kind=="ServiceAccount" and .subjects[].name=="default") or (.subjects[].kind=="Group" and .subjects[].name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"')
- do
- read kind name <<<$(IFS=","; echo $result)
- resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[] != "podsecuritypolicies")' | wc -l)
- if [[ ${resource_count} -gt 0 ]]; then
- echo "false"
- exit
- fi
- done
-done
-
-
-echo "true"
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_default_sa.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-Error from server (Forbidden): serviceaccounts is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "serviceaccounts" in API group "" at the cluster scope Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "calico-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-fleet-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-impersonation-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cattle-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "cis-operator-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "default" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-node-lease" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-public" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-system" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "local" Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:cis-operator-system:cis-serviceaccount" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "tigera-operator" true
-```
-
-### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Modify the definition of pods and service accounts which do not need to mount service
-account tokens to disable it.
-
-### 5.1.7 Avoid use of system:masters group (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Remove the system:masters group from all users in the cluster.
-
-### 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove the impersonate, bind and escalate rights from subjects.
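-
-A hedged way to find candidates for review, in the same `kubectl`/`jq` idiom as the audit commands in this report (assumes cluster access; verify each hit before changing anything):
-
-```bash
-# Sketch: name ClusterRoles whose rules grant bind, impersonate or escalate.
-kubectl get clusterroles -o json \
-  | jq -r '.items[] | select([.rules[]?.verbs[]?] | any(IN("bind","impersonate","escalate"))) | .metadata.name'
-```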
-
-## 5.2 Pod Security Standards
-### 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that either Pod Security Admission or an external policy control system is in place
-for every namespace which contains user workloads.
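-
-For Pod Security Admission, enforcement is enabled per namespace via labels. As an illustration (the namespace name is hypothetical):
-
-```yaml
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: my-app        # illustrative namespace
-  labels:
-    pod-security.kubernetes.io/enforce: restricted
-```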
-
-### 5.2.2 Minimize the admission of privileged containers (Manual)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of privileged containers.
-
-**Audit:**
-
-```bash
-kubectl get psp global-restricted-psp -o json | jq -r '.spec.runAsUser.rule'
-```
-
-**Expected Result**:
-
-```console
-'MustRunAsNonRoot' is present
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp"
-```
-
-### 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostPID` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostIPC` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of `hostNetwork` containers.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.7 Minimize the admission of root containers (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
-or `MustRunAs` with a UID range that excludes 0 is set.
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp" --count=0
-```
-
-### 5.2.8 Minimize the admission of containers with the NET_RAW capability (Automated)
-
-
-**Result:** fail
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with the `NET_RAW` capability.
-
-**Audit:**
-
-```bash
-kubectl get psp global-restricted-psp -o json | jq -r .spec.requiredDropCapabilities[]
-```
-
-**Expected Result**:
-
-```console
-'ALL' is present
-```
-
-**Returned Value**:
-
-```console
-error: the server doesn't have a resource type "psp"
-```
-
-### 5.2.9 Minimize the admission of containers with added capabilities (Automated)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that `allowedCapabilities` is not present in policies for the cluster unless
-it is set to an empty array.
-
-### 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Review the use of capabilities in applications running on your cluster. Where a namespace
-contains applications which do not require any Linux capabilities to operate, consider adding
-a PSP which forbids the admission of containers which do not drop all capabilities.
-
-### 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
-
-### 5.2.12 Minimize the admission of HostPath volumes (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers with `hostPath` volumes.
-
-### 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Add policies to each namespace in the cluster which has user workloads to restrict the
-admission of containers which use `hostPort` sections.
-
-## 5.3 Network Policies and CNI
-### 5.3.1 Ensure that the CNI in use supports Network Policies (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-If the CNI plugin in use does not support network policies, consideration should be given to
-making use of a different plugin, or finding an alternate mechanism for restricting traffic
-in the Kubernetes cluster.
-
-**Audit:**
-
-```bash
-kubectl get pods --all-namespaces --selector='k8s-app in (calico-node, canal, cilium)' -o name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-'count' is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=1
-```
-
-### 5.3.2 Ensure that all Namespaces have Network Policies defined (Automated)
-
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create NetworkPolicy objects as you need them.
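-
-As a minimal sketch, a default-deny ingress policy for a single namespace (shown here for `default`; repeat per namespace) would be:
-
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: default-deny-ingress
-  namespace: default
-spec:
-  podSelector: {}      # selects all pods in the namespace
-  policyTypes:
-    - Ingress          # no ingress rules listed, so all ingress is denied
-```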
-
-**Audit Script:** `check_for_rke2_network_policies.sh`
-
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
- echo "false"
-}
-
-trap 'handle_error' ERR
-
-for namespace in kube-system kube-public default; do
- policy_count=$(/var/lib/rancher/rke2/bin/kubectl get networkpolicy -n ${namespace} -o json | jq -r '.items | length')
- if [ ${policy_count} -eq 0 ]; then
- echo "false"
- exit
- fi
-done
-
-echo "true"
-
-```
-
-**Audit Execution:**
-
-```bash
-./check_for_rke2_network_policies.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is present
-```
-
-**Returned Value**:
-
-```console
-true
-```
-
-## 5.4 Secrets Management
-### 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-If possible, rewrite application code to read Secrets from mounted secret files, rather than
-from environment variables.
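-
-As an illustration only (the pod name, image, and Secret name are hypothetical), mounting a Secret as files rather than exposing it through environment variables looks like:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: secret-as-file          # illustrative name
-spec:
-  containers:
-    - name: app
-      image: nginx              # illustrative image
-      volumeMounts:
-        - name: creds
-          mountPath: /etc/creds # keys appear as files under this path
-          readOnly: true
-  volumes:
-    - name: creds
-      secret:
-        secretName: app-credentials  # illustrative Secret name
-```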
-
-### 5.4.2 Consider external secret storage (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Refer to the Secrets management options offered by your cloud provider or a third-party
-secrets management solution.
-
-## 5.5 Extensible Admission Control
-### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and setup image provenance.
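-
-As a sketch, image provenance is enabled by pointing the API server's `--admission-control-config-file` flag at an `AdmissionConfiguration` such as the following (file paths are illustrative):
-
-```yaml
-apiVersion: apiserver.config.k8s.io/v1
-kind: AdmissionConfiguration
-plugins:
-  - name: ImagePolicyWebhook
-    configuration:
-      imagePolicy:
-        # kubeconfig pointing at the external image policy webhook (path illustrative)
-        kubeConfigFile: /etc/kubernetes/image-policy/kubeconfig.yaml
-        allowTTL: 50        # seconds to cache "allow" responses
-        denyTTL: 50         # seconds to cache "deny" responses
-        retryBackoff: 500   # milliseconds between webhook retries
-        defaultAllow: false # fail closed if the webhook is unreachable
-```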
-
-## 5.7 General Policies
-### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create namespaces for objects in your deployment as you need
-them.
-
-### 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
-An example is shown below:
-
-```yaml
-securityContext:
-  seccompProfile:
-    type: RuntimeDefault
-```
-
-### 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
-suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
-Containers.
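-
-A minimal sketch of a restrictive container `securityContext` (names and values are illustrative; adjust to what your workloads actually require):
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: restricted-demo   # hypothetical name
-spec:
-  containers:
-    - name: app
-      image: nginx
-      securityContext:
-        runAsNonRoot: true               # refuse to start as UID 0
-        allowPrivilegeEscalation: false  # block setuid-style escalation
-        readOnlyRootFilesystem: true     # mount the root filesystem read-only
-        capabilities:
-          drop:
-            - ALL                        # drop all Linux capabilities
-```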
-
-### 5.7.4 The default namespace should not be used (Manual)
-
-
-**Result:** warn
-
-**Remediation:**
-Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
-resources and that all new resources are created in a specific namespace.
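-
-As a sketch, resources can be segregated by creating a dedicated Namespace (the name is illustrative) and setting `metadata.namespace` explicitly in every new manifest rather than relying on `default`:
-
-```yaml
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: team-a   # hypothetical namespace, e.g. one per team or application
-```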
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-security-best-practices.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-security-best-practices.md
new file mode 100644
index 00000000000..0d98495ffec
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-security-best-practices.md
@@ -0,0 +1,21 @@
+---
+title: Rancher Security Best Practices
+---
+
+
+
+
+
+### Restrict Public Access to /version and /rancherversion
+
+The upstream (local) Rancher instance exposes the running Rancher version and the Go version used to build it. This information is available at the `/version` path, which is used for tasks such as automating version upgrades or confirming that a deployment succeeded. The upstream instance also exposes the Rancher version at the `/rancherversion` path.
+
+An attacker could abuse this information to identify the running Rancher version and correlate it with potential vulnerabilities to exploit. If your upstream Rancher instance is publicly accessible on the internet, use a layer 7 firewall to block the `/version` and `/rancherversion` paths.
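+
+As a hedged sketch, assuming an NGINX reverse proxy sits in front of Rancher, these paths can be blocked with exact-match `location` rules like the following (adapt to whatever layer 7 firewall you actually run):
+
+```nginx
+# Deny the version-disclosure endpoints while proxying everything else.
+location = /version {
+    return 403;
+}
+location = /rancherversion {
+    return 403;
+}
+```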
+
+For more details on protecting your server, see [OWASP Web Application Security Testing - Enumerate Infrastructure and Application Admin Interfaces](https://owasp.org/www-project-web-security-testing-guide/stable/4-Web_Application_Security_Testing/02-Configuration_and_Deployment_Management_Testing/05-Enumerate_Infrastructure_and_Application_Admin_Interfaces.html).
+
+### Session Management
+
+Some environments may require additional security controls for session management. For example, you may want to limit a user's number of concurrent active sessions, or restrict the geographic locations from which sessions can be initiated. Rancher does not support these features by default.
+
+If you require such features, use a layer 7 firewall in combination with [external authentication](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/authentication-config.md#外部认证与本地认证).
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-security.md b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-security.md
new file mode 100644
index 00000000000..935aaa2b780
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/reference-guides/rancher-security/rancher-security.md
@@ -0,0 +1,95 @@
+---
+title: Security
+---
+
+
+
+
+
+