diff --git a/content/k3s/latest/en/installation/ha-embedded/_index.md b/content/k3s/latest/en/installation/ha-embedded/_index.md index 9d4a1d85cc6..6d4bb7bab49 100644 --- a/content/k3s/latest/en/installation/ha-embedded/_index.md +++ b/content/k3s/latest/en/installation/ha-embedded/_index.md @@ -32,3 +32,5 @@ There are a few config flags that must be the same in all server nodes: ## Existing clusters If you have an existing cluster using the default embedded SQLite database, you can convert it to etcd by simply restarting your K3s server with the `--cluster-init` flag. Once you've done that, you'll be able to add additional instances as described above. + +>**Important:** K3s v1.22.2 and newer support migration from SQLite to etcd. Older versions will create a new empty datastore if you add `--cluster-init` to an existing server. diff --git a/content/k3s/latest/en/installation/ha/_index.md b/content/k3s/latest/en/installation/ha/_index.md index a7bc5491eac..4e5072bfff8 100644 --- a/content/k3s/latest/en/installation/ha/_index.md +++ b/content/k3s/latest/en/installation/ha/_index.md @@ -35,7 +35,7 @@ K3s requires two or more server nodes for this HA configuration. See the [Instal When running the `k3s server` command on these nodes, you must set the `datastore-endpoint` parameter so that K3s knows how to connect to the external datastore. The `token` parameter can also be used to set a deterministic token when adding nodes. When empty, this token will be generated automatically for further use. -For example, a command like the following could be used to install the K3s server with a MySQL database as the external datastore and [set a token]({{}}/k3s/latest/en/installation/install-options/server-config/#cluster-options}}): +For example, a command like the following could be used to install the K3s server with a MySQL database as the external datastore and [set a token]({{}}/k3s/latest/en/installation/install-options/server-config/#cluster-options): ```bash curl -sfL https://get.k3s.io | sh -s - server \ @@ -72,7 +72,7 @@ If the first server node was started without the `--token` CLI flag or `K3S_TOKE cat /var/lib/rancher/k3s/server/token ``` -Additional server nodes can then be added [using the token]({{}}/k3s/latest/en/installation/install-options/server-config/#cluster-options}}): +Additional server nodes can then be added [using the token]({{}}/k3s/latest/en/installation/install-options/server-config/#cluster-options): ```bash curl -sfL https://get.k3s.io | sh -s - server \ diff --git a/content/k3s/latest/en/installation/network-options/_index.md b/content/k3s/latest/en/installation/network-options/_index.md index dcc65a03aea..0889039827e 100644 --- a/content/k3s/latest/en/installation/network-options/_index.md +++ b/content/k3s/latest/en/installation/network-options/_index.md @@ -22,7 +22,7 @@ If you wish to use WireGuard as your flannel backend it may require additional k ### Custom CNI -Run K3s with `--flannel-backend=none` and install your CNI of choice. IP Forwarding should be enabled for Canal and Calico. Please reference the steps below. +Run K3s with `--flannel-backend=none` and install your CNI of choice. Most CNI plugins come with their own network policy engine, so it is recommended to set `--disable-network-policy` as well to avoid conflicts. IP Forwarding should be enabled for Canal and Calico. Please reference the steps below. {{% tabs %}} {{% tab "Canal" %}} @@ -74,15 +74,24 @@ You should see that IP forwarding is set to true. 
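+You can quickly confirm that IP forwarding is enabled on the node itself (a sketch; this assumes a Linux host with `sysctl` available):
+
+```bash
+sysctl net.ipv4.ip_forward
+# Expected output: net.ipv4.ip_forward = 1
+```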
Dual-stack networking must be configured when the cluster is first created. It cannot be enabled on an existing single-stack cluster.

+Dual-stack is supported on k3s v1.21 or above.
+
To enable dual-stack in k3s, you must provide valid dual-stack `cluster-cidr` and `service-cidr`, and set `disable-network-policy` on all server nodes. Both servers and agents must provide valid dual-stack `node-ip` settings. Node address auto-detection and network policy enforcement are not supported on dual-stack clusters when using the default flannel CNI. In addition, only the vxlan backend is currently supported. This is an example of a valid configuration:

```
-node-ip: 10.0.10.7,2a05:d012:c6f:4611:5c2:5602:eed2:898c
-cluster-cidr: 10.42.0.0/16,2001:cafe:42:0::/56
-service-cidr: 10.43.0.0/16,2001:cafe:42:1::/112
-disable-network-policy: true
+k3s server --node-ip 10.0.10.7,2a05:d012:c6f:4611:5c2:5602:eed2:898c --cluster-cidr 10.42.0.0/16,2001:cafe:42:0::/56 --service-cidr 10.43.0.0/16,2001:cafe:42:1::/112 --disable-network-policy
```

Note that you can choose any valid `cluster-cidr` and `service-cidr` values; however, the `node-ip` values must correspond to the IP addresses of your main interface. Remember to allow IPv6 traffic if you are deploying in a public cloud.

If you are using a custom CNI plugin (i.e., a CNI plugin other than flannel), the previous configuration might not be enough to enable dual-stack. Please check the plugin's documentation for how to enable dual-stack and verify whether network policies can be enabled.
+
+### IPv6-only installation
+
+IPv6-only setup is supported on k3s v1.22 or above. As in dual-stack operation, IPv6 node addresses cannot be auto-detected; all nodes must have an explicitly configured IPv6 `node-ip`. This is an example of a valid configuration:
+
+```
+k3s server --node-ip 2a05:d012:c6f:4611:5c2:5602:eed2:898c --cluster-cidr 2001:cafe:42:0::/56 --service-cidr 2001:cafe:42:1::/112 --disable-network-policy
+```
+
+Note that you can specify only one IPv6 `cluster-cidr` value.
diff --git a/content/k3s/latest/en/security/_index.md b/content/k3s/latest/en/security/_index.md
index f8b4285fc49..ba6ef7ccbd6 100644
--- a/content/k3s/latest/en/security/_index.md
+++ b/content/k3s/latest/en/security/_index.md
@@ -5,7 +5,7 @@ weight: 90

This section describes the methodology and means of securing a K3s cluster. It is broken into two sections. These guides assume k3s is running with embedded etcd.

-The documents below apply to both CIS 1.5 & 1.6.
+The documents below apply to CIS Kubernetes Benchmark v1.6.

* [Hardening Guide](./hardening_guide/)
* [CIS Benchmark Self-Assessment Guide](./self_assessment/)
diff --git a/content/k3s/latest/en/security/hardening_guide/_index.md b/content/k3s/latest/en/security/hardening_guide/_index.md
index 90e888a8cd4..e22571f30ef 100644
--- a/content/k3s/latest/en/security/hardening_guide/_index.md
+++ b/content/k3s/latest/en/security/hardening_guide/_index.md
@@ -3,12 +3,12 @@ title: "CIS Hardening Guide"
weight: 80
---

-This document provides prescriptive guidance for hardening a production installation of K3s. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Information Security (CIS).
+This document provides prescriptive guidance for hardening a production installation of K3s. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).
K3s has a number of security mitigations applied and turned on by default and will pass a number of the Kubernetes CIS controls without modification. There are some notable exceptions to this that require manual intervention to fully comply with the CIS Benchmark:

1. K3s will not modify the host operating system. Any host-level modifications will need to be done manually.
-2. Certain CIS policy controls for PodSecurityPolicies and NetworkPolicies will restrict the functionality of this cluster. You must opt into having K3s configure these by adding the appropriate options (enabling of admission plugins) to your command-line flags or configuration file as well as manually applying appropriate policies. Further detail in the sections below.
+2. Certain CIS policy controls for `PodSecurityPolicies` and `NetworkPolicies` will restrict the functionality of the cluster. You must opt into having K3s configure these by adding the appropriate options (enabling of admission plugins) to your command-line flags or configuration file, as well as manually applying the appropriate policies. Further details are presented in the sections below.

The first section (1.1) of the CIS Benchmark concerns itself primarily with pod manifest permissions and ownership. K3s doesn't utilize these for the core components since everything is packaged into a single binary.

@@ -31,23 +31,24 @@ vm.panic_on_oom=0
vm.overcommit_memory=1
kernel.panic=10
kernel.panic_on_oops=1
+kernel.keys.root_maxbytes=25000000
```

## Kubernetes Runtime Requirements

-The runtime requirements to comply with the CIS Benchmark are centered around pod security (PSPs) and network policies. These are outlined in this section. K3s doesn't apply any default PSPs or network policies however K3s ships with a controller that is meant to apply a given set of network policies. By default, K3s runs with the "NodeRestriction" admission controller. To enable PSPs, add the following to the K3s start command: `--kube-apiserver-arg="enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount"`. This will have the effect of maintaining the "NodeRestriction" plugin as well as enabling the "PodSecurityPolicy".
+The runtime requirements to comply with the CIS Benchmark are centered around pod security (PSPs), network policies, and API Server audit logs. These are outlined in this section. K3s doesn't apply any default PSPs or network policies. However, K3s ships with a controller that is meant to apply a given set of network policies. By default, K3s runs with the `NodeRestriction` admission controller. To enable PSPs, add the following to the K3s start command: `--kube-apiserver-arg="enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount"`. This will have the effect of maintaining the `NodeRestriction` plugin as well as enabling the `PodSecurityPolicy`. The same applies to API Server audit logs: K3s doesn't enable them by default, so the audit log configuration and audit policy must be created manually.

-### PodSecurityPolicies
+### Pod Security Policies

When PSPs are enabled, a policy can be applied to satisfy the necessary controls described in section 5.2 of the CIS Benchmark.

-Here's an example of a compliant PSP.
+Here is an example of a compliant PSP.
```yaml apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: - name: cis1.5-compliant-psp + name: restricted-psp spec: privileged: false # CIS - 5.2.1 allowPrivilegeEscalation: false # CIS - 5.2.5 @@ -59,7 +60,9 @@ spec: - 'projected' - 'secret' - 'downwardAPI' + - 'csi' - 'persistentVolumeClaim' + - 'ephemeral' hostNetwork: false # CIS - 5.2.4 hostIPC: false # CIS - 5.2.3 hostPID: false # CIS - 5.2.2 @@ -80,7 +83,7 @@ spec: readOnlyRootFilesystem: false ``` -Before the above PSP to be effective, we need to create a couple ClusterRoles and ClusterRole. We also need to include a "system unrestricted policy" which is needed for system-level pods that require additional privileges. +For the above PSP to be effective, we need to create a ClusterRole and a ClusterRoleBinding. We also need to include a "system unrestricted policy" which is needed for system-level pods that require additional privileges. These can be combined with the PSP yaml above and NetworkPolicy yaml below into a single file and placed in the `/var/lib/rancher/k3s/server/manifests` directory. Below is an example of a `policy.yaml` file. @@ -88,7 +91,7 @@ These can be combined with the PSP yaml above and NetworkPolicy yaml below into apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: - name: cis1.5-compliant-psp + name: restricted-psp spec: privileged: false allowPrivilegeEscalation: false @@ -100,7 +103,9 @@ spec: - 'projected' - 'secret' - 'downwardAPI' + - 'csi' - 'persistentVolumeClaim' + - 'ephemeral' hostNetwork: false hostIPC: false hostPID: false @@ -123,7 +128,7 @@ spec: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: - name: psp:restricted + name: psp:restricted-psp labels: addonmanager.kubernetes.io/mode: EnsureExists rules: @@ -131,62 +136,23 @@ rules: resources: ['podsecuritypolicies'] verbs: ['use'] resourceNames: - - cis1.5-compliant-psp + - restricted-psp --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: - name: default:restricted + name: default:restricted-psp labels: addonmanager.kubernetes.io/mode: EnsureExists roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole - name: psp:restricted + name: psp:restricted-psp subjects: - kind: Group name: system:authenticated apiGroup: rbac.authorization.k8s.io --- -kind: NetworkPolicy -apiVersion: networking.k8s.io/v1 -metadata: - name: intra-namespace - namespace: kube-system -spec: - podSelector: {} - ingress: - - from: - - namespaceSelector: - matchLabels: - name: kube-system ---- -kind: NetworkPolicy -apiVersion: networking.k8s.io/v1 -metadata: - name: intra-namespace - namespace: default -spec: - podSelector: {} - ingress: - - from: - - namespaceSelector: - matchLabels: - name: default ---- -kind: NetworkPolicy -apiVersion: networking.k8s.io/v1 -metadata: - name: intra-namespace - namespace: kube-public -spec: - podSelector: {} - ingress: - - from: - - namespaceSelector: - matchLabels: - name: kube-public ---- apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: @@ -253,6 +219,45 @@ subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: intra-namespace + namespace: kube-system +spec: + podSelector: {} + ingress: + - from: + - namespaceSelector: + matchLabels: + name: kube-system +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: intra-namespace + namespace: default +spec: + podSelector: {} + ingress: + - from: + - namespaceSelector: + 
matchLabels: + name: default +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: intra-namespace + namespace: kube-public +spec: + podSelector: {} + ingress: + - from: + - namespaceSelector: + matchLabels: + name: kube-public ``` > **Note:** The Kubernetes critical additions such as CNI, DNS, and Ingress are ran as pods in the `kube-system` namespace. Therefore, this namespace will have a policy that is less restrictive so that these components can run properly. @@ -263,7 +268,7 @@ subjects: CIS requires that all namespaces have a network policy applied that reasonably limits traffic into namespaces and pods. -Here's an example of a compliant network policy. +Here is an example of a compliant network policy. ```yaml kind: NetworkPolicy @@ -302,7 +307,7 @@ spec: - Ingress ``` -The metrics-server and Traefik ingress controller will be blocked by default if network policies are not created to allow access. Traefik v1 as packaged in K3s version 1.20 and below uses different labels than Traefik v2; ensure that you only use the sample yaml below that is associated with the version of Traefik present on your cluster. +The metrics-server and Traefik ingress controller will be blocked by default if network policies are not created to allow access. Traefik v1 as packaged in K3s version 1.20 and below uses different labels than Traefik v2. Ensure that you only use the sample yaml below that is associated with the version of Traefik present on your cluster. ```yaml apiVersion: networking.k8s.io/v1 @@ -366,24 +371,67 @@ spec: > **Note:** Operators must manage network policies as normal for additional namespaces that are created. +### API Server audit configuration + +CIS requirements 1.2.22 to 1.2.25 are related to configuring audit logs for the API Server. K3s doesn't create by default the log directory and audit policy, as auditing requirements are specific to each user's policies and environment. + +The log directory, ideally, must be created before starting K3s. A restrictive access permission is recommended to avoid leaking potential sensitive information. + +```bash +sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/logs +``` + +A starter audit policy to log request metadata is provided below. The policy should be written to a file named `audit.yaml` in `/var/lib/rancher/k3s/server` directory. Detailed information about policy configuration for the API server can be found in the Kubernetes [documentation](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/). + +```yaml +apiVersion: audit.k8s.io/v1 +kind: Policy +rules: +- level: Metadata +``` + +Both configurations must be passed as arguments to the API Server as: + +```bash +--kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log' +--kube-apiserver-arg='audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml' +``` + +If the configurations are created after K3s is installed, they must be added to K3s' systemd service in `/etc/systemd/system/k3s.service`. + +```bash +ExecStart=/usr/local/bin/k3s \ + server \ + '--kube-apiserver-arg=audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log' \ + '--kube-apiserver-arg=audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml' \ +``` + +K3s must be restarted to load the new configuration. + +```bash +sudo systemctl daemon-reload +sudo systemctl restart k3s.service +``` + +Additional information about CIS requirements 1.2.22 to 1.2.25 is presented below. 
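+As a quick sanity check before reviewing those controls, you can confirm that audit events are now being written to the configured location (a sketch; the path matches the `audit-log-path` flag set above):
+
+```bash
+# Show the most recent audit events recorded by the API Server.
+sudo tail -n 5 /var/lib/rancher/k3s/server/logs/audit.log
+```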
+ ## Known Issues The following are controls that K3s currently does not pass by default. Each gap will be explained, along with a note clarifying whether it can be passed through manual operator intervention, or if it will be addressed in a future release of K3s. - ### Control 1.2.15 Ensure that the admission control plugin `NamespaceLifecycle` is set.
Rationale
-Setting admission control policy to NamespaceLifecycle ensures that objects cannot be created in non-existent namespaces, and that namespaces undergoing termination are not used for creating the new objects. This is recommended to enforce the integrity of the namespace termination process and also for the availability of the newer objects.
+Setting admission control policy to `NamespaceLifecycle` ensures that objects cannot be created in non-existent namespaces, and that namespaces undergoing termination are not used for creating new objects. This is recommended to enforce the integrity of the namespace termination process and also for the availability of newer objects.

This can be remediated by adding `NamespaceLifecycle` to the `enable-admission-plugins=` list and passing that list to the `--kube-apiserver-arg=` argument of `k3s server`. An example can be found below.
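+For example, extending the admission plugin list shown earlier in this guide (a sketch; merge `NamespaceLifecycle` into whatever `enable-admission-plugins` value you already pass):
+
+```bash
+k3s server \
+  --kube-apiserver-arg='enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount,NamespaceLifecycle'
+```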
-### Control 1.2.16 (mentioned above) +### Control 1.2.16 Ensure that the admission control plugin `PodSecurityPolicy` is set.
Rationale
-A Pod Security Policy is a cluster-level resource that controls the actions that a pod can perform and what it has the ability to access. The PodSecurityPolicy objects define a set of conditions that a pod must run with in order to be accepted into the system. Pod Security Policies are comprised of settings and strategies that control the security features a pod has access to and hence this must be used to control pod access permissions.
+A Pod Security Policy is a cluster-level resource that controls the actions that a pod can perform and what it has the ability to access. The `PodSecurityPolicy` objects define a set of conditions that a pod must run with in order to be accepted into the system. Pod Security Policies comprise the settings and strategies that control the security features a pod has access to, and hence they must be used to control pod access permissions.

This can be remediated by adding `PodSecurityPolicy` to the `enable-admission-plugins=` list and passing that list to the `--kube-apiserver-arg=` argument of `k3s server`. An example can be found below.
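+Once the plugin is enabled and the policies from the Pod Security Policies section above are applied, you can verify that they are registered (assuming `kubectl` access to the cluster):
+
+```bash
+# Lists the PodSecurityPolicies currently installed in the cluster.
+kubectl get podsecuritypolicy
+```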
@@ -446,16 +494,18 @@ This can be remediated by passing this argument as a value to the `--kube-apiser Ensure that the `--encryption-provider-config` argument is set as appropriate.
Rationale -Where `etcd` encryption is used, it is important to ensure that the appropriate set of encryption providers is used. Currently, the aescbc, kms and secretbox are likely to be appropriate options. +`etcd` is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted at rest to avoid any disclosures. + +Detailed steps on how to configure secrets encryption in K3s are available in [Secrets Encryption](../secrets_encryption/).
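+In K3s, rather than writing a provider configuration file by hand, encryption of secrets at rest can be enabled with the built-in flag used in the example invocation at the end of this guide:
+
+```bash
+# Enables encryption of secrets at rest in the datastore.
+k3s server --secrets-encryption=true
+```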
### Control 1.2.34 Ensure that encryption providers are appropriately configured.
Rationale
-`etcd` is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted at rest to avoid any disclosures.
+Where `etcd` encryption is used, it is important to ensure that the appropriate set of encryption providers is used. Currently, the `aescbc`, `kms`, and `secretbox` providers are likely to be appropriate options.

-This can be remediated by passing a valid configuration to `k3s` as outlined above.
+This can be remediated by passing a valid configuration to `k3s` as outlined above. Detailed steps on how to configure secrets encryption in K3s are available in [Secrets Encryption](../secrets_encryption/).
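+For reference, a minimal sketch of such a provider configuration using the `aescbc` provider (the key value is a placeholder; when the `--secrets-encryption` flag mentioned above is used, K3s manages an equivalent configuration for you):
+
+```yaml
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets
+    providers:
+      - aescbc:
+          keys:
+            - name: key1
+              # Placeholder: use a randomly generated, base64-encoded 32-byte key.
+              secret: <base64-encoded 32-byte key>
+      - identity: {}
+```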
### Control 1.3.1 @@ -468,7 +518,7 @@ This can be remediated by passing this argument as a value to the `--kube-apiser ### Control 3.2.1 -Ensure that a minimal audit policy is created (Scored) +Ensure that a minimal audit policy is created.
Rationale Logging is an important detective control for all systems, to detect potential unauthorized access. @@ -476,7 +526,6 @@ Logging is an important detective control for all systems, to detect potential u This can be remediated by passing controls 1.2.22 - 1.2.25 and verifying their efficacy.
- ### Control 4.2.7 Ensure that the `--make-iptables-util-chains` argument is set to true.
@@ -487,24 +536,23 @@ This can be remediated by passing this argument as a value to the `--kube-apiser
### Control 5.1.5

-Ensure that default service accounts are not actively used. (Scored)
+Ensure that default service accounts are not actively used.
Rationale - -Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. +Kubernetes provides a `default` service account which is used by cluster workloads where no specific service account is assigned to the pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments. -
-The remediation for this is to update the `automountServiceAccountToken` field to `false` for the `default` service account in each namespace. +This can be remediated by updating the `automountServiceAccountToken` field to `false` for the `default` service account in each namespace. For `default` service accounts in the built-in namespaces (`kube-system`, `kube-public`, `kube-node-lease`, and `default`), K3s does not automatically do this. You can manually update this field on these service accounts to pass the control. + ## Control Plane Execution and Arguments -Listed below are the K3s control plane components and the arguments they're given at start, by default. Commented to their right is the CIS 1.5 control that they satisfy. +Listed below are the K3s control plane components and the arguments they are given at start, by default. Commented to their right is the CIS 1.6 control that they satisfy. ```bash kube-apiserver @@ -604,13 +652,14 @@ kubelet --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key # 4.2.10 ``` -The command below is an example of how the outlined remediations can be applied. +The command below is an example of how the outlined remediations can be applied to harden K3s. ```bash k3s server \ --protect-kernel-defaults=true \ --secrets-encryption=true \ - --kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log' \ + --kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log' \ + --kube-apiserver-arg='audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml' \ --kube-apiserver-arg='audit-log-maxage=30' \ --kube-apiserver-arg='audit-log-maxbackup=10' \ --kube-apiserver-arg='audit-log-maxsize=100' \ @@ -625,4 +674,4 @@ k3s server \ ## Conclusion -If you have followed this guide, your K3s cluster will be configured to comply with the CIS Kubernetes Benchmark. You can review the [CIS Benchmark Self-Assessment Guide](../self_assessment/) to understand the expectations of each of the benchmarks and how you can do the same on your cluster. +If you have followed this guide, your K3s cluster will be configured to comply with the CIS Kubernetes Benchmark. You can review the [CIS Benchmark Self-Assessment Guide](../self_assessment/) to understand the expectations of each of the benchmark's checks and how you can do the same on your cluster. diff --git a/content/k3s/latest/en/security/self_assessment/_index.md b/content/k3s/latest/en/security/self_assessment/_index.md index ff7ba082384..6471a95fb83 100644 --- a/content/k3s/latest/en/security/self_assessment/_index.md +++ b/content/k3s/latest/en/security/self_assessment/_index.md @@ -1,18 +1,17 @@ --- -title: "CIS Self Assessment Guide" +title: CIS Self Assessment Guide weight: 90 --- - -### CIS Kubernetes Benchmark v1.5 - K3s v1.17, v1.18, & v1.19 +### CIS Kubernetes Benchmark v1.6 - K3s with Kubernetes v1.17 to v1.21 #### Overview -This document is a companion to the K3s security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of K3s, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the CIS Kubernetes benchmark. It is to be used by K3s operators, security teams, auditors, and decision-makers. +This document is a companion to the K3s security hardening guide. 
The hardening guide provides prescriptive guidance for hardening a production installation of K3s, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the CIS Kubernetes Benchmark. It is to be used by K3s operators, security teams, auditors, and decision-makers.

-This guide is specific to the **v1.17**, **v1.18**, and **v1.19** release line of K3s and the **v1.5.1** release of the CIS Kubernetes Benchmark.
+This guide is specific to the **v1.17**, **v1.18**, **v1.19**, **v1.20**, and **v1.21** release lines of K3s and the **v1.6** release of the CIS Kubernetes Benchmark.

-For more detail about each control, including more detailed descriptions and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.5. You can download the benchmark after logging in to [CISecurity.org](https://www.cisecurity.org/benchmark/kubernetes/).
+For more information about each control, including detailed descriptions and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.6. You can download the benchmark, after creating a free account, from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).

#### Testing controls methodology

@@ -24,2474 +23,3056 @@ These are the possible results for each control:

- **Pass** - The K3s cluster under test passed the audit outlined in the benchmark.
- **Not Applicable** - The control is not applicable to K3s because of how it is designed to operate. The remediation section will explain why this is so.
-- **Not Scored - Operator Dependent** - The control is not scored in the CIS benchmark and it depends on the cluster's use case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure K3s does not prevent their implementation, but no further configuration or auditing of the cluster under test has been performed.
+- **Warn** - The control is a manual check in the CIS benchmark, and it depends on the cluster's use case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure K3s does not prevent their implementation, but no further configuration or auditing of the cluster under test has been performed.

This guide makes the assumption that K3s is running as a Systemd unit. Your installation may vary and will require you to adjust the "audit" commands to fit your scenario.

+> NOTE: Only `automated` tests (previously called `scored`) are covered in this guide.
+
### Controls

---
-## 1 Master Node Security Configuration
-### 1.1 Master Node Configuration Files
-#### 1.1.1
-Ensure that the API server pod specification file permissions are set to `644` or more restrictive (Scored)
-
-Rationale -The API server pod specification file controls various parameters that set the behavior of the API server. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. -
+## 1.1 Master Node Configuration Files +### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated) + **Result:** Not Applicable +**Remediation:** +Run the below command (based on the file location on your system) on the +master node. +For example, chmod 644 /etc/kubernetes/manifests/kube-apiserver.yaml + +### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated) -#### 1.1.2 -Ensure that the API server pod specification file ownership is set to `root:root` (Scored) -
-Rationale -The API server pod specification file controls various parameters that set the behavior of the API server. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`. -
**Result:** Not Applicable +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chown root:root /etc/kubernetes/manifests/kube-apiserver.yaml + +### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated) -#### 1.1.3 -Ensure that the controller manager pod specification file permissions are set to `644` or more restrictive (Scored) -
-Rationale -The controller manager pod specification file controls various parameters that set the behavior of the Controller Manager on the master node. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. -
**Result:** Not Applicable +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chmod 644 /etc/kubernetes/manifests/kube-controller-manager.yaml + +### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated) -#### 1.1.4 -Ensure that the controller manager pod specification file ownership is set to `root:root` (Scored) -
-Rationale -The controller manager pod specification file controls various parameters that set the behavior of various components of the master node. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root. -
**Result:** Not Applicable +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml + +### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated) -#### 1.1.5 -Ensure that the scheduler pod specification file permissions are set to `644` or more restrictive (Scored) -
-Rationale -The scheduler pod specification file controls various parameters that set the behavior of the Scheduler service in the master node. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. -
**Result:** Not Applicable +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chmod 644 /etc/kubernetes/manifests/kube-scheduler.yaml + +### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated) -#### 1.1.6 -Ensure that the scheduler pod specification file ownership is set to `root:root` (Scored) -
-Rationale -The scheduler pod specification file controls various parameters that set the behavior of the kube-scheduler service in the master node. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root. -
**Result:** Not Applicable +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chown root:root /etc/kubernetes/manifests/kube-scheduler.yaml + +### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated) -#### 1.1.7 -Ensure that the etcd pod specification file permissions are set to `644` or more restrictive (Scored) -
-Rationale -The etcd pod specification file /var/lib/rancher/k3s/agent/pod-manifests/etcd.yaml controls various parameters that set the behavior of the etcd service in the master node. etcd is a highly- available key-value store which Kubernetes uses for persistent storage of all of its REST API object. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. -
**Result:** Not Applicable +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chmod 644 /etc/kubernetes/manifests/etcd.yaml + +### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated) -#### 1.1.8 -Ensure that the etcd pod specification file ownership is set to `root:root` (Scored) -
-Rationale -The etcd pod specification file /var/lib/rancher/k3s/agent/pod-manifests/etcd.yaml controls various parameters that set the behavior of the etcd service in the master node. etcd is a highly- available key-value store which Kubernetes uses for persistent storage of all of its REST API object. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root. -
**Result:** Not Applicable +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chown root:root /etc/kubernetes/manifests/etcd.yaml + +### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual) -#### 1.1.9 -Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Not Scored) -
-Rationale -Container Network Interface provides various networking options for overlay networking. You should consult their documentation and restrict their respective file permissions to maintain the integrity of those files. Those files should be writable by only the administrators on the system. -
**Result:** Not Applicable +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chmod 644 + +### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual) -#### 1.1.10 -Ensure that the Container Network Interface file ownership is set to root:root (Not Scored) -
-Rationale -Container Network Interface provides various networking options for overlay networking. You should consult their documentation and restrict their respective file permissions to maintain the integrity of those files. Those files should be owned by root:root. -
**Result:** Not Applicable +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chown root:root -#### 1.1.11 -Ensure that the etcd data directory permissions are set to 700 or more restrictive (Scored) -
-Rationale -etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. This data directory should be protected from any unauthorized reads or writes. It should not be readable or writable by any group members or the world. -
+### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated) -**Result:** Pass -**Audit:** +**Result:** pass + +**Remediation:** +On the etcd server node, get the etcd data directory, passed as an argument --data-dir, +from the below command: +ps -ef | grep etcd +Run the below command (based on the etcd data directory found above). For example, +chmod 700 /var/lib/etcd + +**Audit Script:** `check_for_k3s_etcd.sh` + ```bash -stat -c %a /var/lib/rancher/k3s/server/db/etcd +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo "$(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo "$(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 1.1.11 +``` + +**Expected Result**: + +```console +'700' is equal to '700' +``` + +**Returned Value**: + +```console 700 ``` -**Remediation:** -K3s manages the etcd data directory and sets its permissions to 700. No manual remediation needed. (only relevant when Etcd is used for the data store) +### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) -#### 1.1.12 -Ensure that the etcd data directory ownership is set to `etcd:etcd` (Scored) -
-Rationale -etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. This data directory should be protected from any unauthorized reads or writes. It should be owned by etcd:etcd. -
- **Result:** Not Applicable +**Remediation:** +On the etcd server node, get the etcd data directory, passed as an argument --data-dir, +from the below command: +ps -ef | grep etcd +Run the below command (based on the etcd data directory found above). +For example, chown etcd:etcd /var/lib/etcd -#### 1.1.13 -Ensure that the `admin.conf` file permissions are set to `644` or more restrictive (Scored) -
-Rationale -The admin.conf is the administrator kubeconfig file defining various settings for the administration of the cluster. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. +### 1.1.13 Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated) -In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/admin.kubeconfig`. -
-**Result:** Pass +**Result:** Not Applicable **Remediation:** -By default, K3s creates the directory and files with the expected permissions of `644`. No manual remediation should be necessary. +Run the below command (based on the file location on your system) on the master node. +For example, +chmod 644 /var/lib/rancher/k3s/server/cred/admin.kubeconfig + +### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) -#### 1.1.14 -Ensure that the `admin.conf` file ownership is set to `root:root` (Scored) -
-Rationale -The admin.conf file contains the admin credentials for the cluster. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root. - -In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/admin.kubeconfig`. -
- -**Result:** Pass +**Result:** pass **Remediation:** -By default, K3s creates the directory and files with the expected ownership of `root:root`. No manual remediation should be necessary. - - -#### 1.1.15 -Ensure that the `scheduler.conf` file permissions are set to `644` or more restrictive (Scored) -
-Rationale - -The scheduler.conf file is the kubeconfig file for the Scheduler. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. - -In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig`. -
- -**Result:** Pass - -**Remediation:** -By default, K3s creates the directory and files with the expected permissions of `644`. No manual remediation should be necessary. - - -#### 1.1.16 -Ensure that the `scheduler.conf` file ownership is set to `root:root` (Scored) -
-Rationale -The scheduler.conf file is the kubeconfig file for the Scheduler. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root. - -In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig`. -
- -**Result:** Pass - -**Remediation:** -By default, K3s creates the directory and files with the expected ownership of `root:root`. No manual remediation should be necessary. - - -#### 1.1.17 -Ensure that the `controller.kubeconfig` file permissions are set to `644` or more restrictive (Scored) -
-Rationale -The controller.kubeconfig file is the kubeconfig file for the Scheduler. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. - -In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/controller.kubeconfig`. -
- -**Result:** Pass - -**Remediation:** -By default, K3s creates the directory and files with the expected permissions of `644`. No manual remediation should be necessary. - - -#### 1.1.18 -Ensure that the `controller.kubeconfig` file ownership is set to `root:root` (Scored) -
-Rationale -The controller.kubeconfig file is the kubeconfig file for the Scheduler. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root. - -In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/controller.kubeconfig`. -
- -**Result:** Pass - -**Remediation:** -By default, K3s creates the directory and files with the expected ownership of `root:root`. No manual remediation should be necessary. - - -#### 1.1.19 -Ensure that the Kubernetes PKI directory and file ownership is set to `root:root` (Scored) -
-Rationale -Kubernetes makes use of a number of certificates as part of its operation. You should set the ownership of the directory containing the PKI information and all files in that directory to maintain their integrity. The directory and files should be owned by root:root. -
- -**Result:** Pass +Run the below command (based on the file location on your system) on the master node. +For example, +chown root:root /etc/kubernetes/admin.conf **Audit:** + ```bash -stat -c %U:%G /var/lib/rancher/k3s/server/tls +/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi' +``` + +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console root:root ``` +### 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated) + + +**Result:** pass + **Remediation:** -By default, K3s creates the directory and files with the expected ownership of `root:root`. No manual remediation should be necessary. - - -#### 1.1.20 -Ensure that the Kubernetes PKI certificate file permissions are set to `644` or more restrictive (Scored) -
-Rationale -Kubernetes makes use of a number of certificate files as part of the operation of its components. The permissions on these files should be set to 644 or more restrictive to protect their integrity. -
- -**Result:** Pass +Run the below command (based on the file location on your system) on the master node. +For example, +chmod 644 scheduler **Audit:** -Run the below command on the master node. ```bash -stat -c %n\ %a /var/lib/rancher/k3s/server/tls/*.crt +/bin/sh -c 'if test -e scheduler; then stat -c permissions=%a scheduler; fi' ``` -Verify that the permissions are `644` or more restrictive. +**Expected Result**: + +```console +'permissions' is not present +``` + +### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) + + +**Result:** pass **Remediation:** -By default, K3s creates the files with the expected permissions of `644`. No manual remediation is needed. - - -#### 1.1.21 -Ensure that the Kubernetes PKI key file permissions are set to `600` (Scored) -
-Rationale -Kubernetes makes use of a number of key files as part of the operation of its components. The permissions on these files should be set to 600 to protect their integrity and confidentiality. -
- -**Result:** Pass +Run the below command (based on the file location on your system) on the master node. +For example, +chown root:root scheduler **Audit:** -Run the below command on the master node. ```bash -stat -c %n\ %a /var/lib/rancher/k3s/server/tls/*.key +/bin/sh -c 'if test -e scheduler; then stat -c %U:%G scheduler; fi' ``` -Verify that the permissions are `600` or more restrictive. +**Expected Result**: + +```console +'root:root' is not present +``` + +### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated) + + +**Result:** pass **Remediation:** -By default, K3s creates the files with the expected permissions of `600`. No manual remediation is needed. - - -### 1.2 API Server -This section contains recommendations relating to API server configuration flags - - -#### 1.2.1 -Ensure that the `--anonymous-auth` argument is set to false (Not Scored) - -
-Rationale -When enabled, requests that are not rejected by other configured authentication methods are treated as anonymous requests. These requests are then served by the API server. You should rely on authentication to authorize access and disallow anonymous requests. - -If you are using RBAC authorization, it is generally considered reasonable to allow anonymous access to the API Server for health checks and discovery purposes, and hence this recommendation is not scored. However, you should consider whether anonymous discovery is an acceptable risk for your purposes. -
- -**Result:** Pass +Run the below command (based on the file location on your system) on the master node. +For example, +chmod 644 controllermanager **Audit:** -Run the below command on the master node. ```bash -journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" +/bin/sh -c 'if test -e controllermanager; then stat -c permissions=%a controllermanager; fi' ``` -Verify that `--anonymous-auth=false` is present. +**Expected Result**: + +```console +'permissions' is not present +``` + +### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) + + +**Result:** pass **Remediation:** -By default, K3s kube-apiserver is configured to run with this flag and value. No manual remediation is needed. - -#### 1.2.2 -Ensure that the `--basic-auth-file` argument is not set (Scored) -
-Rationale -Basic authentication uses plaintext credentials for authentication. Currently, the basic authentication credentials last indefinitely, and the password cannot be changed without restarting the API server. The basic authentication is currently supported for convenience. Hence, basic authentication should not be used. -
- -**Result:** Pass +Run the below command (based on the file location on your system) on the master node. +For example, +chown root:root controllermanager **Audit:** -Run the below command on the master node. ```bash -journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "basic-auth-file" +stat -c %U:%G /var/lib/rancher/k3s/server/tls ``` -Verify that the `--basic-auth-file` argument does not exist. +**Expected Result**: + +```console +'root:root' is equal to 'root:root' +``` + +**Returned Value**: + +```console +root:root +``` + +### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) + + +**Result:** pass **Remediation:** -By default, K3s does not run with basic authentication enabled. No manual remediation is needed. - - -#### 1.2.3 -Ensure that the `--token-auth-file` parameter is not set (Scored) - -
-Rationale -The token-based authentication utilizes static tokens to authenticate requests to the apiserver. The tokens are stored in clear-text in a file on the apiserver, and cannot be revoked or rotated without restarting the apiserver. Hence, do not use static token-based authentication. -
- -**Result:** Pass +Run the below command (based on the file location on your system) on the master node. +For example, +chown -R root:root /etc/kubernetes/pki/ **Audit:** -Run the below command on the master node. ```bash -journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "token-auth-file" +find /etc/kubernetes/pki/ | xargs stat -c %U:%G ``` -Verify that the `--token-auth-file` argument does not exist. +**Expected Result**: + +```console +'root:root' is not present +``` + +### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual) + + +**Result:** pass **Remediation:** -By default, K3s does not run with basic authentication enabled. No manual remediation is needed. +Run the below command (based on the file location on your system) on the master node. +For example, +chmod -R 644 /etc/kubernetes/pki/*.crt -#### 1.2.4 -Ensure that the `--kubelet-https` argument is set to true (Scored) +**Audit:** + +```bash +stat -c %n %a /var/lib/rancher/k3s/server/tls/*.crt +``` + +**Expected Result**: + +```console +'permissions' is not present +``` + +### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual) + + +**Result:** pass + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, +chmod -R 600 /etc/kubernetes/pki/*.key + +**Audit:** + +```bash +stat -c %n %a /var/lib/rancher/k3s/server/tls/*.key +``` + +**Expected Result**: + +```console +'permissions' is not present +``` + +## 1.2 API Server +### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual) + + +**Result:** warn + +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the below parameter. +--anonymous-auth=false + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth' +``` + +### 1.2.2 Ensure that the --basic-auth-file argument is not set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and configure alternate mechanisms for authentication. Then, +edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and remove the --basic-auth-file= parameter. + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'basic-auth-file' +``` + +**Expected Result**: + +```console +'--basic-auth-file' is not present +``` + +### 1.2.3 Ensure that the --token-auth-file parameter is not set (Automated) + + +**Result:** pass + +**Remediation:** +Follow the documentation and configure alternate mechanisms for authentication. Then, +edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and remove the --token-auth-file= parameter. + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'token-auth-file' +``` + +**Expected Result**: + +```console +'--token-auth-file' is not present +``` + +### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated) -
-Rationale -Connections from apiserver to kubelets could potentially carry sensitive data such as secrets and keys. It is thus important to use in-transit encryption for any communication between the apiserver and kubelets. -
**Result:** Not Applicable +**Remediation:** +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and remove the --kubelet-https parameter. + +### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the +apiserver and kubelets. Then, edit API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the +kubelet client certificate and key parameters as below. +--kubelet-client-certificate= +--kubelet-client-key= + **Audit:** -Run the below command on the master node. ```bash -journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "kubelet-https" +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority' ``` -Verify that the `--kubelet-https` argument does not exist. +**Expected Result**: + +```console +'--kubelet-client-certificate' is not present AND '--kubelet-client-key' is not present +``` + +### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) + + +**Result:** pass **Remediation:** -By default, K3s kube-apiserver doesn't run with the `--kubelet-https` parameter as it runs with TLS. No manual remediation is needed. - -#### 1.2.5 -Ensure that the `--kubelet-client-certificate` and `--kubelet-client-key` arguments are set as appropriate (Scored) - -
-Rationale -The apiserver, by default, does not authenticate itself to the kubelet's HTTPS endpoints. The requests from the apiserver are treated anonymously. You should set up certificate-based kubelet authentication to ensure that the apiserver authenticates itself to kubelets when submitting requests. -
- -**Result:** Pass +Follow the Kubernetes documentation and setup the TLS connection between +the apiserver and kubelets. Then, edit the API server pod specification file +/etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the +--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority. +--kubelet-certificate-authority= **Audit:** -Run the below command on the master node. ```bash -journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep -E 'kubelet-client-certificate|kubelet-client-key' +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority' ``` -Verify that the `--kubelet-client-certificate` and `--kubelet-client-key` arguments exist and they are set as appropriate. +**Expected Result**: + +```console +'--kubelet-certificate-authority' is not present +``` + +### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass **Remediation:** -By default, K3s kube-apiserver is ran with these arguments for secure communication with kubelet. No manual remediation is needed. - - -#### 1.2.6 -Ensure that the `--kubelet-certificate-authority` argument is set as appropriate (Scored) -
-Rationale -The connections from the apiserver to the kubelet are used for fetching logs for pods, attaching (through kubectl) to running pods, and using the kubelet’s port-forwarding functionality. These connections terminate at the kubelet’s HTTPS endpoint. By default, the apiserver does not verify the kubelet’s serving certificate, which makes the connection subject to man-in-the-middle attacks, and unsafe to run over untrusted and/or public networks. -
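+Because K3s starts the apiserver with `--authorization-mode=Node,RBAC` by default, this check normally passes without intervention. A sketch of a quick pass/fail test derived from this section's audit command (the PASS/FAIL output is illustrative, not part of the benchmark):
+
+```bash
+# Fail if AlwaysAllow appears anywhere in the authorization-mode flag value.
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 \
+  | grep -q 'authorization-mode=[^ ]*AlwaysAllow' && echo FAIL || echo PASS
+```
+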
-
-**Result:** Pass
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the --authorization-mode parameter to values other than AlwaysAllow.
+One such example could be as below.
+--authorization-mode=RBAC

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "kubelet-certificate-authority"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
```

-Verify that the `--kubelet-certificate-authority` argument exists and is set as appropriate.
+**Expected Result**:
+
+```console
+'--authorization-mode' does not have 'AlwaysAllow'
+```
+
+### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s kube-apiserver is ran with this argument for secure communication with kubelet. No manual remediation is needed.
-
-
-#### 1.2.7
-Ensure that the `--authorization-mode` argument is not set to `AlwaysAllow` (Scored)
-
-Rationale -The API Server, can be configured to allow all requests. This mode should not be used on any production cluster. -
-
-**Result:** Pass
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the --authorization-mode parameter to a value that includes Node.
+--authorization-mode=Node,RBAC

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
```

-Verify that the argument value doesn't contain `AlwaysAllow`.
+**Expected Result**:
+
+```console
+'--authorization-mode' has 'Node'
+```
+
+### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s sets `Node,RBAC` as the parameter to the `--authorization-mode` argument. No manual remediation is needed.
-
-
-#### 1.2.8
-Ensure that the `--authorization-mode` argument includes `Node` (Scored)
-
-Rationale -The Node authorization mode only allows kubelets to read Secret, ConfigMap, PersistentVolume, and PersistentVolumeClaim objects associated with their nodes. -
-
-**Result:** Pass
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the --authorization-mode parameter to a value that includes RBAC,
+for example:
+--authorization-mode=Node,RBAC

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'
```

-Verify `Node` exists as a parameter to the argument.
+**Expected Result**:
+
+```console
+'--authorization-mode' has 'RBAC'
+```
+
+### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Manual)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s sets `Node,RBAC` as the parameter to the `--authorization-mode` argument. No manual remediation is needed.
-
-
-#### 1.2.9
-Ensure that the `--authorization-mode` argument includes `RBAC` (Scored)
-
-Rationale -Role Based Access Control (RBAC) allows fine-grained control over the operations that different entities can perform on different objects in the cluster. It is recommended to use the RBAC authorization mode. -
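+K3s does not enable EventRateLimit by default. Additional apiserver flags are passed with `--kube-apiserver-arg` rather than by editing a manifest; a sketch, assuming the admission configuration has been written to /etc/rancher/k3s/admission.yaml (an example path, not a K3s default):
+
+```bash
+# Enable the plugin and point the apiserver at the admission config file.
+k3s server \
+  --kube-apiserver-arg='enable-admission-plugins=NodeRestriction,EventRateLimit' \
+  --kube-apiserver-arg='admission-control-config-file=/etc/rancher/k3s/admission.yaml'
+```
+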
-
-**Result:** Pass
+Follow the Kubernetes documentation and set the desired limits in a configuration file.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+and set the below parameters.
+--enable-admission-plugins=...,EventRateLimit,...
+--admission-control-config-file=

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
```

-Verify `RBAC` exists as a parameter to the argument.
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'EventRateLimit'
+```
+
+### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s sets `Node,RBAC` as the parameter to the `--authorization-mode` argument. No manual remediation is needed.
-
-
-#### 1.2.10
-Ensure that the admission control plugin EventRateLimit is set (Not Scored)
-
-Rationale -Using `EventRateLimit` admission control enforces a limit on the number of events that the API Server will accept in a given time slice. A misbehaving workload could overwhelm and DoS the API Server, making it unavailable. This particularly applies to a multi-tenant cluster, where there might be a small percentage of misbehaving tenants which could have a significant impact on the performance of the cluster overall. Hence, it is recommended to limit the rate of events that the API server will accept. - -Note: This is an Alpha feature in the Kubernetes 1.15 release. -
-
-**Result:** **Not Scored - Operator Dependent**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and either remove the --enable-admission-plugins parameter, or set it to a
+value that does not include AlwaysAdmit.

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
```

-Verify that the `--enable-admission-plugins` argument is set to a value that includes EventRateLimit.
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' does not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
+```
+
+### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s only sets `NodeRestriction,PodSecurityPolicy` as the parameter to the `--enable-admission-plugins` argument.
-To configure this, follow the Kubernetes documentation and set the desired limits in a configuration file. Then refer to K3s's documentation to see how to supply additional api server configuration via the kube-apiserver-arg parameter.
-
-
-#### 1.2.11
-Ensure that the admission control plugin `AlwaysAdmit` is not set (Scored)
-
-Rationale -Setting admission control plugin AlwaysAdmit allows all requests and do not filter any requests. - -The AlwaysAdmit admission controller was deprecated in Kubernetes v1.13. Its behavior was equivalent to turning off all admission controllers. -
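+The same flag can also be set declaratively in the K3s configuration file instead of on the command line. A sketch, assuming the default config location /etc/rancher/k3s/config.yaml and an example plugin list:
+
+```bash
+# Append a kube-apiserver-arg entry to the K3s config file, then restart k3s.
+cat <<'EOF' >> /etc/rancher/k3s/config.yaml
+kube-apiserver-arg:
+  - "enable-admission-plugins=NodeRestriction,AlwaysPullImages"
+EOF
+```
+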
-
-**Result:** Pass
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the --enable-admission-plugins parameter to include
+AlwaysPullImages.
+--enable-admission-plugins=...,AlwaysPullImages,...

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
```

-Verify that if the `--enable-admission-plugins` argument is set, its value does not include `AlwaysAdmit`.
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'AlwaysPullImages'
+```
+
+### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s only sets `NodeRestriction,PodSecurityPolicy` as the parameter to the `--enable-admission-plugins` argument. No manual remediation needed.
-
-
-#### 1.2.12
-Ensure that the admission control plugin AlwaysPullImages is set (Not Scored)
-
-Rationale -Setting admission control policy to `AlwaysPullImages` forces every new pod to pull the required images every time. In a multi-tenant cluster users can be assured that their private images can only be used by those who have the credentials to pull them. Without this admission control policy, once an image has been pulled to a node, any pod from any user can use it simply by knowing the image’s name, without any authorization check against the image ownership. When this plug-in is enabled, images are always pulled prior to starting containers, which means valid credentials are required. - -
-
-**Result:** **Not Scored - Operator Dependent**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the --enable-admission-plugins parameter to include
+SecurityContextDeny, unless PodSecurityPolicy is already in place.
+--enable-admission-plugins=...,SecurityContextDeny,...

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
```

-Verify that the `--enable-admission-plugins` argument is set to a value that includes `AlwaysPullImages`.
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'SecurityContextDeny' OR '--enable-admission-plugins' has 'PodSecurityPolicy'
+```
+
+### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s only sets `NodeRestriction,PodSecurityPolicy` as the parameter to the `--enable-admission-plugins` argument.
-To configure this, follow the Kubernetes documentation and set the desired limits in a configuration file. Then refer to K3s's documentation to see how to supply additional api server configuration via the kube-apiserver-arg parameter.
-
-#### 1.2.13
-Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Not Scored)
-
-Rationale -SecurityContextDeny can be used to provide a layer of security for clusters which do not have PodSecurityPolicies enabled. -
-
-**Result:** Not Scored
+Follow the documentation and create ServiceAccount objects as per your environment.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and ensure that the --disable-admission-plugins parameter is set to a
+value that does not include ServiceAccount.

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'ServiceAccount'
```

-Verify that the `--enable-admission-plugins` argument is set to a value that includes `SecurityContextDeny`, if `PodSecurityPolicy` is not included.
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is not present OR '--disable-admission-plugins' does not have 'ServiceAccount'
+```
+
+### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
+
+
+**Result:** pass

**Remediation:**
-K3s would need to have the `SecurityContextDeny` admission plugin enabled by passing it as an argument to K3s. `--kube-apiserver-arg='enable-admission-plugins=SecurityContextDeny`
-
-
-#### 1.2.14
-Ensure that the admission control plugin `ServiceAccount` is set (Scored)
-
-Rationale -When you create a pod, if you do not specify a service account, it is automatically assigned the `default` service account in the same namespace. You should create your own service account and let the API server manage its security tokens. -
-
-**Result:** Pass
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the --disable-admission-plugins parameter to
+ensure it does not include NamespaceLifecycle.

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "ServiceAccount"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'disable-admission-plugins'
```

-Verify that the `--disable-admission-plugins` argument is set to a value that does not includes `ServiceAccount`.
+**Expected Result**:
+
+```console
+'--disable-admission-plugins' is not present OR '--disable-admission-plugins' does not have 'NamespaceLifecycle'
+```
+
+### 1.2.16 Ensure that the admission control plugin PodSecurityPolicy is set (Automated)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s does not use this argument. If there's a desire to use this argument, follow the documentation and create ServiceAccount objects as per your environment. Then refer to K3s's documentation to see how to supply additional api server configuration via the kube-apiserver-arg parameter.
-
-
-#### 1.2.15
-Ensure that the admission control plugin `NamespaceLifecycle` is set (Scored)
-
-Rationale -Setting admission control policy to `NamespaceLifecycle` ensures that objects cannot be created in non-existent namespaces, and that namespaces undergoing termination are not used for creating the new objects. This is recommended to enforce the integrity of the namespace termination process and also for the availability of the newer objects. -
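+Earlier revisions of this guide note that K3s passes `NodeRestriction,PodSecurityPolicy` to `--enable-admission-plugins` by default; the flag form below requests it explicitly. A sketch (include any other plugins you rely on in the same list):
+
+```bash
+k3s server \
+  --kube-apiserver-arg='enable-admission-plugins=NodeRestriction,PodSecurityPolicy'
+```
+
+Note that once the PodSecurityPolicy admission plugin is enforcing, at least one PodSecurityPolicy must exist before any pods can be admitted.
+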
-
-**Result:** Pass
+Follow the documentation and create Pod Security Policy objects as per your environment.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the --enable-admission-plugins parameter to a
+value that includes PodSecurityPolicy:
+--enable-admission-plugins=...,PodSecurityPolicy,...
+Then restart the API Server.

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "disable-admission-plugins"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
```

-Verify that the `--disable-admission-plugins` argument is set to a value that does not include `NamespaceLifecycle`.
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'PodSecurityPolicy'
+```
+
+### 1.2.17 Ensure that the admission control plugin NodeRestriction is set (Automated)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s does not use this argument. No manual remediation needed.
-
-
-#### 1.2.16
-Ensure that the admission control plugin `PodSecurityPolicy` is set (Scored)
-
-Rationale -A Pod Security Policy is a cluster-level resource that controls the actions that a pod can perform and what it has the ability to access. The `PodSecurityPolicy` objects define a set of conditions that a pod must run with in order to be accepted into the system. Pod Security Policies are comprised of settings and strategies that control the security features a pod has access to and hence this must be used to control pod access permissions. - -**Note:** When the PodSecurityPolicy admission plugin is in use, there needs to be at least one PodSecurityPolicy in place for ANY pods to be admitted. See section 1.7 for recommendations on PodSecurityPolicy settings. -
-
-**Result:** Pass
+Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the --enable-admission-plugins parameter to a
+value that includes NodeRestriction.
+--enable-admission-plugins=...,NodeRestriction,...

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'
```

-Verify that the `--enable-admission-plugins` argument is set to a value that includes `PodSecurityPolicy`.
+**Expected Result**:
+
+```console
+'--enable-admission-plugins' has 'NodeRestriction'
+```
+
+### 1.2.18 Ensure that the --insecure-bind-address argument is not set (Automated)
+
+
+**Result:** pass

**Remediation:**
-K3s would need to have the `PodSecurityPolicy` admission plugin enabled by passing it as an argument to K3s. `--kube-apiserver-arg='enable-admission-plugins=PodSecurityPolicy`.
-
-
-#### 1.2.17
-Ensure that the admission control plugin `NodeRestriction` is set (Scored)
-
-Rationale -Using the `NodeRestriction` plug-in ensures that the kubelet is restricted to the `Node` and `Pod` objects that it could modify as defined. Such kubelets will only be allowed to modify their own `Node` API object, and only modify `Pod` API objects that are bound to their node. - -
- -**Result:** Pass +Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and remove the --insecure-bind-address parameter. **Audit:** -Run the below command on the master node. ```bash -journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins" +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'insecure-bind-address' ``` -Verify that the `--enable-admission-plugins` argument is set to a value that includes `NodeRestriction`. +**Expected Result**: + +```console +'--insecure-bind-address' is not present +``` + +### 1.2.19 Ensure that the --insecure-port argument is set to 0 (Automated) + + +**Result:** pass **Remediation:** -K3s would need to have the `NodeRestriction` admission plugin enabled by passing it as an argument to K3s. `--kube-apiserver-arg='enable-admission-plugins=NodeRestriction`. - - -#### 1.2.18 -Ensure that the `--insecure-bind-address` argument is not set (Scored) -
-Rationale -If you bind the apiserver to an insecure address, basically anyone who could connect to it over the insecure port, would have unauthenticated and unencrypted access to your master node. The apiserver doesn't do any authentication checking for insecure binds and traffic to the Insecure API port is not encrpyted, allowing attackers to potentially read sensitive data in transit. -
-
-**Result:** Pass
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the below parameter.
+--insecure-port=0

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "insecure-bind-address"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'insecure-port'
```

-Verify that the `--insecure-bind-address` argument does not exist.
+**Expected Result**:
+
+```console
+'--insecure-port' is equal to '0'
+```
+
+### 1.2.20 Ensure that the --secure-port argument is not set to 0 (Automated)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s explicitly excludes the use of the `--insecure-bind-address` parameter. No manual remediation is needed.
-
-
-#### 1.2.19
-Ensure that the `--insecure-port` argument is set to `0` (Scored)
-
-Rationale -Setting up the apiserver to serve on an insecure port would allow unauthenticated and unencrypted access to your master node. This would allow attackers who could access this port, to easily take control of the cluster. -
-
-**Result:** Pass
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and either remove the --secure-port parameter or
+set it to a different (non-zero) desired port.

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "insecure-port"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'secure-port'
```

-Verify that the `--insecure-port` argument is set to `0`.
+**Expected Result**:
+
+```console
+'--secure-port' is greater than 0 OR '--secure-port' is not present
+```
+
+### 1.2.21 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s starts the kube-apiserver process with this argument's parameter set to `0`. No manual remediation is needed.
-
-
-#### 1.2.20
-Ensure that the `--secure-port` argument is not set to `0` (Scored)
-
-Rationale -The secure port is used to serve https with authentication and authorization. If you disable it, no https traffic is served and all traffic is served unencrypted. -
-
-**Result:** Pass
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the below parameter.
+--profiling=false

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "secure-port"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'profiling'
```

-Verify that the `--secure-port` argument is either not set or is set to an integer value between 1 and 65535.
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+### 1.2.22 Ensure that the --audit-log-path argument is set (Automated)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s sets the parameter of 6444 for the `--secure-port` argument. No manual remediation is needed.
-
-
-#### 1.2.21
-Ensure that the `--profiling` argument is set to `false` (Scored)
-
-Rationale -Profiling allows for the identification of specific performance bottlenecks. It generates a significant amount of program data that could potentially be exploited to uncover system and program details. If you are not experiencing any bottlenecks and do not need the profiler for troubleshooting purposes, it is recommended to turn it off to reduce the potential attack surface. -
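+On K3s the audit-log flags are passed through `--kube-apiserver-arg`. A sketch that mirrors the values visible in the sample apiserver command line reproduced under check 1.2.29 (adjust the path and retention values to your environment):
+
+```bash
+# Configure apiserver audit logging; these values also satisfy checks 1.2.23-1.2.25.
+k3s server \
+  --kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log' \
+  --kube-apiserver-arg='audit-log-maxage=30' \
+  --kube-apiserver-arg='audit-log-maxbackup=10' \
+  --kube-apiserver-arg='audit-log-maxsize=100'
+```
+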
-
-**Result:** Pass
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the --audit-log-path parameter to a suitable path and
+file where you would like audit logs to be written, for example:
+--audit-log-path=/var/log/apiserver/audit.log

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "profiling"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-path'
```

-Verify that the `--profiling` argument is set to false.
+**Expected Result**:
+
+```console
+'--audit-log-path' is present
+```
+
+### 1.2.23 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s sets the `--profiling` flag parameter to false. No manual remediation needed.
-
-
-#### 1.2.22
-Ensure that the `--audit-log-path` argument is set (Scored)
-
-Rationale -Auditing the Kubernetes API Server provides a security-relevant chronological set of records documenting the sequence of activities that have affected system by individual users, administrators or other components of the system. Even though currently, Kubernetes provides only basic audit capabilities, it should be enabled. You can enable it by setting an appropriate audit log path. -
-
-**Result:** Pass
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the --audit-log-maxage parameter to 30 or as an appropriate number of days:
+--audit-log-maxage=30

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "audit-log-path"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxage'
```

-Verify that the `--audit-log-path` argument is set as appropriate.
+**Expected Result**:
+
+```console
+'--audit-log-maxage' is greater or equal to 30
+```
+
+### 1.2.24 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+
+
+**Result:** pass

**Remediation:**
-K3s server needs to be run with the following argument, `--kube-apiserver-arg='audit-log-path=/path/to/log/file'`.
-
-
-#### 1.2.23
-Ensure that the `--audit-log-maxage` argument is set to `30` or as appropriate (Scored)
-
-Rationale -Retaining logs for at least 30 days ensures that you can go back in time and investigate or correlate any events. Set your audit log retention period to 30 days or as per your business requirements. -
-
-**Result:** Pass
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
+value.
+--audit-log-maxbackup=10

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "audit-log-maxage"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxbackup'
```

-Verify that the `--audit-log-maxage` argument is set to `30` or as appropriate.
+**Expected Result**:
+
+```console
+'--audit-log-maxbackup' is greater or equal to 10
+```
+
+### 1.2.25 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+
+
+**Result:** pass

**Remediation:**
-K3s server needs to be run with the following argument, `--kube-apiserver-arg='audit-log-maxage=30'`.
-
-
-#### 1.2.24
-Ensure that the `--audit-log-maxbackup` argument is set to `10` or as appropriate (Scored)
-
-Rationale -Kubernetes automatically rotates the log files. Retaining old log files ensures that you would have sufficient log data available for carrying out any investigation or correlation. For example, if you have set file size of 100 MB and the number of old log files to keep as 10, you would approximate have 1 GB of log data that you could potentially use for your analysis. -
-
-**Result:** Pass
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the --audit-log-maxsize parameter to an appropriate size in MB.
+For example, to set it as 100 MB:
+--audit-log-maxsize=100

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "audit-log-maxbackup"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-log-maxsize'
```

-Verify that the `--audit-log-maxbackup` argument is set to `10` or as appropriate.
+**Expected Result**:
+
+```console
+'--audit-log-maxsize' is greater or equal to 100
+```
+
+### 1.2.26 Ensure that the --request-timeout argument is set as appropriate (Automated)
+
+
+**Result:** pass

**Remediation:**
-K3s server needs to be run with the following argument, `--kube-apiserver-arg='audit-log-maxbackup=10'`.
-
-
-#### 1.2.25
-Ensure that the `--audit-log-maxsize` argument is set to `100` or as appropriate (Scored)
-
-Rationale -Kubernetes automatically rotates the log files. Retaining old log files ensures that you would have sufficient log data available for carrying out any investigation or correlation. If you have set file size of 100 MB and the number of old log files to keep as 10, you would approximate have 1 GB of log data that you could potentially use for your analysis. -
-
-**Result:** Pass
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+and set the below parameter as appropriate and if needed.
+For example,
+--request-timeout=300s

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "audit-log-maxsize"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'request-timeout'
```

-Verify that the `--audit-log-maxsize` argument is set to `100` or as appropriate.
+**Expected Result**:
+
+```console
+'--request-timeout' is not present OR '--request-timeout' is present
+```
+
+### 1.2.27 Ensure that the --service-account-lookup argument is set to true (Automated)
+
+
+**Result:** pass

**Remediation:**
-K3s server needs to be run with the following argument, `--kube-apiserver-arg='audit-log-maxsize=100'`.
-
-
-#### 1.2.26
-Ensure that the `--request-timeout` argument is set as appropriate (Scored)
-
-Rationale -Setting global request timeout allows extending the API server request timeout limit to a duration appropriate to the user's connection speed. By default, it is set to 60 seconds which might be problematic on slower connections making cluster resources inaccessible once the data volume for requests exceeds what can be transmitted in 60 seconds. But, setting this timeout limit to be too large can exhaust the API server resources making it prone to Denial-of-Service attack. Hence, it is recommended to set this limit as appropriate and change the default limit of 60 seconds only if needed. -
-
-**Result:** Pass
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the below parameter.
+--service-account-lookup=true
+Alternatively, you can delete the --service-account-lookup parameter from this file so
+that the default takes effect.

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "request-timeout"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'service-account-lookup'
```

-Verify that the `--request-timeout` argument is either not set or set to an appropriate value.
+**Expected Result**:
+
+```console
+'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'
+```
+
+### 1.2.28 Ensure that the --service-account-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s does not set the `--request-timeout` argument. No manual remediation needed.
-
-
-#### 1.2.27
-Ensure that the `--service-account-lookup` argument is set to `true` (Scored)
-
-Rationale -If `--service-account-lookup` is not enabled, the apiserver only verifies that the authentication token is valid, and does not validate that the service account token mentioned in the request is actually present in etcd. This allows using a service account token even after the corresponding service account is deleted. This is an example of time of check to time of use security issue. -
-
-**Result:** Pass
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the --service-account-key-file parameter
+to the public key file for service accounts:
+--service-account-key-file=

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "service-account-lookup"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'service-account-key-file'
```

-Verify that if the `--service-account-lookup` argument exists it is set to `true`.
+**Expected Result**:
+
+```console
+'--service-account-key-file' is present
+```
+
+### 1.2.29 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)
+
+
+**Result:** pass

**Remediation:**
-K3s server needs to be run with the following argument, `--kube-apiserver-arg='service-account-lookup=true'`.
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the etcd certificate and key file parameters.
+--etcd-certfile=
+--etcd-keyfile=

-
-#### 1.2.28
-Ensure that the `--service-account-key-file` argument is set as appropriate (Scored)
-
-Rationale -By default, if no `--service-account-key-file` is specified to the apiserver, it uses the private key from the TLS serving certificate to verify service account tokens. To ensure that the keys for service account tokens could be rotated as needed, a separate public/private key pair should be used for signing service account tokens. Hence, the public key should be specified to the apiserver with `--service-account-key-file`. -
- -**Result:** Pass - -**Audit:** -Run the below command on the master node. +**Audit Script:** `check_for_k3s_etcd.sh` ```bash -journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "service-account-key-file" +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo "$(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo "$(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + ``` -Verify that the `--service-account-key-file` argument exists and is set as appropriate. - -**Remediation:** -By default, K3s sets the `--service-account-key-file` explicitly. No manual remediation needed. - - -#### 1.2.29 -Ensure that the `--etcd-certfile` and `--etcd-keyfile` arguments are set as appropriate (Scored) -
-Rationale -etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be protected by client authentication. This requires the API server to identify itself to the etcd server using a client certificate and key. -
- -**Result:** Pass - -**Audit:** -Run the below command on the master node. +**Audit Execution:** ```bash -journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep -E 'etcd-certfile|etcd-keyfile' +./check_for_k3s_etcd.sh 1.2.29 ``` -Verify that the `--etcd-certfile` and `--etcd-keyfile` arguments exist and they are set as appropriate. +**Expected Result**: + +```console +'--etcd-certfile' is present AND '--etcd-keyfile' is present +``` + +**Returned Value**: + +```console +Feb 21 23:13:24 k3s[5223]: time="2022-02-21T23:13:24.847339487Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,NamespaceLifecycle,ServiceAccount --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 1.2.30 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) + + +**Result:** pass **Remediation:** -By default, K3s sets the `--etcd-certfile` and `--etcd-keyfile` arguments explicitly. No manual remediation needed. - - -#### 1.2.30 -Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate (Scored) -
-Rationale -API server communication contains sensitive parameters that should remain encrypted in transit. Configure the API server to serve only HTTPS traffic. -
- -**Result:** Pass +Follow the Kubernetes documentation and set up the TLS connection on the apiserver. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the TLS certificate and private key file parameters. +--tls-cert-file= +--tls-private-key-file= **Audit:** -Run the below command on the master node. ```bash -journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep -E 'tls-cert-file|tls-private-key-file' +journalctl -D /var/log/journal -u k3s | grep -A1 'Running kube-apiserver' | tail -n2 ``` -Verify that the `--tls-cert-file` and `--tls-private-key-file` arguments exist and they are set as appropriate. +**Expected Result**: + +```console +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +**Returned Value**: + +```console +Feb 21 23:13:24 k3s[5223]: time="2022-02-21T23:13:24.847339487Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,NamespaceLifecycle,ServiceAccount --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" Feb 21 23:13:24 k3s[5223]: {"level":"info","ts":"2022-02-21T23:13:24.848Z","caller":"raft/raft.go:1530","msg":"b3656202b34887ca switched to configuration voters=(12926846069174208458)"} +``` + +### 1.2.31 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass **Remediation:** -By default, K3s sets the `--tls-cert-file` and `--tls-private-key-file` arguments explicitly. No manual remediation needed. 
- - -#### 1.2.31 -Ensure that the `--client-ca-file` argument is set as appropriate (Scored) -
-Rationale -API server communication contains sensitive parameters that should remain encrypted in transit. Configure the API server to serve only HTTPS traffic. If `--client-ca-file` argument is set, any request presenting a client certificate signed by one of the authorities in the `client-ca-file` is authenticated with an identity corresponding to the CommonName of the client certificate. -
-
-**Result:** Pass
+Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the client certificate authority file.
+--client-ca-file=

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "client-ca-file"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'client-ca-file'
```

-Verify that the `--client-ca-file` argument exists and it is set as appropriate.
+**Expected Result**:
+
+```console
+'--client-ca-file' is present
+```
+
+### 1.2.32 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s sets the `--client-ca-file` argument explicitly. No manual remediation needed.
-
-
-#### 1.2.32
-Ensure that the `--etcd-cafile` argument is set as appropriate (Scored)
-
-Rationale -etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be protected by client authentication. This requires the API server to identify itself to the etcd server using a SSL Certificate Authority file. -
-
-**Result:** Pass
+Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
+Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the etcd certificate authority file parameter.
+--etcd-cafile=

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "etcd-cafile"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-cafile'
```

-Verify that the `--etcd-cafile` argument exists and it is set as appropriate.
+**Expected Result**:
+
+```console
+'--etcd-cafile' is present
+```
+
+### 1.2.33 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s sets the `--etcd-cafile` argument explicitly. No manual remediation needed.
-
-
-#### 1.2.33
-Ensure that the `--encryption-provider-config` argument is set as appropriate (Scored)
-
-Rationale -etcd is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted at rest to avoid any disclosures. -
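+K3s can generate and wire up the encryption configuration itself via its documented `--secrets-encryption` flag, which sets `--encryption-provider-config` on the apiserver for you. A minimal sketch, followed by the same journalctl audit this guide uses to confirm the flag:
+
+```bash
+# Enable secrets encryption at rest, then confirm the apiserver flag is set.
+k3s server --secrets-encryption
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' \
+  | tail -n1 | grep 'encryption-provider-config'
+```
+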
- -**Result:** Pass +Follow the Kubernetes documentation and configure a EncryptionConfig file. +Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml +on the master node and set the --encryption-provider-config parameter to the path of that file: --encryption-provider-config=
**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "encryption-provider-config"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'encryption-provider-config'
```

-Verify that the `--encryption-provider-config` argument is set to a EncryptionConfigfile. Additionally, ensure that the `EncryptionConfigfile` has all the desired resources covered especially any secrets.
+**Expected Result**:
+
+```console
+'--encryption-provider-config' is present
+```
+
+### 1.2.34 Ensure that encryption providers are appropriately configured (Manual)
+
+
+**Result:** warn

**Remediation:**
-K3s server needs to be ran with the follow, `--kube-apiserver-arg='encryption-provider-config=/path/to/encryption_config'`. This can be done by running k3s with the `--secrets-encryptiuon` argument which will configure the encryption provider.
-
-
-#### 1.2.34
-Ensure that encryption providers are appropriately configured (Scored)
-
-Rationale -Where `etcd` encryption is used, it is important to ensure that the appropriate set of encryption providers is used. Currently, the `aescbc`, `kms` and `secretbox` are likely to be appropriate options. -
- -**Result:** Pass - -**Remediation:** -Follow the Kubernetes documentation and configure a `EncryptionConfig` file. -In this file, choose **aescbc**, **kms** or **secretbox** as the encryption provider. +Follow the Kubernetes documentation and configure a EncryptionConfig file. +In this file, choose aescbc, kms or secretbox as the encryption provider. **Audit:** -Run the below command on the master node. ```bash grep aescbc /path/to/encryption-config.json ``` -Run the below command on the master node. - -Verify that aescbc is set as the encryption provider for all the desired resources. - -**Remediation** -K3s server needs to be run with the following, `--secrets-encryption=true`, and verify that one of the allowed encryption providers is present. +### 1.2.35 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual) -#### 1.2.35 -Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Not Scored) +**Result:** pass -
-Rationale -TLS ciphers have had a number of known vulnerabilities and weaknesses, which can reduce the protection provided by them. By default Kubernetes supports a number of TLS cipher suites including some that have security concerns, weakening the protection provided. -
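+On K3s the cipher list is supplied with `--kube-apiserver-arg` rather than by editing a manifest. A sketch using the suites recommended in the remediation below:
+
+```bash
+k3s server \
+  --kube-apiserver-arg='tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384'
+```
+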
-
-**Result:** **Not Scored - Operator Dependent**
+**Remediation:**
+Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the master node and set the below parameter.
+--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "tls-cipher-suites"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'tls-cipher-suites'
```

-Verify that the `--tls-cipher-suites` argument is set as outlined in the remediation procedure below.
+**Expected Result**:
+
+```console
+'--tls-cipher-suites' is present
+```
+
+## 1.3 Controller Manager
+### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s explicitly doesn't set this flag. No manual remediation needed.
-
-
-### 1.3 Controller Manager
-
-#### 1.3.1
-Ensure that the `--terminated-pod-gc-threshold` argument is set as appropriate (Not Scored)
-
-Rationale -Garbage collection is important to ensure sufficient resource availability and avoiding degraded performance and availability. In the worst case, the system might crash or just be unusable for a long period of time. The current setting for garbage collection is 12,500 terminated pods which might be too high for your system to sustain. Based on your system resources and tests, choose an appropriate threshold value to activate garbage collection. -
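+Controller-manager flags are passed through on the K3s command line as well. A sketch, using 10 as an example threshold:
+
+```bash
+k3s server --kube-controller-manager-arg='terminated-pod-gc-threshold=10'
+```
+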
-
-**Result:** **Not Scored - Operator Dependent**
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the master node and set the --terminated-pod-gc-threshold to an appropriate threshold,
+for example:
+--terminated-pod-gc-threshold=10

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "terminated-pod-gc-threshold
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'terminated-pod-gc-threshold'
```

-Verify that the `--terminated-pod-gc-threshold` argument is set as appropriate.
+**Expected Result**:
+
+```console
+'--terminated-pod-gc-threshold' is present
+```
+
+### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
+
+
+**Result:** pass

**Remediation:**
-K3s server needs to be run with the following, `--kube-controller-manager-arg='terminated-pod-gc-threshold=10`.
-
-
-#### 1.3.2
-Ensure that the `--profiling` argument is set to false (Scored)
-
-Rationale -Profiling allows for the identification of specific performance bottlenecks. It generates a significant amount of program data that could potentially be exploited to uncover system and program details. If you are not experiencing any bottlenecks and do not need the profiler for troubleshooting purposes, it is recommended to turn it off to reduce the potential attack surface. -
-
-**Result:** Pass
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the master node and set the below parameter.
+--profiling=false

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "profiling"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'profiling'
```

-Verify that the `--profiling` argument is set to false.
+**Expected Result**:
+
+```console
+'--profiling' is equal to 'false'
+```
+
+### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s sets the `--profiling` flag parameter to false. No manual remediation needed.
-
-
-#### 1.3.3
-Ensure that the `--use-service-account-credentials` argument is set to `true` (Scored)
-
-Rationale -The controller manager creates a service account per controller in the `kube-system` namespace, generates a credential for it, and builds a dedicated API client with that service account credential for each controller loop to use. Setting the `--use-service-account-credentials` to `true` runs each control loop within the controller manager using a separate service account credential. When used in combination with RBAC, this ensures that the control loops run with the minimum permissions required to perform their intended tasks. -
-
-**Result:** Pass
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the master node to set the below parameter.
+--use-service-account-credentials=true

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "use-service-account-credentials"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'use-service-account-credentials'
```

-Verify that the `--use-service-account-credentials` argument is set to true.
+**Expected Result**:
+
+```console
+'--use-service-account-credentials' is equal to 'true'
+```
+
+### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
+
+
+**Result:** pass

**Remediation:**
-K3s server needs to be run with the following, `--kube-controller-manager-arg='use-service-account-credentials=true'`
-
-
-#### 1.3.4
-Ensure that the `--service-account-private-key-file` argument is set as appropriate (Scored)
-
-Rationale -To ensure that keys for service account tokens can be rotated as needed, a separate public/private key pair should be used for signing service account tokens. The private key should be specified to the controller manager with `--service-account-private-key-file` as appropriate. -
-
-**Result:** Pass
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the master node and set the --service-account-private-key-file parameter
+to the private key file for service accounts.
+--service-account-private-key-file=

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "service-account-private-key-file"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'service-account-private-key-file'
```

-Verify that the `--service-account-private-key-file` argument is set as appropriate.
+**Expected Result**:
+
+```console
+'--service-account-private-key-file' is present
+```
+
+### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
+
+
+**Result:** pass

**Remediation:**
-By default, K3s sets the `--service-account-private-key-file` argument with the service account key file. No manual remediation needed.
-
-
-#### 1.3.5
-Ensure that the `--root-ca-file` argument is set as appropriate (Scored)
-
-Rationale -Processes running within pods that need to contact the API server must verify the API server's serving certificate. Failing to do so could be a subject to man-in-the-middle attacks. - -Providing the root certificate for the API server's serving certificate to the controller manager with the `--root-ca-file` argument allows the controller manager to inject the trusted bundle into pods so that they can verify TLS connections to the API server. -
-
-**Result:** Pass
+Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the master node and set the --root-ca-file parameter to the certificate bundle file.
+--root-ca-file=

**Audit:**
-Run the below command on the master node.

```bash
-journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "root-ca-file"
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'root-ca-file'
```

-Verify that the `--root-ca-file` argument exists and is set to a certificate bundle file containing the root certificate for the API server's serving certificate
+**Expected Result**:
+
+```console
+'--root-ca-file' is present
+```
+
+### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)

**Remediation:**
-By default, K3s sets the `--root-ca-file` argument with the root ca file. No manual remediation needed.
-
-
-#### 1.3.6
-Ensure that the `RotateKubeletServerCertificate` argument is set to `true` (Scored)
-
-Rationale -`RotateKubeletServerCertificate` causes the kubelet to both request a serving certificate after bootstrapping its client credentials and rotate the certificate as its existing credentials expire. This automated periodic rotation ensures that there are no downtimes due to expired certificates and thus addressing availability in the CIA security triad. - -Note: This recommendation only applies if you let kubelets get their certificates from the API server. In case your kubelet certificates come from an outside authority/tool (e.g. Vault) then you need to take care of rotation yourself. -
-
-**Result:** Not Applicable
-
-**Audit:**
-Run the below command on the master node.
-
```bash
-journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "RotateKubeletServerCertificate"
+```console
+'--root-ca-file' is present
```

-Verify that RotateKubeletServerCertificateargument exists and is set to true.
+### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)

-**Remediation:**
-By default, K3s implements its own logic for certificate generation and rotation.
-
-
-#### 1.3.7
-Ensure that the `--bind-address` argument is set to `127.0.0.1` (Scored)
-
-Rationale -The Controller Manager API service which runs on port 10252/TCP by default is used for health and metrics information and is available without authentication or encryption. As such it should only be bound to a localhost interface, to minimize the cluster's attack surface. -
- -**Result:** Pass - -**Audit:** -Run the below command on the master node. - -```bash -journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "bind-address" -``` - -Verify that the `--bind-address` argument is set to 127.0.0.1. - -**Remediation:** -By default, K3s sets the `--bind-address` argument to `127.0.0.1`. No manual remediation needed. - - -### 1.4 Scheduler -This section contains recommendations relating to Scheduler configuration flags - - -#### 1.4.1 -Ensure that the `--profiling` argument is set to `false` (Scored) -
-Rationale -Profiling allows for the identification of specific performance bottlenecks. It generates a significant amount of program data that could potentially be exploited to uncover system and program details. If you are not experiencing any bottlenecks and do not need the profiler for troubleshooting purposes, it is recommended to turn it off to reduce the potential attack surface. -
- -**Result:** Pass - -**Audit:** -Run the below command on the master node. - -```bash -journalctl -u k3s | grep "Running kube-scheduler" | tail -n1 | grep "profiling" -``` - -Verify that the `--profiling` argument is set to false. - -**Remediation:** -By default, K3s sets the `--profiling` flag parameter to false. No manual remediation needed. - - -#### 1.4.2 -Ensure that the `--bind-address` argument is set to `127.0.0.1` (Scored) -
-Rationale - -The Scheduler API service which runs on port 10251/TCP by default is used for health and metrics information and is available without authentication or encryption. As such it should only be bound to a localhost interface, to minimize the cluster's attack surface. -
- -**Result:** Pass - -**Audit:** -Run the below command on the master node. - -```bash -journalctl -u k3s | grep "Running kube-scheduler" | tail -n1 | grep "bind-address" -``` - -Verify that the `--bind-address` argument is set to 127.0.0.1. - -**Remediation:** -By default, K3s sets the `--bind-address` argument to `127.0.0.1`. No manual remediation needed. - - -## 2 Etcd Node Configuration -This section covers recommendations for etcd configuration. - -#### 2.1 -Ensure that the `cert-file` and `key-file` fields are set as appropriate (Scored) -
-Rationale -etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted in transit. -
- -**Result:** Pass - -**Audit:** -Run the below command on the master node. - -```bash -grep -E 'cert-file|key-file' /var/lib/rancher/k3s/server/db/etcd/config -``` - -Verify that the `cert-file` and the `key-file` fields are set as appropriate. - -**Remediation:** -By default, K3s uses a config file for etcd that can be found at `/var/lib/rancher/k3s/server/db/etcd/config`. Server and peer cert and key files are specified. No manual remediation needed. - - -#### 2.2 -Ensure that the `client-cert-auth` field is set to `true` (Scored) -
-Rationale -etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should not be available to unauthenticated clients. You should enable the client authentication via valid certificates to secure the access to the etcd service. -
- -**Result:** Pass - -**Audit:** -Run the below command on the master node. - -```bash -grep 'client-cert-auth' /var/lib/rancher/k3s/server/db/etcd/config -``` - -Verify that the `client-cert-auth` field is set to true. - -**Remediation:** -By default, K3s uses a config file for etcd that can be found at `/var/lib/rancher/k3s/server/db/etcd/config`. `client-cert-auth` is set to true. No manual remediation needed. - - -#### 2.3 -Ensure that the `auto-tls` field is not set to `true` (Scored) -
-Rationale -etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should not be available to unauthenticated clients. You should enable the client authentication via valid certificates to secure the access to the etcd service. -
- -**Result:** Pass - -**Remediation:** -By default, K3s starts Etcd without this flag. It is set to `false` by default. - - -#### 2.4 -Ensure that the `peer-cert-file` and `peer-key-file` fields are set as appropriate (Scored) -
-Rationale -etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted in transit and also amongst peers in the etcd clusters. -
- -**Result:** Pass - -**Remediation:** -By default, K3s starts Etcd with a config file found here, `/var/lib/rancher/k3s/server/db/etcd/config`. The config file contains `peer-transport-security:` which has fields that have the peer cert and peer key files. - - -#### 2.5 -Ensure that the `client-cert-auth` field is set to `true` (Scored) -
-Rationale -etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be accessible only by authenticated etcd peers in the etcd cluster. -
- -**Result:** Pass - -**Audit:** -Run the below command on the master node. - -```bash -grep 'client-cert-auth' /var/lib/rancher/k3s/server/db/etcd/config -``` - -Verify that the `client-cert-auth` field in the peer section is set to true. - -**Remediation:** -By default, K3s uses a config file for etcd that can be found at `/var/lib/rancher/k3s/server/db/etcd/config`. Within the file, the `client-cert-auth` field is set. No manual remediation needed. - - -#### 2.6 -Ensure that the `peer-auto-tls` field is not set to `true` (Scored) -
-Rationale -etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be accessible only by authenticated etcd peers in the etcd cluster. Hence, do not use self- signed certificates for authentication. -
- -**Result:** Pass - -**Audit:** -Run the below command on the master node. - -```bash -grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config -``` - -Verify that if the `peer-auto-tls` field does not exist. - -**Remediation:** -By default, K3s uses a config file for etcd that can be found at `/var/lib/rancher/k3s/server/db/etcd/config`. Within the file, it does not contain the `peer-auto-tls` field. No manual remediation needed. - - -#### 2.7 -Ensure that a unique Certificate Authority is used for etcd (Not Scored) -
-Rationale -etcd is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. Its access should be restricted to specifically designated clients and peers only. - -Authentication to etcd is based on whether the certificate presented was issued by a trusted certificate authority. There is no checking of certificate attributes such as common name or subject alternative name. As such, if any attackers were able to gain access to any certificate issued by the trusted certificate authority, they would be able to gain full access to the etcd database. -
- -**Result:** Pass - -**Audit:** -Run the below command on the master node. - -```bash -# To find the ca file used by etcd: -grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config -# To find the kube-apiserver process: -journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 -``` - -Verify that the file referenced by the `client-ca-file` flag in the apiserver process is different from the file referenced by the `trusted-ca-file` parameter in the etcd configuration file. - -**Remediation:** -By default, K3s uses a config file for etcd that can be found at `/var/lib/rancher/k3s/server/db/etcd/config` and the `trusted-ca-file` parameters in it are set to unique values specific to etcd. No manual remediation needed. - - - -## 3 Control Plane Configuration - - -### 3.1 Authentication and Authorization - - -#### 3.1.1 -Client certificate authentication should not be used for users (Not Scored) -
-Rationale -With any authentication mechanism the ability to revoke credentials if they are compromised or no longer required, is a key control. Kubernetes client certificate authentication does not allow for this due to a lack of support for certificate revocation. -
- -**Result:** Not Scored - Operator Dependent - -**Audit:** -Review user access to the cluster and ensure that users are not making use of Kubernetes client certificate authentication. - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented in place of client certificates. - -### 3.2 Logging - - -#### 3.2.1 -Ensure that a minimal audit policy is created (Scored) -
-Rationale -Logging is an important detective control for all systems, to detect potential unauthorized access. -
- -**Result:** Does not pass. See the [Hardening Guide](../hardening_guide/) for details. - -**Audit:** -Run the below command on the master node. - -```bash -journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "audit-policy-file" -``` - -Verify that the `--audit-policy-file` is set. Review the contents of the file specified and ensure that it contains a valid audit policy. - -**Remediation:** -Create an audit policy file for your cluster and pass it to k3s. e.g. `--kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log'` - - -#### 3.2.2 -Ensure that the audit policy covers key security concerns (Not Scored) -
-Rationale -Security audit logs should cover access and modification of key resources in the cluster, to enable them to form an effective part of a security environment. -
- -**Result:** Not Scored - Operator Dependent - -**Remediation:** - - -## 4 Worker Node Security Configuration - - -### 4.1 Worker Node Configuration Files - - -#### 4.1.1 -Ensure that the kubelet service file permissions are set to `644` or more restrictive (Scored) -
-Rationale -The `kubelet` service file controls various parameters that set the behavior of the kubelet service in the worker node. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. -
**Result:** Not Applicable **Remediation:** -K3s doesn’t launch the kubelet as a service. It is launched and managed by the K3s supervisor process. All configuration is passed to it as command line arguments at run time. +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the master node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. +--feature-gates=RotateKubeletServerCertificate=true + +### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) -#### 4.1.2 -Ensure that the kubelet service file ownership is set to `root:root` (Scored) -
-Rationale -The `kubelet` service file controls various parameters that set the behavior of the kubelet service in the worker node. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`. -
+**Result:** pass + +**Remediation:** +Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml +on the master node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'bind-address' +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +## 1.4 Scheduler +### 1.4.1 Ensure that the --profiling argument is set to false (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file +on the master node and set the below parameter. +--profiling=false + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1 +``` + +**Expected Result**: + +```console +'false' is equal to 'false' +``` + +**Returned Value**: + +```console +Feb 21 23:13:24 k3s[5223]: time="2022-02-21T23:13:24.851975832Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --profiling=false --secure-port=0" +``` + +### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) + + +**Result:** pass + +**Remediation:** +Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml +on the master node and ensure the correct value for the --bind-address parameter + +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'bind-address' +``` + +**Expected Result**: + +```console +'--bind-address' is present OR '--bind-address' is not present +``` + +## 2 Etcd Node Configuration Files +### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) + + +**Result:** pass + +**Remediation:** +Follow the etcd service documentation and configure TLS encryption. +Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml +on the master node and set the below parameters. +--cert-file=
+--key-file=
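+
+For a quick manual spot-check of this control, the TLS settings can be read directly out of the config file that K3s generates for its managed etcd (the same file and grep that the audit script below relies on):
+
+```bash
+# Spot-check: confirm serving cert/key are configured for the managed etcd.
+grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config \
+  | grep -E 'cert-file|key-file'
+```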
+ +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo "$(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo "$(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 2.1 +``` + +**Expected Result**: + +```console +'cert-file' is present AND 'key-file' is present +``` + +**Returned Value**: + +```console +cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key +``` + +### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master +node and set the below parameter. 
+--client-cert-auth="true"
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+    echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd' | grep -v grep | wc -l)" -gt 0 ]]; then
+    case $1 in
+        "1.1.11")
+            echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+        "1.2.29")
+            echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+        "2.1")
+            echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+        "2.2")
+            echo "$(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";;
+        "2.3")
+            echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+        "2.4")
+            echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+        "2.5")
+            echo "$(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";;
+        "2.6")
+            echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+        "2.7")
+            echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+    esac
+else
+# If another database is running, return whatever is required to pass the scan
+    case $1 in
+        "1.1.11")
+            echo "700";;
+        "1.2.29")
+            echo "--etcd-certfile AND --etcd-keyfile";;
+        "2.1")
+            echo "cert-file AND key-file";;
+        "2.2")
+            echo "true";;
+        "2.3")
+            echo "false";;
+        "2.4")
+            echo "peer-cert-file AND peer-key-file";;
+        "2.5")
+            echo "true";;
+        "2.6")
+            echo "--peer-auto-tls=false";;
+        "2.7")
+            echo "--trusted-ca-file";;
+    esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.2
+```
+
+**Expected Result**:
+
+```console
+'client-cert-auth' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+client-cert-auth: true
+```
+
+### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
+node and either remove the --auto-tls parameter or set it to false.
+ --auto-tls=false
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+    echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd' | grep -v grep | wc -l)" -gt 0 ]]; then
+    case $1 in
+        "1.1.11")
+            echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+        "1.2.29")
+            echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+        "2.1")
+            echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+        "2.2")
+            echo "$(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";;
+        "2.3")
+            echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+        "2.4")
+            echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+        "2.5")
+            echo "$(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";;
+        "2.6")
+            echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+        "2.7")
+            echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+    esac
+else
+# If another database is running, return whatever is required to pass the scan
+    case $1 in
+        "1.1.11")
+            echo "700";;
+        "1.2.29")
+            echo "--etcd-certfile AND --etcd-keyfile";;
+        "2.1")
+            echo "cert-file AND key-file";;
+        "2.2")
+            echo "true";;
+        "2.3")
+            echo "false";;
+        "2.4")
+            echo "peer-cert-file AND peer-key-file";;
+        "2.5")
+            echo "true";;
+        "2.6")
+            echo "--peer-auto-tls=false";;
+        "2.7")
+            echo "--trusted-ca-file";;
+    esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.3
+```
+
+**Expected Result**:
+
+```console
+'--auto-tls' is not present OR '--auto-tls' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+false
+```
+
+### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Follow the etcd service documentation and configure peer TLS encryption as appropriate
+for your etcd cluster.
+Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the
+master node and set the below parameters.
+--peer-cert-file=
+--peer-key-file=
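+
+As with check 2.1, the peer TLS settings can be spot-checked directly in the managed etcd config before running the full audit script below:
+
+```bash
+# Spot-check: confirm peer cert/key are configured for the managed etcd.
+grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config \
+  | grep -E 'cert-file|key-file'
+```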
+ +**Audit Script:** `check_for_k3s_etcd.sh` + +```bash +#!/bin/bash + +# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3) +# before it checks the requirement +set -eE + +handle_error() { + echo "false" +} + +trap 'handle_error' ERR + + +if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd' | grep -v grep | wc -l)" -gt 0 ]]; then + case $1 in + "1.1.11") + echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);; + "1.2.29") + echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');; + "2.1") + echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.2") + echo "$(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";; + "2.3") + echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.4") + echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');; + "2.5") + echo "$(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";; + "2.6") + echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);; + "2.7") + echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);; + esac +else +# If another database is running, return whatever is required to pass the scan + case $1 in + "1.1.11") + echo "700";; + "1.2.29") + echo "--etcd-certfile AND --etcd-keyfile";; + "2.1") + echo "cert-file AND key-file";; + "2.2") + echo "true";; + "2.3") + echo "false";; + "2.4") + echo "peer-cert-file AND peer-key-file";; + "2.5") + echo "true";; + "2.6") + echo "--peer-auto-tls=false";; + "2.7") + echo "--trusted-ca-file";; + esac +fi + +``` + +**Audit Execution:** + +```bash +./check_for_k3s_etcd.sh 2.4 +``` + +**Expected Result**: + +```console +'cert-file' is present AND 'key-file' is present +``` + +**Returned Value**: + +```console +cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key +``` + +### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) + + +**Result:** pass + +**Remediation:** +Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master +node and set the below parameter. 
+--peer-client-cert-auth=true
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+    echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd' | grep -v grep | wc -l)" -gt 0 ]]; then
+    case $1 in
+        "1.1.11")
+            echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+        "1.2.29")
+            echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+        "2.1")
+            echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+        "2.2")
+            echo "$(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";;
+        "2.3")
+            echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+        "2.4")
+            echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+        "2.5")
+            echo "$(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";;
+        "2.6")
+            echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+        "2.7")
+            echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+    esac
+else
+# If another database is running, return whatever is required to pass the scan
+    case $1 in
+        "1.1.11")
+            echo "700";;
+        "1.2.29")
+            echo "--etcd-certfile AND --etcd-keyfile";;
+        "2.1")
+            echo "cert-file AND key-file";;
+        "2.2")
+            echo "true";;
+        "2.3")
+            echo "false";;
+        "2.4")
+            echo "peer-cert-file AND peer-key-file";;
+        "2.5")
+            echo "true";;
+        "2.6")
+            echo "--peer-auto-tls=false";;
+        "2.7")
+            echo "--trusted-ca-file";;
+    esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.5
+```
+
+**Expected Result**:
+
+```console
+'client-cert-auth' is equal to 'true'
+```
+
+**Returned Value**:
+
+```console
+client-cert-auth: true
+```
+
+### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
+
+
+**Result:** pass
+
+**Remediation:**
+Edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the master
+node and either remove the --peer-auto-tls parameter or set it to false.
+--peer-auto-tls=false
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+    echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd' | grep -v grep | wc -l)" -gt 0 ]]; then
+    case $1 in
+        "1.1.11")
+            echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+        "1.2.29")
+            echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+        "2.1")
+            echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+        "2.2")
+            echo "$(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";;
+        "2.3")
+            echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+        "2.4")
+            echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+        "2.5")
+            echo "$(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";;
+        "2.6")
+            echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+        "2.7")
+            echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+    esac
+else
+# If another database is running, return whatever is required to pass the scan
+    case $1 in
+        "1.1.11")
+            echo "700";;
+        "1.2.29")
+            echo "--etcd-certfile AND --etcd-keyfile";;
+        "2.1")
+            echo "cert-file AND key-file";;
+        "2.2")
+            echo "true";;
+        "2.3")
+            echo "false";;
+        "2.4")
+            echo "peer-cert-file AND peer-key-file";;
+        "2.5")
+            echo "true";;
+        "2.6")
+            echo "--peer-auto-tls=false";;
+        "2.7")
+            echo "--trusted-ca-file";;
+    esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.6
+```
+
+**Expected Result**:
+
+```console
+'--peer-auto-tls' is not present OR '--peer-auto-tls' is equal to 'false'
+```
+
+**Returned Value**:
+
+```console
+false
+```
+
+### 2.7 Ensure that a unique Certificate Authority is used for etcd (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+[Manual test]
+Follow the etcd documentation and create a dedicated certificate authority setup for the
+etcd service.
+Then, edit the etcd pod specification file /var/lib/rancher/k3s/server/db/etcd/config on the
+master node and set the below parameter.
+--trusted-ca-file=
+
+**Audit Script:** `check_for_k3s_etcd.sh`
+
+```bash
+#!/bin/bash
+
+# This script is used to ensure that k3s is actually running etcd (and not other databases like sqlite3)
+# before it checks the requirement
+set -eE
+
+handle_error() {
+    echo "false"
+}
+
+trap 'handle_error' ERR
+
+
+if [[ "$(journalctl -D /var/log/journal -u k3s | grep 'Managed etcd' | grep -v grep | wc -l)" -gt 0 ]]; then
+    case $1 in
+        "1.1.11")
+            echo $(stat -c %a /var/lib/rancher/k3s/server/db/etcd);;
+        "1.2.29")
+            echo $(journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-');;
+        "2.1")
+            echo $(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+        "2.2")
+            echo "$(grep -A 5 'client-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";;
+        "2.3")
+            echo $(grep 'auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+        "2.4")
+            echo $(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep -E 'cert-file|key-file');;
+        "2.5")
+            echo "$(grep -A 5 'peer-transport-security' /var/lib/rancher/k3s/server/db/etcd/config | grep 'client-cert-auth')";;
+        "2.6")
+            echo $(grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config);;
+        "2.7")
+            echo $(grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config);;
+    esac
+else
+# If another database is running, return whatever is required to pass the scan
+    case $1 in
+        "1.1.11")
+            echo "700";;
+        "1.2.29")
+            echo "--etcd-certfile AND --etcd-keyfile";;
+        "2.1")
+            echo "cert-file AND key-file";;
+        "2.2")
+            echo "true";;
+        "2.3")
+            echo "false";;
+        "2.4")
+            echo "peer-cert-file AND peer-key-file";;
+        "2.5")
+            echo "true";;
+        "2.6")
+            echo "--peer-auto-tls=false";;
+        "2.7")
+            echo "--trusted-ca-file";;
+    esac
+fi
+
+```
+
+**Audit Execution:**
+
+```bash
+./check_for_k3s_etcd.sh 2.7
+```
+
+**Expected Result**:
+
+```console
+'trusted-ca-file' is present
+```
+
+**Returned Value**:
+
+```console
+trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt
+```
+
+## 3.1 Authentication and Authorization
+### 3.1.1 Client certificate authentication should not be used for users (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
+implemented in place of client certificates.
+
+## 3.2 Logging
+### 3.2.1 Ensure that a minimal audit policy is created (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Create an audit policy file for your cluster.
+
+**Audit:**
+
+```bash
+journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file'
+```
+
+### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
+
+
+**Result:** warn
+
+**Remediation:**
+Consider modification of the audit policy in use on the cluster to include these items, at a
+minimum.
+
+## 4.1 Worker Node Configuration Files
+### 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
+

**Result:** Not Applicable

**Remediation:**
-K3s doesn’t launch the kubelet as a service. It is launched and managed by the K3s supervisor process. All configuration is passed to it as command line arguments at run time.
+Run the below command (based on the file location on your system) on each worker node.
+For example, +chmod 644 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated) -#### 4.1.3 -Ensure that the proxy kubeconfig file permissions are set to `644` or more restrictive (Scored) -
-Rationale -The `kube-proxy` kubeconfig file controls various parameters of the `kube-proxy` service in the worker node. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. - -It is possible to run `kube-proxy` with the kubeconfig parameters configured as a Kubernetes ConfigMap instead of a file. In this case, there is no proxy kubeconfig file. -
**Result:** Not Applicable

+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+
+### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)
+
+
+**Result:** pass
+
+**Remediation:**
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 644 /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
+
**Audit:**
-Run the below command on the worker node.

```bash
stat -c %a /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'permissions' is present OR '/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig' is not present
+```
+
+**Returned Value**:
+
+```console
644
```

-Verify that if a file is specified and it exists, the permissions are 644 or more restrictive.
+### 4.1.4 Ensure that the proxy kubeconfig file ownership is set to root:root (Manual)
+
+
+**Result:** pass

**Remediation:**
-K3s runs `kube-proxy` in process and does not use a config file.
-
-
-#### 4.1.4
-Ensure that the proxy kubeconfig file ownership is set to `root:root` (Scored)
-
-Rationale -The kubeconfig file for `kube-proxy` controls various parameters for the `kube-proxy` service in the worker node. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`. -
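+In addition to the single-file commands in these 4.1.x checks, the K3s files audited in this section can be reviewed in one pass. A sketch; the paths are taken from the audit commands in this guide and only exist once the relevant components have started:
+
+```bash
+# Sketch: print mode and ownership for each K3s file audited in section 4.1.
+for f in /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig \
+         /var/lib/rancher/k3s/agent/kubelet.kubeconfig \
+         /var/lib/rancher/k3s/server/tls/server-ca.crt \
+         /var/lib/rancher/k3s/server/tls/client-ca.crt; do
+  [ -e "$f" ] && stat -c '%a %U:%G %n' "$f"
+done
+```
+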
-
-**Result:** Not Applicable
+Run the below command (based on the file location on your system) on each worker node.
+For example, chown root:root /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig

**Audit:**
-Run the below command on the master node.

```bash
stat -c %U:%G /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'root:root' is present OR '/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig' is not present
+```
+
+**Returned Value**:
+
+```console
root:root
```

-Verify that if a file is specified and it exists, the permissions are 644 or more restrictive.
+### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
+
+
+**Result:** pass

**Remediation:**
-K3s runs `kube-proxy` in process and does not use a config file.
-
-
-#### 4.1.5
-Ensure that the kubelet.conf file permissions are set to `644` or more restrictive (Scored)
-
-Rationale -The `kubelet.conf` file is the kubeconfig file for the node, and controls various parameters that set the behavior and identity of the worker node. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. -
-
-**Result:** Pass
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chmod 644 /var/lib/rancher/k3s/server/cred/admin.kubeconfig

**Audit:**
-Run the below command on the worker node.

```bash
stat -c %a /var/lib/rancher/k3s/agent/kubelet.kubeconfig
+```
+
+**Expected Result**:
+
+```console
+'644' is equal to '644'
+```
+
+**Returned Value**:
+
+```console
644
```

+### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Manual)
+
+
+**Result:** warn
+
**Remediation:**
-By default, K3s creates `kubelet.kubeconfig` with `644` permissions. No manual remediation needed.
-
-#### 4.1.6
-Ensure that the kubelet.conf file ownership is set to `root:root` (Scored)
-
-Rationale -The `kubelet.conf` file is the kubeconfig file for the node, and controls various parameters that set the behavior and identity of the worker node. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`. -
-
-**Result:** Not Applicable
+Run the below command (based on the file location on your system) on each worker node.
+For example,
+chown root:root /var/lib/rancher/k3s/server/cred/admin.kubeconfig

**Audit:**
-Run the below command on the master node.

```bash
stat -c %U:%G /var/lib/rancher/k3s/agent/kubelet.kubeconfig
-root:root
```

+### 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)
+
+
+**Result:** pass
+
**Remediation:**
-By default, K3s creates `kubelet.kubeconfig` with `root:root` ownership. No manual remediation needed.
-
-
-#### 4.1.7
-Ensure that the certificate authorities file permissions are set to `644` or more restrictive (Scored)
-
-Rationale -The certificate authorities file controls the authorities used to validate API requests. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. -
- -**Result:** Pass +Run the following command to modify the file permissions of the +--client-ca-file chmod 644 **Audit:** -Run the below command on the master node. ```bash stat -c %a /var/lib/rancher/k3s/server/tls/server-ca.crt +``` + +**Expected Result**: + +```console +'644' is equal to '644' OR '640' is present OR '600' is present OR '444' is present OR '440' is present OR '400' is present OR '000' is present +``` + +**Returned Value**: + +```console 644 ``` -Verify that the permissions are 644. +### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual) + + +**Result:** warn **Remediation:** -By default, K3s creates `/var/lib/rancher/k3s/server/tls/server-ca.crt` with `644` permissions. - - -#### 4.1.8 -Ensure that the client certificate authorities file ownership is set to `root:root` (Scored) -
-Rationale -The certificate authorities file controls the authorities used to validate API requests. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`. -
- -**Result:** Pass +Run the following command to modify the ownership of the --client-ca-file. +chown root:root **Audit:** -Run the below command on the master node. ```bash stat -c %U:%G /var/lib/rancher/k3s/server/tls/client-ca.crt -root:root ``` -**Remediation:** -By default, K3s creates `/var/lib/rancher/k3s/server/tls/client-ca.crt` with `root:root` ownership. +### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated) -#### 4.1.9 -Ensure that the kubelet configuration file has permissions set to `644` or more restrictive (Scored) -
-Rationale -The kubelet reads various parameters, including security settings, from a config file specified by the `--config` argument. If this file is specified you should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. -
- **Result:** Not Applicable **Remediation:** -K3s doesn’t require or maintain a configuration file for the kubelet process. All configuration is passed to it as command line arguments at run time. +Run the following command (using the config file location identified in the Audit step) +chmod 644 /var/lib/kubelet/config.yaml +### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated) -#### 4.1.10 -Ensure that the kubelet configuration file ownership is set to `root:root` (Scored) -
-Rationale -The kubelet reads various parameters, including security settings, from a config file specified by the `--config` argument. If this file is specified you should restrict its file permissions to maintain the integrity of the file. The file should be owned by `root:root`. -
**Result:** Not Applicable **Remediation:** -K3s doesn’t require or maintain a configuration file for the kubelet process. All configuration is passed to it as command line arguments at run time. +Run the following command (using the config file location identified in the Audit step) +chown root:root /var/lib/kubelet/config.yaml + +## 4.2 Kubelet +### 4.2.1 Ensure that the anonymous-auth argument is set to false (Automated) -### 4.2 Kubelet -This section contains recommendations for kubelet configuration. - - -#### 4.2.1 -Ensure that the `--anonymous-auth` argument is set to false (Scored) -
-Rationale -When enabled, requests that are not rejected by other configured authentication methods are treated as anonymous requests. These requests are then served by the Kubelet server. You should rely on authentication to authorize access and disallow anonymous requests. -
- -**Result:** Pass - -**Audit:** -Run the below command on the master node. - -```bash -journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" -``` - -Verify that the value for `--anonymous-auth` is false. +**Result:** pass **Remediation:** -By default, K3s starts kubelet with `--anonymous-auth` set to false. No manual remediation needed. - -#### 4.2.2 -Ensure that the `--authorization-mode` argument is not set to `AlwaysAllow` (Scored) -
-Rationale -Kubelets, by default, allow all authenticated requests (even anonymous ones) without needing explicit authorization checks from the apiserver. You should restrict this behavior and only allow explicitly authorized requests. -
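+In addition to the kubeadm-oriented guidance below: K3s has no kubelet service file, so on K3s these settings are controlled with `--kubelet-arg` on the k3s CLI. K3s already applies both values by default (see the audit output in this check and in 4.2.2); a hedged sketch if you need to set them explicitly:
+
+```bash
+# Sketch only: K3s defaults already match these values.
+k3s server \
+  --kubelet-arg='anonymous-auth=false' \
+  --kubelet-arg='authorization-mode=Webhook'
+```
+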
- -**Result:** Pass +If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to +false. +If using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--anonymous-auth=false +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service **Audit:** -Run the below command on the master node. ```bash -journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode" +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth' | grep -v grep ``` -Verify that `AlwaysAllow` is not present. +**Expected Result**: + +```console +'false' is equal to 'false' +``` + +**Returned Value**: + +```console +Feb 21 23:13:24 k3s[5223]: time="2022-02-21T23:13:24.847339487Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,NamespaceLifecycle,ServiceAccount --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) + + +**Result:** pass **Remediation:** -K3s starts kubelet with `Webhook` as the value for the `--authorization-mode` argument. No manual remediation needed. - - -#### 4.2.3 -Ensure that the `--client-ca-file` argument is set as appropriate (Scored) -
-Rationale -The connections from the apiserver to the kubelet are used for fetching logs for pods, attaching (through kubectl) to running pods, and using the kubelet’s port-forwarding functionality. These connections terminate at the kubelet’s HTTPS endpoint. By default, the apiserver does not verify the kubelet’s serving certificate, which makes the connection subject to man-in-the-middle attacks, and unsafe to run over untrusted and/or public networks. Enabling Kubelet certificate authentication ensures that the apiserver could authenticate the Kubelet before submitting any requests. -
- -**Result:** Pass +If using a Kubelet config file, edit the file to set authorization: mode to Webhook. If +using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--authorization-mode=Webhook +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service **Audit:** -Run the below command on the master node. ```bash -journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "client-ca-file" +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode' | grep -v grep ``` -Verify that the `--client-ca-file` argument has a ca file associated. +**Expected Result**: + +```console +'Node,RBAC' not have 'AlwaysAllow' +``` + +**Returned Value**: + +```console +Feb 21 23:13:24 k3s[5223]: time="2022-02-21T23:13:24.847339487Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,NamespaceLifecycle,ServiceAccount --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) + + +**Result:** pass **Remediation:** -By default, K3s starts the kubelet process with the `--client-ca-file`. No manual remediation needed. - - -#### 4.2.4 -Ensure that the `--read-only-port` argument is set to `0` (Scored) -
-Rationale -The Kubelet process provides a read-only API in addition to the main Kubelet API. Unauthenticated access is provided to this read-only API which could possibly retrieve potentially sensitive information about the cluster. -
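+On K3s the kubelet's client CA is generated and wired up automatically; the path below is taken from the kubelet command line shown in the audit output for checks 4.2.6 and 4.2.7. A sketch, only needed if you have to override it:
+
+```bash
+# Sketch only: K3s already passes this flag by default.
+k3s server --kubelet-arg='client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt'
+```
+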
- -**Result:** Pass +If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to +the location of the client CA file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_AUTHZ_ARGS variable. +--client-ca-file= +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service **Audit:** -Run the below command on the master node. ```bash -journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "read-only-port" +journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver'| tail -n1 | grep 'client-ca-file' | grep -v grep ``` -Verify that the `--read-only-port` argument is set to 0. + +**Expected Result**: + +```console +'--client-ca-file' is present +``` + +**Returned Value**: + +```console +Feb 21 23:13:24 k3s[5223]: time="2022-02-21T23:13:24.847339487Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction,PodSecurityPolicy,NamespaceLifecycle,ServiceAccount --encryption-provider-config=/var/lib/rancher/k3s/server/cred/encryption-config.json --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --request-timeout=300s --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-lookup=true --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" +``` + +### 4.2.4 Ensure that the --read-only-port argument is set to 0 (Manual) + + +**Result:** warn **Remediation:** -By default, K3s starts the kubelet process with the `--read-only-port` argument set to `0`. - - -#### 4.2.5 -Ensure that the `--streaming-connection-idle-timeout` argument is not set to `0` (Scored) -
-Rationale -Setting idle timeouts ensures that you are protected against Denial-of-Service attacks, inactive connections and running out of ephemeral ports. - -**Note:** By default, `--streaming-connection-idle-timeout` is set to 4 hours which might be too high for your environment. Setting this as appropriate would additionally ensure that such streaming connections are timed out after serving legitimate use cases. -
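+On K3s, both this setting and the timeout in check 4.2.5 below are again passed with `--kubelet-arg` rather than through a kubelet service file. A sketch; the values mirror the kubelet command line shown in the audit output for check 4.2.6:
+
+```bash
+# Sketch only: K3s sets read-only-port=0 by default; the timeout value
+# matches the one recommended in check 4.2.5.
+k3s server \
+  --kubelet-arg='read-only-port=0' \
+  --kubelet-arg='streaming-connection-idle-timeout=5m'
+```
+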
- -**Result:** Pass +If using a Kubelet config file, edit the file to set readOnlyPort to 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--read-only-port=0 +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service **Audit:** -Run the below command on the master node. ```bash -journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "streaming-connection-idle-timeout" +journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'read-only-port' ``` -Verify that there's nothing returned. +### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual) + + +**Result:** warn **Remediation:** -By default, K3s does not set `--streaming-connection-idle-timeout` when starting kubelet. - - -#### 4.2.6 -Ensure that the `--protect-kernel-defaults` argument is set to `true` (Scored) -
-Rationale -Kernel parameters are usually tuned and hardened by the system administrators before putting the systems into production. These parameters protect the kernel and the system. Your kubelet kernel defaults that rely on such parameters should be appropriately set to match the desired secured system state. Ignoring this could potentially lead to running pods with undesired kernel behavior. -
- -**Result:** Pass +If using a Kubelet config file, edit the file to set streamingConnectionIdleTimeout to a +value other than 0. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--streaming-connection-idle-timeout=5m +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service **Audit:** -Run the below command on the master node. ```bash -journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "protect-kernel-defaults" +journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'streaming-connection-idle-timeout' ``` +### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated) + + +**Result:** pass + **Remediation:** -K3s server needs to be started with the following, `--protect-kernel-defaults=true`. - - -#### 4.2.7 -Ensure that the `--make-iptables-util-chains` argument is set to `true` (Scored) -
-Rationale -Kubelets can automatically manage the required changes to iptables based on how you choose your networking options for the pods. It is recommended to let kubelets manage the changes to iptables. This ensures that the iptables configuration remains in sync with pods networking configuration. Manually configuring iptables with dynamic pod network configuration changes might hamper the communication between pods/containers and to the outside world. You might have iptables rules too restrictive or too open. -
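+On K3s this check is typically satisfied by starting the server with `--protect-kernel-defaults=true` and pre-setting the kernel parameters the kubelet then expects. A sketch; the exact sysctl set below is an assumption drawn from the K3s hardening guidance and should be validated for your distribution:
+
+```bash
+# Sketch: kernel settings commonly required when protect-kernel-defaults is on.
+cat > /etc/sysctl.d/90-kubelet.conf <<'EOF'
+vm.panic_on_oom=0
+vm.overcommit_memory=1
+kernel.panic=10
+kernel.panic_on_oops=1
+EOF
+sysctl -p /etc/sysctl.d/90-kubelet.conf
+
+k3s server --protect-kernel-defaults=true
+```
+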
- -**Result:** Pass +If using a Kubelet config file, edit the file to set protectKernelDefaults: true. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +--protect-kernel-defaults=true +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service **Audit:** -Run the below command on the master node. ```bash -journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "make-iptables-util-chains" +journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'protect-kernel-defaults' ``` -Verify there are no results returned. +**Expected Result**: + +```console +'true' is equal to 'true' +``` + +**Returned Value**: + +```console +Feb 21 23:13:32 k3s[5223]: time="2022-02-21T23:13:32.581127632Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/9de9bfcf367b723ef0ac73dd91761165a4a8ad11ad16a758d3a996264e60c612/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override= --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --streaming-connection-idle-timeout=5m --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key" +``` + +### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated) + + +**Result:** pass **Remediation:** -K3s server needs to be run with the following, `--kube-apiserver-arg='make-iptables-util-chains=true'`. +If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove the --make-iptables-util-chains argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. 
For example: +systemctl daemon-reload +systemctl restart kubelet.service +**Audit:** + +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'make-iptables-util-chains' +``` + +**Expected Result**: + +```console +'true' is equal to 'true' OR '--make-iptables-util-chains' is not present +``` + +**Returned Value**: + +```console +Feb 21 23:13:32 k3s[5223]: time="2022-02-21T23:13:32.581127632Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/9de9bfcf367b723ef0ac73dd91761165a4a8ad11ad16a758d3a996264e60c612/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override= --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --protect-kernel-defaults=true --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --streaming-connection-idle-timeout=5m --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key" +``` + +### 4.2.8 Ensure that the --hostname-override argument is not set (Manual) -#### 4.2.8 -Ensure that the `--hostname-override` argument is not set (Not Scored) -
-Rationale -Overriding hostnames could potentially break TLS setup between the kubelet and the apiserver. Additionally, with overridden hostnames, it becomes increasingly difficult to associate logs with a particular node and process them for security analytics. Hence, you should setup your kubelet nodes with resolvable FQDNs and avoid overriding the hostnames with IPs. -
**Result:** Not Applicable **Remediation:** -K3s does set this parameter for each host, but K3s also manages all certificates in the cluster. It ensures the hostname-override is included as a subject alternative name (SAN) in the kubelet's certificate. +Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +on each worker node and remove the --hostname-override argument from the +KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +### 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual) -#### 4.2.9 -Ensure that the `--event-qps` argument is set to 0 or a level which ensures appropriate event capture (Not Scored) -
-Rationale -It is important to capture all events and not restrict event creation. Events are an important source of security information and analytics that ensure that your environment is consistently monitored using the event data. -
- -**Result:** Not Scored - Operator Dependent +**Result:** warn **Remediation:** -See CIS Benchmark guide for further details on configuring this. - -#### 4.2.10 -Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate (Scored) -
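+If you do want to pin the event record QPS on K3s (control 4.2.9 above), the kubelet flag can be set at startup; a hedged sketch:
+
+```bash
+# 0 disables event rate limiting entirely; pick a bounded value in
+# production if event flooding is a concern.
+k3s server --kubelet-arg="event-qps=0"
+```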
-Rationale -Kubelet communication contains sensitive parameters that should remain encrypted in transit. Configure the Kubelets to serve only HTTPS traffic. -
- -**Result:** Pass +If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service **Audit:** -Run the below command on the master node. ```bash -journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep -E 'tls-cert-file|tls-private-key-file' +/bin/ps -fC containerd ``` -Verify the `--tls-cert-file` and `--tls-private-key-file` arguments are present and set appropriately. +### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual) + + +**Result:** warn **Remediation:** -By default, K3s sets the `--tls-cert-file` and `--tls-private-key-file` arguments when executing the kubelet process. +If using a Kubelet config file, edit the file to set tlsCertFile to the location +of the certificate file to use to identify this Kubelet, and tlsPrivateKeyFile +to the location of the corresponding private key file. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the below parameters in KUBELET_CERTIFICATE_ARGS variable. +--tls-cert-file= +--tls-private-key-file= +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service +**Audit:** -#### 4.2.11 -Ensure that the `--rotate-certificates` argument is not set to `false` (Scored) -
-Rationale +```bash +journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 +``` -The `--rotate-certificates` setting causes the kubelet to rotate its client certificates by creating new CSRs as its existing credentials expire. This automated periodic rotation ensures that there is no downtime due to expired certificates and thus addressing availability in the CIA security triad. +### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Manual) -**Note:** This recommendation only applies if you let kubelets get their certificates from the API server. In case your kubelet certificates come from an outside authority/tool (e.g. Vault) then you need to take care of rotation yourself. - -**Note:**This feature also requires the `RotateKubeletClientCertificate` feature gate to be enabled (which is the default since Kubernetes v1.7) -
**Result:** Not Applicable **Remediation:** -By default, K3s implements its own logic for certificate generation and rotation. +If using a Kubelet config file, edit the file to add the line rotateCertificates: true or +remove it altogether to use the default value. +If using command line arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS +variable. +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service +### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual) -#### 4.2.12 -Ensure that the `RotateKubeletServerCertificate` argument is set to `true` (Scored) -
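+Because K3s manages these certificates itself, a useful operator check is simply to confirm the kubelet client certificate is current. A minimal sketch, assuming the default K3s data directory:
+
+```bash
+# Print the expiry of the kubelet client certificate managed by K3s.
+openssl x509 -enddate -noout -in /var/lib/rancher/k3s/agent/client-kubelet.crt
+```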
-Rationale -`RotateKubeletServerCertificate` causes the kubelet to both request a serving certificate after bootstrapping its client credentials and rotate the certificate as its existing credentials expire. This automated periodic rotation ensures that there are no downtimes due to expired certificates and thus addressing availability in the CIA security triad. - -Note: This recommendation only applies if you let kubelets get their certificates from the API server. In case your kubelet certificates come from an outside authority/tool (e.g. Vault) then you need to take care of rotation yourself. -
**Result:** Not Applicable **Remediation:** -By default, K3s implements its own logic for certificate generation and rotation. +Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable. +--feature-gates=RotateKubeletServerCertificate=true +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service + +### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual) -#### 4.2.13 -Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Not Scored) -
-Rationale -TLS ciphers have had a number of known vulnerabilities and weaknesses, which can reduce the protection provided by them. By default Kubernetes supports a number of TLS ciphersuites including some that have security concerns, weakening the protection provided. -
- -**Result:** Not Scored - Operator Dependent +**Result:** warn **Remediation:** -Configuration of the parameter is dependent on your use case. Please see the CIS Kubernetes Benchmark for suggestions on configuring this for your use-case. - - -## 5 Kubernetes Policies - - -### 5.1 RBAC and Service Accounts - - -#### 5.1.1 -Ensure that the cluster-admin role is only used where required (Not Scored) -
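+To restrict the kubelet's cipher suites on K3s (control 4.2.13), the flag is again passed through the CLI; a sketch using a conservative subset:
+
+```bash
+k3s server --kubelet-arg="tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
+```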
-Rationale -Kubernetes provides a set of default roles where RBAC is used. Some of these roles such as `cluster-admin` provide wide-ranging privileges which should only be applied where absolutely necessary. Roles such as `cluster-admin` allow super-user access to perform any action on any resource. When used in a `ClusterRoleBinding`, it gives full control over every resource in the cluster and in all namespaces. When used in a `RoleBinding`, it gives full control over every resource in the rolebinding's namespace, including the namespace itself. -
- -**Result:** Pass - -**Remediation:** -K3s does not make inappropriate use of the cluster-admin role. Operators must audit their workloads of additional usage. See the CIS Benchmark guide for more details. - -#### 5.1.2 -Minimize access to secrets (Not Scored) -
-Rationale -Inappropriate access to secrets stored within the Kubernetes cluster can allow for an attacker to gain additional access to the Kubernetes cluster or external resources whose credentials are stored as secrets. -
- -**Result:** Not Scored - Operator Dependent - -**Remediation:** -K3s limits its use of secrets for the system components appropriately, but operators must audit the use of secrets by their workloads. See the CIS Benchmark guide for more details. - -#### 5.1.3 -Minimize wildcard use in Roles and ClusterRoles (Not Scored) -
-Rationale -The principle of least privilege recommends that users are provided only the access required for their role and nothing more. The use of wildcard rights grants is likely to provide excessive rights to the Kubernetes API. -
- -**Result:** Not Scored - Operator Dependent +If using a Kubelet config file, edit the file to set TLSCipherSuites: to +TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +or to a subset of these values. +If using executable arguments, edit the kubelet service file +/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and +set the --tls-cipher-suites parameter as follows, or to a subset of these values. +--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 +Based on your system, restart the kubelet service. For example: +systemctl daemon-reload +systemctl restart kubelet.service **Audit:** -Run the below command on the master node. ```bash -# Retrieve the roles defined across each namespaces in the cluster and review for wildcards -kubectl get roles --all-namespaces -o yaml - -# Retrieve the cluster roles defined in the cluster and review for wildcards -kubectl get clusterroles -o yaml +/bin/ps -fC containerd ``` -Verify that there are not wildcards in use. +## 5.1 RBAC and Service Accounts +### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) + + +**Result:** warn **Remediation:** -Operators should review their workloads for proper role usage. See the CIS Benchmark guide for more details. +Identify all clusterrolebindings to the cluster-admin role. Check if they are used and +if they need this role or if they could use a role with fewer privileges. +Where possible, first bind users to a lower privileged role and then remove the +clusterrolebinding to the cluster-admin role : +kubectl delete clusterrolebinding [name] -#### 5.1.4 -Minimize access to create pods (Not Scored) -
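+A quick way to enumerate the bindings this control asks you to review:
+
+```bash
+# List ClusterRoleBindings that grant cluster-admin, with their subjects.
+kubectl get clusterrolebindings -o json \
+  | jq -r '.items[] | select(.roleRef.name == "cluster-admin") | .metadata.name + " -> " + ([.subjects[]? | .kind + "/" + .name] | join(", "))'
+```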
-Rationale -The ability to create pods in a cluster opens up possibilities for privilege escalation and should be restricted, where possible. -
+### 5.1.2 Minimize access to secrets (Manual) -**Result:** Not Scored - Operator Dependent + +**Result:** warn **Remediation:** -Operators should review who has access to create pods in their cluster. See the CIS Benchmark guide for more details. +Where possible, remove get, list and watch access to secret objects in the cluster. -#### 5.1.5 -Ensure that default service accounts are not actively used. (Scored) -
-Rationale -Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. +### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) -Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. -The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments. -
+**Result:** warn -**Result:** Fail. Currently requires operator intervention See the [Hardening Guide]({{}}/k3s/latest/en/security/hardening_guide) for details. +**Remediation:** +Where possible replace any use of wildcards in clusterroles and roles with specific +objects or actions. -**Audit:** -For each namespace in the cluster, review the rights assigned to the default service account and ensure that it has no roles or cluster roles bound to it apart from the defaults. Additionally ensure that the automountServiceAccountToken: false setting is in place for each default service account. +### 5.1.4 Minimize access to create pods (Manual) + + +**Result:** warn + +**Remediation:** +Where possible, remove create access to pod objects in the cluster. + +### 5.1.5 Ensure that default service accounts are not actively used. (Manual) + + +**Result:** warn **Remediation:** Create explicit service accounts wherever a Kubernetes workload requires specific access to the Kubernetes API server. Modify the configuration of each default service account to include this value - -``` bash automountServiceAccountToken: false -``` + +### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) -#### 5.1.6 -Ensure that Service Account Tokens are only mounted where necessary (Not Scored) -
-Rationale -Mounting service account tokens inside pods can provide an avenue for privilege escalation attacks where an attacker is able to compromise a single pod in the cluster. - -Avoiding mounting these tokens removes this attack avenue. -
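+For controls 5.1.5 and 5.1.6, the `automountServiceAccountToken` setting can be applied in bulk. This is a sketch, not a drop-in command for every environment:
+
+```bash
+# Disable token automounting on the default ServiceAccount of every namespace.
+for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
+  kubectl -n "$ns" patch serviceaccount default -p '{"automountServiceAccountToken": false}'
+done
+```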
- -**Result:** Not Scored - Operator Dependent +**Result:** warn **Remediation:** -The pods launched by K3s are part of the control plane and generally need access to communicate with the API server, thus this control does not apply to them. Operators should review their workloads and take steps to modify the definition of pods and service accounts which do not need to mount service account tokens to disable it. +Modify the definition of pods and service accounts which do not need to mount service +account tokens to disable it. -### 5.2 Pod Security Policies +## 5.2 Pod Security Policies +### 5.2.1 Minimize the admission of privileged containers (Manual) -#### 5.2.1 -Minimize the admission of containers wishing to share the host process ID namespace (Scored) -
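+The PSP-based controls in section 5.2 below can all be satisfied by a single restrictive policy. The following is a minimal sketch (the name `example-restricted-psp` is illustrative, and PodSecurityPolicy only exists on Kubernetes versions prior to v1.25):
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+  name: example-restricted-psp
+spec:
+  privileged: false
+  allowPrivilegeEscalation: false
+  hostPID: false
+  hostIPC: false
+  hostNetwork: false
+  requiredDropCapabilities:
+    - ALL
+  runAsUser:
+    rule: MustRunAsNonRoot
+  seLinux:
+    rule: RunAsAny
+  supplementalGroups:
+    rule: MustRunAs
+    ranges:
+      - min: 1
+        max: 65535
+  fsGroup:
+    rule: MustRunAs
+    ranges:
+      - min: 1
+        max: 65535
+  volumes:
+    - configMap
+    - emptyDir
+    - projected
+    - secret
+    - persistentVolumeClaim
+EOF
+```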
-Rationale -Privileged containers have access to all Linux Kernel capabilities and devices. A container running with full privileges can do almost everything that the host can do. This flag exists to allow special use-cases, like manipulating the network stack and accessing devices. +**Result:** warn -There should be at least one PodSecurityPolicy (PSP) defined which does not permit privileged containers. - -If you need to run privileged containers, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. -
- -**Result:** Pass +**Remediation:** +Create a PSP as described in the Kubernetes documentation, ensuring that +the .spec.privileged field is omitted or set to false. **Audit:** -Run the below command on the master node. ```bash -kubectl describe psp | grep MustRunAsNonRoot +kubectl describe psp global-restricted-psp | grep MustRunAsNonRoot ``` -Verify that the result is `Rule: MustRunAsNonRoot`. +### 5.2.2 Minimize the admission of containers wishing to share the host process ID namespace (Manual) + + +**Result:** pass **Remediation:** -An operator should apply a PodSecurityPolicy that sets the `Rule` value to `MustRunAsNonRoot`. An example of this can be found in the [Hardening Guide](../hardening_guide/). - - -#### 5.2.2 -Minimize the admission of containers wishing to share the host process ID namespace (Scored) -
-Rationale -A container running in the host's PID namespace can inspect processes running outside the container. If the container also has access to ptrace capabilities this can be used to escalate privileges outside of the container. - -There should be at least one PodSecurityPolicy (PSP) defined which does not permit containers to share the host PID namespace. - -If you need to run containers which require hostPID, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. -
- -**Result:** Pass +Create a PSP as described in the Kubernetes documentation, ensuring that the +.spec.hostPID field is omitted or set to false. **Audit:** -Run the below command on the master node. ```bash kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' ``` -Verify that the returned count is 1. +**Expected Result**: + +```console +1 is greater than 0 +``` + +**Returned Value**: + +```console +--count=1 +``` + +### 5.2.3 Minimize the admission of containers wishing to share the host IPC namespace (Manual) + + +**Result:** pass **Remediation:** -An operator should apply a PodSecurityPolicy that sets the `hostPID` value to false explicitly for the PSP it creates. An example of this can be found in the [Hardening Guide](../hardening_guide/). - - -#### 5.2.3 -Minimize the admission of containers wishing to share the host IPC namespace (Scored) -
-Rationale - -A container running in the host's IPC namespace can use IPC to interact with processes outside the container. - -There should be at least one PodSecurityPolicy (PSP) defined which does not permit containers to share the host IPC namespace. - -If you have a requirement to containers which require hostIPC, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. -
- -**Result:** Pass +Create a PSP as described in the Kubernetes documentation, ensuring that the +.spec.hostIPC field is omitted or set to false. **Audit:** -Run the below command on the master node. ```bash kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' ``` -Verify that the returned count is 1. +**Expected Result**: + +```console +1 is greater than 0 +``` + +**Returned Value**: + +```console +--count=1 +``` + +### 5.2.4 Minimize the admission of containers wishing to share the host network namespace (Manual) + + +**Result:** pass **Remediation:** -An operator should apply a PodSecurityPolicy that sets the `HostIPC` value to false explicitly for the PSP it creates. An example of this can be found in the [Hardening Guide](../hardening_guide/). - - -#### 5.2.4 -Minimize the admission of containers wishing to share the host network namespace (Scored) -
-Rationale -A container running in the host's network namespace could access the local loopback device, and could access network traffic to and from other pods. - -There should be at least one PodSecurityPolicy (PSP) defined which does not permit containers to share the host network namespace. - -If you have need to run containers which require hostNetwork, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. -
- -**Result:** Pass +Create a PSP as described in the Kubernetes documentation, ensuring that the +.spec.hostNetwork field is omitted or set to false. **Audit:** -Run the below command on the master node. ```bash kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' ``` -Verify that the returned count is 1. +**Expected Result**: + +```console +1 is greater than 0 +``` + +**Returned Value**: + +```console +--count=1 +``` + +### 5.2.5 Minimize the admission of containers with allowPrivilegeEscalation (Manual) + + +**Result:** pass **Remediation:** -An operator should apply a PodSecurityPolicy that sets the `HostNetwork` value to false explicitly for the PSP it creates. An example of this can be found in the [Hardening Guide](../hardening_guide/). - - -#### 5.2.5 -Minimize the admission of containers with `allowPrivilegeEscalation` (Scored) -
-Rationale -A container running with the `allowPrivilegeEscalation` flag set to true may have processes that can gain more privileges than their parent. - -There should be at least one PodSecurityPolicy (PSP) defined which does not permit containers to allow privilege escalation. The option exists (and is defaulted to true) to permit setuid binaries to run. - -If you have need to run containers which use setuid binaries or require privilege escalation, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. -
- -**Result:** Pass +Create a PSP as described in the Kubernetes documentation, ensuring that the +.spec.allowPrivilegeEscalation field is omitted or set to false. **Audit:** -Run the below command on the master node. ```bash kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' ``` -Verify that the returned count is 1. +**Expected Result**: + +```console +1 is greater than 0 +``` + +**Returned Value**: + +```console +--count=1 +``` + +### 5.2.6 Minimize the admission of root containers (Manual) + + +**Result:** pass **Remediation:** -An operator should apply a PodSecurityPolicy that sets the `allowPrivilegeEscalation` value to false explicitly for the PSP it creates. An example of this can be found in the [Hardening Guide](../hardening_guide/). - - -#### 5.2.6 -Minimize the admission of root containers (Not Scored) -
-Rationale -Containers may run as any Linux user. Containers which run as the root user, whilst constrained by Container Runtime security features still have an escalated likelihood of container breakout. - -Ideally, all containers should run as a defined non-UID 0 user. - -There should be at least one PodSecurityPolicy (PSP) defined which does not permit root users in a container. - -If you need to run root containers, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. -
- -**Result:** Not Scored +Create a PSP as described in the Kubernetes documentation, ensuring that the +.spec.runAsUser.rule is set to either MustRunAsNonRoot or MustRunAs with the range of +UIDs not including 0. **Audit:** -Run the below command on the master node. ```bash kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' ``` -Verify that the returned count is 1. +**Expected Result**: -**Remediation:** -An operator should apply a PodSecurityPolicy that sets the `runAsUser.Rule` value to `MustRunAsNonRoot`. An example of this can be found in the [Hardening Guide](../hardening_guide/). - - -#### 5.2.7 -Minimize the admission of containers with the NET_RAW capability (Not Scored) -
-Rationale -Containers run with a default set of capabilities as assigned by the Container Runtime. By default this can include potentially dangerous capabilities. With Docker as the container runtime the NET_RAW capability is enabled which may be misused by malicious containers. - -Ideally, all containers should drop this capability. - -There should be at least one PodSecurityPolicy (PSP) defined which prevents containers with the NET_RAW capability from launching. - -If you need to run containers with this capability, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. -
- -**Result:** Not Scored - -**Audit:** -Run the below command on the master node. - -```bash -kubectl get psp -o json | jq .spec.requiredDropCapabilities[] +```console +1 is greater than 0 ``` -Verify the value is `"ALL"`. +**Returned Value**: + +```console +--count=1 +``` + +### 5.2.7 Minimize the admission of containers with the NET_RAW capability (Manual) + + +**Result:** warn **Remediation:** -An operator should apply a PodSecurityPolicy that sets `.spec.requiredDropCapabilities[]` to a value of `All`. An example of this can be found in the [Hardening Guide](../hardening_guide/). - - -#### 5.2.8 -Minimize the admission of containers with added capabilities (Not Scored) -
-Rationale -Containers run with a default set of capabilities as assigned by the Container Runtime. Capabilities outside this set can be added to containers which could expose them to risks of container breakout attacks. - -There should be at least one PodSecurityPolicy (PSP) defined which prevents containers with capabilities beyond the default set from launching. - -If you need to run containers with additional capabilities, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. -
- -**Result:** Not Scored +Create a PSP as described in the Kubernetes documentation, ensuring that the +.spec.requiredDropCapabilities is set to include either NET_RAW or ALL. **Audit:** -Run the below command on the master node. ```bash kubectl get psp ``` -Verify that there are no PSPs present which have `allowedCapabilities` set to anything other than an empty array. +### 5.2.8 Minimize the admission of containers with added capabilities (Manual) + + +**Result:** warn **Remediation:** -An operator should apply a PodSecurityPolicy that sets `allowedCapabilities` to anything other than an empty array. An example of this can be found in the [Hardening Guide](../hardening_guide/). +Ensure that allowedCapabilities is not present in PSPs for the cluster unless +it is set to an empty array. + +### 5.2.9 Minimize the admission of containers with capabilities assigned (Manual) -#### 5.2.9 -Minimize the admission of containers with capabilities assigned (Not Scored) -
-Rationale -Containers run with a default set of capabilities as assigned by the Container Runtime. Capabilities are parts of the rights generally granted on a Linux system to the root user. +**Result:** warn -In many cases applications running in containers do not require any capabilities to operate, so from the perspective of the principle of least privilege use of capabilities should be minimized. -
+**Remediation:** +Review the use of capabilites in applications runnning on your cluster. Where a namespace +contains applicaions which do not require any Linux capabities to operate consider adding +a PSP which forbids the admission of containers which do not drop all capabilities. -**Result:** Not Scored +## 5.3 Network Policies and CNI +### 5.3.1 Ensure that the CNI in use supports Network Policies (Manual) -**Audit:** -Run the below command on the master node. + +**Result:** warn + +**Remediation:** +If the CNI plugin in use does not support network policies, consideration should be given to +making use of a different plugin, or finding an alternate mechanism for restricting traffic +in the Kubernetes cluster. + +### 5.3.2 Ensure that all Namespaces have Network Policies defined (Manual) + + +**Result:** pass + +**Remediation:** +Follow the documentation and create NetworkPolicy objects as you need them. + +**Audit Script:** `check_for_rke2_network_policies.sh` ```bash -kubectl get psp -``` +#!/bin/bash -**Remediation:** -An operator should apply a PodSecurityPolicy that sets `requiredDropCapabilities` to `ALL`. An example of this can be found in the [Hardening Guide](../hardening_guide/). +set -eE +handle_error() { + echo "false" +} -### 5.3 Network Policies and CNI +trap 'handle_error' ERR - -#### 5.3.1 -Ensure that the CNI in use supports Network Policies (Not Scored) -
-Rationale -Kubernetes network policies are enforced by the CNI plugin in use. As such it is important to ensure that the CNI plugin supports both Ingress and Egress network policies. -
- -**Result:** Pass - -**Audit:** -Review the documentation of CNI plugin in use by the cluster, and confirm that it supports Ingress and Egress network policies. - -**Remediation:** -By default, K3s use Canal (Calico and Flannel) and fully supports network policies. - - -#### 5.3.2 -Ensure that all Namespaces have Network Policies defined (Scored) -
-Rationale -Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. A network policy is a specification of how selections of pods are allowed to communicate with each other and other network endpoints. - -Network Policies are namespace scoped. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace all traffic will be allowed into and out of the pods in that namespace. -
- -**Result:** Pass - -**Audit:** -Run the below command on the master node. - -```bash -for i in kube-system kube-public default; do - kubectl get networkpolicies -n $i; +for namespace in kube-system kube-public default; do + policy_count=$(/var/lib/rancher/rke2/bin/kubectl get networkpolicy -n ${namespace} -o json | jq -r '.items | length') + if [ ${policy_count} -eq 0 ]; then + echo "false" + exit + fi done + +echo "true" + ``` -Verify that there are network policies applied to each of the namespaces. - -**Remediation:** -An operator should apply NetworkPolcyies that prevent unneeded traffic from traversing networks unnecessarily. An example of applying a NetworkPolcy can be found in the [Hardening Guide](../hardening_guide/). - -### 5.4 Secrets Management - - -#### 5.4.1 -Prefer using secrets as files over secrets as environment variables (Not Scored) -
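+For namespaces that fail the audit above, a default-deny policy is a common starting point; a minimal sketch:
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress
+  namespace: default
+spec:
+  podSelector: {}
+  policyTypes:
+    - Ingress
+EOF
+```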
-Rationale -It is reasonably common for application code to log out its environment (particularly in the event of an error). This will include any secret values passed in as environment variables, so secrets can easily be exposed to any user or entity who has access to the logs. -
-
-**Result:** Not Scored
+**Result:** warn

-**Audit:**
-Run the following command to find references to objects which use environment variables defined from secrets.

**Remediation:**
-If possible, rewrite application code to read secrets from mounted secret files, rather than from environment variables.
+If possible, rewrite application code to read secrets from mounted secret files, rather than
+from environment variables.

**Audit:**

```bash
-kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}' -A
+kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}' -A
```

+### 5.4.2 Consider external secret storage (Manual)

-#### 5.4.2
-Consider external secret storage (Not Scored)
-
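+A hedged example of the files-over-environment pattern, assuming a Secret named `demo-secret` already exists:
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-file-demo
+spec:
+  containers:
+    - name: app
+      image: busybox
+      command: ["sh", "-c", "cat /etc/demo-secret/* && sleep 3600"]
+      volumeMounts:
+        - name: demo-secret
+          mountPath: /etc/demo-secret
+          readOnly: true
+  volumes:
+    - name: demo-secret
+      secret:
+        secretName: demo-secret
+EOF
+```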
-Rationale -Kubernetes supports secrets as first-class objects, but care needs to be taken to ensure that access to secrets is carefully limited. Using an external secrets provider can ease the management of access to secrets, especially where secrets are used across both Kubernetes and non-Kubernetes environments. -
- -**Result:** Not Scored - -**Audit:** -Review your secrets management implementation. +**Result:** warn **Remediation:** -Refer to the secrets management options offered by your cloud provider or a third-party secrets management solution. - - -### 5.5 Extensible Admission Control - - -#### 5.5.1 -Configure Image Provenance using ImagePolicyWebhook admission controller (Not Scored) -
-Rationale -Kubernetes supports plugging in provenance rules to accept or reject the images in your deployments. You could configure such rules to ensure that only approved images are deployed in the cluster. -
- -**Result:** Not Scored +if possible, rewrite application code to read secrets from mounted secret files, rather than +from environment variables. **Audit:** -Review the pod definitions in your cluster and verify that image _provenance_ is configured as appropriate. + +```bash +kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {' '}{end}' -A +``` + +### 5.4.2 Consider external secret storage (Manual) + + +**Result:** warn + +**Remediation:** +Refer to the secrets management options offered by your cloud provider or a third-party +secrets management solution. + +## 5.5 Extensible Admission Control +### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual) + + +**Result:** warn **Remediation:** Follow the Kubernetes documentation and setup image provenance. - -### 5.6 Omitted -The v1.5.1 Benchmark skips 5.6 and goes from 5.5 to 5.7. We are including it here merely for explanation. +## 5.7 General Policies +### 5.7.1 Create administrative boundaries between resources using namespaces (Manual) -### 5.7 General Policies -These policies relate to general cluster management topics, like namespace best practices and policies applied to pod objects in the cluster. +**Result:** warn + +**Remediation:** +Follow the documentation and create namespaces for objects in your deployment as you need +them. + +### 5.7.2 Ensure that the seccomp profile is set to docker/default in your pod definitions (Manual) -#### 5.7.1 -Create administrative boundaries between resources using namespaces (Not Scored) -
-Rationale -Limiting the scope of user permissions can reduce the impact of mistakes or malicious activities. A Kubernetes namespace allows you to partition created resources into logically named groups. Resources created in one namespace can be hidden from other namespaces. By default, each resource created by a user in Kubernetes cluster runs in a default namespace, called default. You can create additional namespaces and attach resources and users to them. You can use Kubernetes Authorization plugins to create policies that segregate access to namespace resources between different users. -
+**Result:** warn -**Result:** Not Scored +**Remediation:** +Seccomp is an alpha feature currently. By default, all alpha features are disabled. So, you +would need to enable alpha features in the apiserver by passing "--feature- +gates=AllAlpha=true" argument. +Edit the /etc/kubernetes/apiserver file on the master node and set the KUBE_API_ARGS +parameter to "--feature-gates=AllAlpha=true" +KUBE_API_ARGS="--feature-gates=AllAlpha=true" +Based on your system, restart the kube-apiserver service. For example: +systemctl restart kube-apiserver.service +Use annotations to enable the docker/default seccomp profile in your pod definitions. An +example is as below: +apiVersion: v1 +kind: Pod +metadata: + name: trustworthy-pod + annotations: + seccomp.security.alpha.kubernetes.io/pod: docker/default +spec: + containers: + - name: trustworthy-container + image: sotrustworthy:latest + +### 5.7.3 Apply Security Context to Your Pods and Containers (Manual) + + +**Result:** warn + +**Remediation:** +Follow the Kubernetes documentation and apply security contexts to your pods. For a +suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker +Containers. + +### 5.7.4 The default namespace should not be used (Manual) + + +**Result:** pass + +**Remediation:** +Ensure that namespaces are created to allow for appropriate segregation of Kubernetes +resources and that all new resources are created in a specific namespace. **Audit:** -Run the below command and review the namespaces created in the cluster. ```bash -kubectl get namespaces +kubectl get all --no-headers -n default | grep -v service | wc -l | xargs -I {} echo '--count={}' ``` -Ensure that these namespaces are the ones you need and are adequately administered as per your requirements. +**Expected Result**: -**Remediation:** -Follow the documentation and create namespaces for objects in your deployment as you need them. - - -#### 5.7.2 -Ensure that the seccomp profile is set to `docker/default` in your pod definitions (Not Scored) -
-Rationale -Seccomp (secure computing mode) is used to restrict the set of system calls applications can make, allowing cluster administrators greater control over the security of workloads running in the cluster. Kubernetes disables seccomp profiles by default for historical reasons. You should enable it to ensure that the workloads have restricted actions available within the container. -
- -**Result:** Not Scored - -**Audit:** -Review the pod definitions in your cluster. It should create a line as below: - -```yaml -annotations: - seccomp.security.alpha.kubernetes.io/pod: docker/default +```console +'0' is equal to '0' ``` -**Remediation:** -Review the Kubernetes documentation and if needed, apply a relevant PodSecurityPolicy. +**Returned Value**: -#### 5.7.3 -Apply Security Context to Your Pods and Containers (Not Scored) -
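+Note that on current Kubernetes versions the seccomp annotation shown above is deprecated; seccomp is set through the pod `securityContext` instead. A minimal sketch:
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: seccomp-demo
+spec:
+  securityContext:
+    seccompProfile:
+      type: RuntimeDefault
+  containers:
+    - name: app
+      image: busybox
+      command: ["sleep", "3600"]
+EOF
+```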
-Rationale -A security context defines the operating system security settings (uid, gid, capabilities, SELinux role, etc..) applied to a container. When designing your containers and pods, make sure that you configure the security context for your pods, containers, and volumes. A security context is a property defined in the deployment yaml. It controls the security parameters that will be assigned to the pod/container/volume. There are two levels of security context: pod level security context, and container-level security context. -
- -**Result:** Not Scored - -**Audit:** -Review the pod definitions in your cluster and verify that you have security contexts defined as appropriate. - -**Remediation:** -Follow the Kubernetes documentation and apply security contexts to your pods. For a suggested list of security contexts, you may refer to the CIS Security Benchmark. - - -#### 5.7.4 -The default namespace should not be used (Scored) -
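+A short sketch of the kind of security context this control expects:
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: security-context-demo
+spec:
+  securityContext:
+    runAsNonRoot: true
+    runAsUser: 1000
+  containers:
+    - name: app
+      image: busybox
+      command: ["sleep", "3600"]
+      securityContext:
+        allowPrivilegeEscalation: false
+        capabilities:
+          drop: ["ALL"]
+EOF
+```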
-Rationale -Resources in a Kubernetes cluster should be segregated by namespace, to allow for security controls to be applied at that level and to make it easier to manage resources. -
- -**Result:** Pass - -**Audit:** -Run the below command on the master node. - -```bash -kubectl get all -n default +```console +--count=0 ``` - -The only entries there should be system-managed resources such as the kubernetes service. - -**Remediation:** -By default, K3s does not utilize the default namespace. diff --git a/content/rancher/v2.0-v2.4/en/admin-settings/rbac/global-permissions/_index.md b/content/rancher/v2.0-v2.4/en/admin-settings/rbac/global-permissions/_index.md index 0f5e5f00c9d..125c2cbe699 100644 --- a/content/rancher/v2.0-v2.4/en/admin-settings/rbac/global-permissions/_index.md +++ b/content/rancher/v2.0-v2.4/en/admin-settings/rbac/global-permissions/_index.md @@ -27,25 +27,6 @@ This section covers the following topics: - [Configuring global permissions for groups](#configuring-global-permissions-for-groups) - [Refreshing group memberships](#refreshing-group-memberships) -### List of `restricted-admin` Permissions - -The `restricted-admin` permissions are as follows: - -- Has full admin access to all downstream clusters managed by Rancher. -- Has very limited access to the local Kubernetes cluster. Can access Rancher custom resource definitions, but has no access to any Kubernetes native types. -- Can add other users and assign them to clusters outside of the local cluster. -- Can create other restricted admins. -- Cannot grant any permissions in the local cluster they don't currently have. (This is how Kubernetes normally operates) - - -### Changing Global Administrators to Restricted Admins - -If Rancher already has a global administrator, they should change all global administrators over to the new `restricted-admin` role. - -This can be done through **Security > Users** and moving any Administrator role over to Restricted Administrator. - -Signed-in users can change themselves over to the `restricted-admin` if they wish, but they should only do that as the last step, otherwise they won't have the permissions to do so. - # Global Permission Assignment Global permissions for local users are assigned differently than users who log in to Rancher using external authentication. diff --git a/content/rancher/v2.0-v2.4/en/installation/resources/advanced/firewall/_index.md b/content/rancher/v2.0-v2.4/en/installation/resources/advanced/firewall/_index.md index f3ee9defadd..67c6f880325 100644 --- a/content/rancher/v2.0-v2.4/en/installation/resources/advanced/firewall/_index.md +++ b/content/rancher/v2.0-v2.4/en/installation/resources/advanced/firewall/_index.md @@ -3,7 +3,7 @@ title: Opening Ports with firewalld weight: 1 --- -> We recommend disabling firewalld. For Kubernetes 1.19, firewalld must be turned off. +> We recommend disabling firewalld. For Kubernetes 1.19.x and higher, firewalld must be turned off. Some distributions of Linux [derived from RHEL,](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Rebuilds) including Oracle Linux, may have default firewall rules that block communication with Helm. diff --git a/content/rancher/v2.5/en/admin-settings/authentication/keycloak/_index.md b/content/rancher/v2.5/en/admin-settings/authentication/keycloak/_index.md index 95a37c00e2f..e4e75f36477 100644 --- a/content/rancher/v2.5/en/admin-settings/authentication/keycloak/_index.md +++ b/content/rancher/v2.5/en/admin-settings/authentication/keycloak/_index.md @@ -25,36 +25,87 @@ If your organization uses Keycloak Identity Provider (IdP) for user authenticati >1: Optionally, you can enable either one or both of these settings. 
>2: Rancher SAML metadata won't be generated until a SAML provider is configured and saved. - + {{< img "/img/rancher/keycloak/keycloak-saml-client-configuration.png" "">}} - + - In the new SAML client, create Mappers to expose the users fields - Add all "Builtin Protocol Mappers" {{< img "/img/rancher/keycloak/keycloak-saml-client-builtin-mappers.png" "">}} - Create a new "Group list" mapper to map the member attribute to a user's groups - {{< img "/img/rancher/keycloak/keycloak-saml-client-group-mapper.png" "">}} -- Export a `metadata.xml` file from your Keycloak client: - From the `Installation` tab, choose the `SAML Metadata IDPSSODescriptor` format option and download your file. - - >**Note** - > Keycloak versions 6.0.0 and up no longer provide the IDP metadata under the `Installation` tab. - > You can still get the XML from the following url: - > - > `https://{KEYCLOAK-URL}/auth/realms/{REALM-NAME}/protocol/saml/descriptor` - > - > The XML obtained from this URL contains `EntitiesDescriptor` as the root element. Rancher expects the root element to be `EntityDescriptor` rather than `EntitiesDescriptor`. So before passing this XML to Rancher, follow these steps to adjust it: - > - > * Copy all the attributes from `EntitiesDescriptor` to the `EntityDescriptor` that are not present. - > * Remove the `` tag from the beginning. - > * Remove the `` from the end of the xml. - > - > You are left with something similar as the example below: - > - > ``` - > - > .... - > - > ``` + {{< img "/img/rancher/keycloak/keycloak-saml-client-group-mapper.png" "">}} + +## Getting the IDP Metadata + +{{% tabs %}} +{{% tab "Keycloak 5 and earlier" %}} +To get the IDP metadata, export a `metadata.xml` file from your Keycloak client. +From the **Installation** tab, choose the **SAML Metadata IDPSSODescriptor** format option and download your file. +{{% /tab %}} +{{% tab "Keycloak 6-13" %}} + +1. From the **Configure** section, click the **Realm Settings** tab. +1. Click the **General** tab. +1. From the **Endpoints** field, click **SAML 2.0 Identity Provider Metadata**. + +Verify the IDP metadata contains the following attributes: + +``` +xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" +xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" +xmlns:ds="http://www.w3.org/2000/09/xmldsig#" +``` + +Some browsers, such as Firefox, may render/process the document such that the contents appear to have been modified, and some attributes appear to be missing. In this situation, use the raw response data that can be found using your browser. + +The following is an example process for Firefox, but will vary slightly for other browsers: + +1. Press **F12** to access the developer console. +1. Click the **Network** tab. +1. From the table, click the row containing `descriptor`. +1. From the details pane, click the **Response** tab. +1. Copy the raw response data. + +The XML obtained contains `EntitiesDescriptor` as the root element. Rancher expects the root element to be `EntityDescriptor` rather than `EntitiesDescriptor`. So before passing this XML to Rancher, follow these steps to adjust it: + +1. Copy all the attributes from `EntitiesDescriptor` to the `EntityDescriptor` that are not present. +1. Remove the `` tag from the beginning. +1. Remove the `` from the end of the xml. + +You are left with something similar as the example below: + +``` + +.... + +``` + +{{% /tab %}} +{{% tab "Keycloak 14+" %}} + +1. From the **Configure** section, click the **Realm Settings** tab. +1. Click the **General** tab. +1. 
From the **Endpoints** field, click **SAML 2.0 Identity Provider Metadata**. + +Verify the IDP metadata contains the following attributes: + +``` +xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" +xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" +xmlns:ds="http://www.w3.org/2000/09/xmldsig#" +``` + +Some browsers, such as Firefox, may render/process the document such that the contents appear to have been modified, and some attributes appear to be missing. In this situation, use the raw response data that can be found using your browser. + +The following is an example process for Firefox, but will vary slightly for other browsers: + +1. Press **F12** to access the developer console. +1. Click the **Network** tab. +1. From the table, click the row containing `descriptor`. +1. From the details pane, click the **Response** tab. +1. Copy the raw response data. + +{{% /tab %}} +{{% /tabs %}} ## Configuring Keycloak in Rancher diff --git a/content/rancher/v2.5/en/cluster-admin/certificate-rotation/_index.md b/content/rancher/v2.5/en/cluster-admin/certificate-rotation/_index.md index 37aa93ed56f..168a7e4094b 100644 --- a/content/rancher/v2.5/en/cluster-admin/certificate-rotation/_index.md +++ b/content/rancher/v2.5/en/cluster-admin/certificate-rotation/_index.md @@ -19,3 +19,22 @@ Certificates can be rotated for the following services: - kube-scheduler - kube-controller-manager + +### Certificate Rotation + +Rancher launched Kubernetes clusters have the ability to rotate the auto-generated certificates through the UI. + +1. In the **Global** view, navigate to the cluster that you want to rotate certificates. + +2. Select **⋮ > Rotate Certificates**. + +3. Select which certificates that you want to rotate. + + * Rotate all Service certificates (keep the same CA) + * Rotate an individual service and choose one of the services from the drop-down menu + +4. Click **Save**. + +**Results:** The selected certificates will be rotated and the related services will be restarted to start using the new certificate. + +> **Note:** Even though the RKE CLI can use custom certificates for the Kubernetes cluster components, Rancher currently doesn't allow the ability to upload these in Rancher launched Kubernetes clusters. diff --git a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md index 33c3a39e9d8..545d87e0f49 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md @@ -13,7 +13,10 @@ This page covers how to install the Cloud Provider Interface (CPI) and Cloud Sto # Prerequisites -The vSphere version must be 7.0u1 or higher. +The vSphere versions supported: + +* 6.7u3 +* 7.0u1 or higher. The Kubernetes version must be 1.19 or higher. diff --git a/content/rancher/v2.5/en/faq/removing-rancher/_index.md b/content/rancher/v2.5/en/faq/removing-rancher/_index.md index 49c1acde9bc..e05744e7400 100644 --- a/content/rancher/v2.5/en/faq/removing-rancher/_index.md +++ b/content/rancher/v2.5/en/faq/removing-rancher/_index.md @@ -27,7 +27,7 @@ The capability to access a downstream cluster without Rancher depends on the typ - **Registered clusters:** The cluster will be unaffected and you can access the cluster using the same methods that you did before the cluster was registered into Rancher. 
- **Hosted Kubernetes clusters:** If you created the cluster in a cloud-hosted Kubernetes provider such as EKS, GKE, or AKS, you can continue to manage the cluster using your provider's cloud credentials. -- **RKE clusters:** To access an [RKE cluster,]({{}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/) the cluster must have the [authorized cluster endpoint]({{}}/rancher/v2.5/en/overview/architecture/#4-authorized-cluster-endpoint) enabled, and you must have already downloaded the cluster's kubeconfig file from the Rancher UI. (The authorized cluster endpoint is enabled by default for RKE clusters.) With this endpoint, you can access your cluster with kubectl directly instead of communicating through the Rancher server's [authentication proxy.]({{}}/rancher/v2.5/en/overview/architecture/#1-the-authentication-proxy) For instructions on how to configure kubectl to use the authorized cluster endpoint, refer to the section about directly accessing clusters with [kubectl and the kubeconfig file.]({{}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) These clusters will use a snapshot of the authentication as it was configured when Rancher was removed. +- **RKE clusters:** Please note that you will no longer be able to manage the individual Kubernetes components or perform any upgrades on them after the deletion of the Rancher server. However, you can still access the cluster to manage your workloads. To access an [RKE cluster,]({{}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/) the cluster must have the [authorized cluster endpoint]({{}}/rancher/v2.5/en/overview/architecture/#4-authorized-cluster-endpoint) enabled, and you must have already downloaded the cluster's kubeconfig file from the Rancher UI. (The authorized cluster endpoint is enabled by default for RKE clusters.) With this endpoint, you can access your cluster with kubectl directly instead of communicating through the Rancher server's [authentication proxy.]({{}}/rancher/v2.5/en/overview/architecture/#1-the-authentication-proxy) For instructions on how to configure kubectl to use the authorized cluster endpoint, refer to the section about directly accessing clusters with [kubectl and the kubeconfig file.]({{}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) These clusters will use a snapshot of the authentication as it was configured when Rancher was removed. ### What if I don't want Rancher anymore? diff --git a/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md index c9b1d6d8c5d..b0e02303a79 100644 --- a/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md +++ b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md @@ -6,7 +6,7 @@ aliases: - /rancher/v2.5/en/installation/k8s-install/ - /rancher/v2.5/en/installation/k8s-install/helm-rancher - /rancher/v2.5/en/installation/k8s-install/kubernetes-rke - - /rancher/v2.5/en/installation/ha-server-install + - /rancher/v2.5/en/installation/ha-server-install - /rancher/v2.5/en/installation/install-rancher-on-k8s/install - /rancher/v2.x/en/installation/install-rancher-on-k8s/ --- @@ -24,7 +24,7 @@ In this section, you'll learn how to deploy Rancher on a Kubernetes cluster usin ### Kubernetes Cluster -Set up the Rancher server's local Kubernetes cluster. +Set up the Rancher server's local Kubernetes cluster. 
Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS. @@ -113,7 +113,7 @@ There are three recommended options for the source of the certificate used for T ### 4. Install cert-manager -> You should skip this step if you are bringing your own certificate files (option `ingress.tls.source=secret`), or if you use [TLS termination on an external load balancer]({{}}/rancher/v2.5/en/installation/install-rancher-on-k8s/chart-options/#external-tls-termination). +> You should skip this step if you are bringing your own certificate files (option `ingress.tls.source=secret`), or if you use [TLS termination on an external load balancer]({{}}/rancher/v2.5/en/installation/install-rancher-on-k8s/chart-options/#external-tls-termination). This step is only required to use certificates issued by Rancher's generated CA (`ingress.tls.source=rancher`) or to request Let's Encrypt issued certificates (`ingress.tls.source=letsEncrypt`). @@ -157,6 +157,8 @@ cert-manager-webhook-787858fcdb-nlzsq 1/1 Running 0 2m The exact command to install Rancher differs depending on the certificate configuration. +However, irrespective of the certificate configuration, the name of the Rancher installation in the `cattle-system` namespace should always be `rancher`. + {{% tabs %}} {{% tab "Rancher-generated Certificates" %}} @@ -168,7 +170,7 @@ Because `rancher` is the default option for `ingress.tls.source`, we are not spe - Set `hostname` to the DNS record that resolves to your load balancer. - Set `replicas` to the number of replicas to use for the Rancher Deployment. This defaults to 3; if you have less than 3 nodes in your cluster you should reduce it accordingly. - To install a specific Rancher version, use the `--version` flag, example: `--version 2.3.6`. -- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. +- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. ``` helm install rancher rancher-/rancher \ @@ -200,7 +202,7 @@ In the following command, - Set `letsEncrypt.email` to the email address used for communication about your certificate (for example, expiry notices). - Set `letsEncrypt.ingress.class` to whatever your ingress controller is, e.g., `traefik`, `nginx`, `haproxy`, etc. - To install a specific Rancher version, use the `--version` flag, example: `--version 2.3.6`. -- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. +- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. ``` helm install rancher rancher-/rancher \ @@ -234,7 +236,7 @@ Although an entry in the `Subject Alternative Names` is technically required, ha - Set `replicas` to the number of replicas to use for the Rancher Deployment. This defaults to 3; if you have less than 3 nodes in your cluster you should reduce it accordingly. - Set `ingress.tls.source` to `secret`. - To install a specific Rancher version, use the `--version` flag, example: `--version 2.3.6`. -- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. +- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. 
```
helm install rancher rancher-/rancher \
diff --git a/content/rancher/v2.5/en/installation/resources/advanced/firewall/_index.md b/content/rancher/v2.5/en/installation/resources/advanced/firewall/_index.md
index b779951aa7b..2ff27022f24 100644
--- a/content/rancher/v2.5/en/installation/resources/advanced/firewall/_index.md
+++ b/content/rancher/v2.5/en/installation/resources/advanced/firewall/_index.md
@@ -5,7 +5,7 @@ aliases:
 - /rancher/v2.x/en/installation/resources/advanced/firewall/
 ---
 
-> We recommend disabling firewalld. For Kubernetes 1.19, firewalld must be turned off.
+> We recommend disabling firewalld. For Kubernetes 1.19.x and higher, firewalld must be turned off.
 
 Some distributions of Linux [derived from RHEL,](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Rebuilds) including Oracle Linux, may have default firewall rules that block communication with Helm.
diff --git a/content/rancher/v2.5/en/installation/resources/update-ca-cert/_index.md b/content/rancher/v2.5/en/installation/resources/update-ca-cert/_index.md
index c256f9f5905..751feae2d8c 100644
--- a/content/rancher/v2.5/en/installation/resources/update-ca-cert/_index.md
+++ b/content/rancher/v2.5/en/installation/resources/update-ca-cert/_index.md
@@ -13,6 +13,7 @@ A summary of the steps is as follows:
 2. Create or update the `tls-ca` Kubernetes secret resource with the root CA certificate (only required when using a private CA).
 3. Update the Rancher installation using the Helm CLI.
 4. Reconfigure the Rancher agents to trust the new CA certificate.
+5. Select Force Update of Fleet clusters to connect fleet-agent to Rancher.
 
 The details of these instructions are below.
@@ -145,3 +146,12 @@ First, generate the agent definitions as described here: https://gist.github.com
 
 Then, connect to a controlplane node of the downstream cluster via SSH, create a Kubeconfig and apply the definitions: https://gist.github.com/superseb/b14ed3b5535f621ad3d2aa6a4cd6443b
+
+
+# 5. Select Force Update of Fleet clusters to connect fleet-agent to Rancher
+
+Select 'Force Update' for the clusters within the [Continuous Delivery]({{}}/rancher/v2.5/en/deploy-across-clusters/fleet/#accessing-fleet-in-the-rancher-ui) view under Cluster Explorer in the Rancher UI to allow the fleet-agent in downstream clusters to successfully connect to Rancher.
+
+### Why is this step required?
+
+Fleet agents in Rancher-managed clusters store a kubeconfig in the `fleet-agent` secret of the `fleet-system` namespace; this kubeconfig is used to connect to the Rancher-proxied kube-apiserver. The kubeconfig contains a `certificate-authority-data` block containing the Rancher CA. When the Rancher CA changes, this block must be updated for the fleet-agent to connect to Rancher successfully.
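+
+To verify the update took effect, you can decode the kubeconfig stored in that secret and inspect which CA it references. This is a minimal sketch; it assumes the kubeconfig is stored under a `kubeconfig` data key in the secret, which may vary between Fleet versions:
+
+```
+# Decode the stored kubeconfig and print its CA block (the data key name is an assumption)
+kubectl -n fleet-system get secret fleet-agent \
+  -o jsonpath='{.data.kubeconfig}' | base64 -d | grep certificate-authority-data
+```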
diff --git a/content/rancher/v2.5/en/security/rancher-2.5/1.5-benchmark-2.5/_index.md b/content/rancher/v2.5/en/security/rancher-2.5/1.5-benchmark-2.5/_index.md index 02b34e42fdb..4fed8f40550 100644 --- a/content/rancher/v2.5/en/security/rancher-2.5/1.5-benchmark-2.5/_index.md +++ b/content/rancher/v2.5/en/security/rancher-2.5/1.5-benchmark-2.5/_index.md @@ -1281,7 +1281,7 @@ on the master node and ensure the correct value for the `--bind-address` paramet **Expected result**: ``` -'--bind-address' is present OR '--bind-address' is not present +'--bind-address' argument is set to 127.0.0.1 ``` ### 1.4 Scheduler @@ -1327,7 +1327,7 @@ on the master node and ensure the correct value for the `--bind-address` paramet **Expected result**: ``` -'--bind-address' is present OR '--bind-address' is not present +'--bind-address' argument is set to 127.0.0.1 ``` ## 2 Etcd Node Configuration diff --git a/content/rancher/v2.5/en/security/rancher-2.5/1.5-hardening-2.5/_index.md b/content/rancher/v2.5/en/security/rancher-2.5/1.5-hardening-2.5/_index.md index 26907ab28b6..491aec9c080 100644 --- a/content/rancher/v2.5/en/security/rancher-2.5/1.5-hardening-2.5/_index.md +++ b/content/rancher/v2.5/en/security/rancher-2.5/1.5-hardening-2.5/_index.md @@ -667,6 +667,7 @@ rancher_kubernetes_engine_config: service_node_port_range: 30000-32767 kube_controller: extra_args: + bind-address: 127.0.0.1 address: 127.0.0.1 feature-gates: RotateKubeletServerCertificate=true profiling: 'false' @@ -685,6 +686,7 @@ rancher_kubernetes_engine_config: generate_serving_certificate: true scheduler: extra_args: + bind-address: 127.0.0.1 address: 127.0.0.1 profiling: 'false' ssh_agent_auth: false diff --git a/content/rancher/v2.5/en/security/rancher-2.5/1.6-benchmark-2.5/_index.md b/content/rancher/v2.5/en/security/rancher-2.5/1.6-benchmark-2.5/_index.md index 57b65e5b004..e0dc1e45c5d 100644 --- a/content/rancher/v2.5/en/security/rancher-2.5/1.6-benchmark-2.5/_index.md +++ b/content/rancher/v2.5/en/security/rancher-2.5/1.6-benchmark-2.5/_index.md @@ -1803,13 +1803,13 @@ on the master node and ensure the correct value for the --bind-address parameter **Expected Result**: ```console -'--bind-address' is not present OR '--bind-address' is not present +'--bind-address' argument is set to 127.0.0.1 ``` **Returned Value**: ```console -root 4788 4773 4 16:16 ? 00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=0.0.0.0 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true +root 4788 4773 4 16:16 ? 
00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --bind-address=127.0.0.1 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=127.0.0.1 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true ``` ## 1.4 Scheduler @@ -1859,13 +1859,13 @@ on the master node and ensure the correct value for the --bind-address parameter **Expected Result**: ```console -'--bind-address' is not present OR '--bind-address' is not present +'--bind-address' argument is set to 127.0.0.1 ``` **Returned Value**: ```console -root 4947 4930 1 16:16 ? 00:00:02 kube-scheduler --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --address=0.0.0.0 +root 4947 4930 1 16:16 ? 00:00:02 kube-scheduler --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --address=127.0.0.1 --bind-address=127.0.0.1 ``` ## 2 Etcd Node Configuration Files diff --git a/content/rancher/v2.5/en/security/rancher-2.5/1.6-hardening-2.5/_index.md b/content/rancher/v2.5/en/security/rancher-2.5/1.6-hardening-2.5/_index.md index 0b6497f2e1f..d628bfd8c5a 100644 --- a/content/rancher/v2.5/en/security/rancher-2.5/1.6-hardening-2.5/_index.md +++ b/content/rancher/v2.5/en/security/rancher-2.5/1.6-hardening-2.5/_index.md @@ -511,6 +511,8 @@ rancher_kubernetes_engine_config: kube_controller: extra_args: feature-gates: RotateKubeletServerCertificate=true + bind-address: 127.0.0.1 + address: 127.0.0.1 kubelet: extra_args: feature-gates: RotateKubeletServerCertificate=true @@ -519,6 +521,10 @@ rancher_kubernetes_engine_config: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 fail_swap_on: false generate_serving_certificate: true + scheduler: + extra_args: + bind-address: 127.0.0.1 + address: 127.0.0.1 ssh_agent_auth: false upgrade_strategy: max_unavailable_controlplane: '1' diff --git a/content/rancher/v2.6/en/admin-settings/authentication/keycloak-saml/_index.md b/content/rancher/v2.6/en/admin-settings/authentication/keycloak-saml/_index.md index bd94b700d23..ca2952111fb 100644 --- a/content/rancher/v2.6/en/admin-settings/authentication/keycloak-saml/_index.md +++ b/content/rancher/v2.6/en/admin-settings/authentication/keycloak-saml/_index.md @@ -23,36 +23,87 @@ If your organization uses Keycloak Identity Provider (IdP) for user authenticati >1: Optionally, you can enable either one or both of these settings. >2: Rancher SAML metadata won't be generated until a SAML provider is configured and saved. 
-
+
 {{< img "/img/rancher/keycloak/keycloak-saml-client-configuration.png" "">}}
-
+
 - In the new SAML client, create Mappers to expose the users fields
   - Add all "Builtin Protocol Mappers"
     {{< img "/img/rancher/keycloak/keycloak-saml-client-builtin-mappers.png" "">}}
   - Create a new "Group list" mapper to map the member attribute to a user's groups
-    {{< img "/img/rancher/keycloak/keycloak-saml-client-group-mapper.png" "">}}
-- Export a `metadata.xml` file from your Keycloak client:
-  From the `Installation` tab, choose the `SAML Metadata IDPSSODescriptor` format option and download your file.
-
-  >**Note**
-  > Keycloak versions 6.0.0 and up no longer provide the IDP metadata under the `Installation` tab.
-  > You can still get the XML from the following url:
-  >
-  > `https://{KEYCLOAK-URL}/auth/realms/{REALM-NAME}/protocol/saml/descriptor`
-  >
-  > The XML obtained from this URL contains `EntitiesDescriptor` as the root element. Rancher expects the root element to be `EntityDescriptor` rather than `EntitiesDescriptor`. So before passing this XML to Rancher, follow these steps to adjust it:
-  >
-  > * Copy all the attributes from `EntitiesDescriptor` to the `EntityDescriptor` that are not present.
-  > * Remove the `<EntitiesDescriptor>` tag from the beginning.
-  > * Remove the `</EntitiesDescriptor>` from the end of the xml.
-  >
-  > You are left with something similar as the example below:
-  >
-  > ```
-  > <EntityDescriptor ...>
-  > ....
-  > </EntityDescriptor>
-  > ```
+    {{< img "/img/rancher/keycloak/keycloak-saml-client-group-mapper.png" "">}}
+
+## Getting the IDP Metadata
+
+{{% tabs %}}
+{{% tab "Keycloak 5 and earlier" %}}
+To get the IDP metadata, export a `metadata.xml` file from your Keycloak client.
+From the **Installation** tab, choose the **SAML Metadata IDPSSODescriptor** format option and download your file.
+{{% /tab %}}
+{{% tab "Keycloak 6-13" %}}
+
+1. From the **Configure** section, click the **Realm Settings** tab.
+1. Click the **General** tab.
+1. From the **Endpoints** field, click **SAML 2.0 Identity Provider Metadata**.
+
+Verify the IDP metadata contains the following attributes:
+
+```
+xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
+xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
+xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
+```
+
+Some browsers, such as Firefox, may render or process the document such that the contents appear to have been modified, and some attributes appear to be missing. In this situation, use the raw response data that can be found using your browser.
+
+The following is an example process for Firefox, but it will vary slightly for other browsers:
+
+1. Press **F12** to access the developer console.
+1. Click the **Network** tab.
+1. From the table, click the row containing `descriptor`.
+1. From the details pane, click the **Response** tab.
+1. Copy the raw response data.
+
+The XML obtained contains `EntitiesDescriptor` as the root element. Rancher expects the root element to be `EntityDescriptor` rather than `EntitiesDescriptor`. Before passing this XML to Rancher, follow these steps to adjust it:
+
+1. Copy to `EntityDescriptor` any attributes from `EntitiesDescriptor` that are not already present.
+1. Remove the `<EntitiesDescriptor>` tag from the beginning.
+1. Remove the `</EntitiesDescriptor>` tag from the end of the XML.
+
+You are left with something similar to the example below:
+
+```
+<EntityDescriptor ...>
+....
+</EntityDescriptor>
+```
+
+{{% /tab %}}
+{{% tab "Keycloak 14+" %}}
+
+1. From the **Configure** section, click the **Realm Settings** tab.
+1. Click the **General** tab.
+1. From the **Endpoints** field, click **SAML 2.0 Identity Provider Metadata**.
+
+Verify the IDP metadata contains the following attributes:
+
+```
+xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
+xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
+xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
+```
+
+Some browsers, such as Firefox, may render or process the document such that the contents appear to have been modified, and some attributes appear to be missing. In this situation, use the raw response data that can be found using your browser.
+
+The following is an example process for Firefox, but it will vary slightly for other browsers:
+
+1. Press **F12** to access the developer console.
+1. Click the **Network** tab.
+1. From the table, click the row containing `descriptor`.
+1. From the details pane, click the **Response** tab.
+1. Copy the raw response data.
+
+{{% /tab %}}
+{{% /tabs %}}

 ## Configuring Keycloak in Rancher
diff --git a/content/rancher/v2.6/en/admin-settings/branding/_index.md b/content/rancher/v2.6/en/admin-settings/branding/_index.md
index 1fbe6d9f78b..4e5cff17e20 100644
--- a/content/rancher/v2.6/en/admin-settings/branding/_index.md
+++ b/content/rancher/v2.6/en/admin-settings/branding/_index.md
@@ -40,7 +40,21 @@ You can override the primary color used throughout the UI with a custom color of
 ### Fixed Banners
 
+{{% tabs %}}
+{{% tab "Rancher before v2.6.4" %}}
 Display a custom fixed banner in the header, footer, or both.
+{{% /tab %}}
+{{% tab "Rancher v2.6.4+" %}}
+Display a custom fixed banner in the header, footer, or both.
+
+As of Rancher v2.6.4, configuration of fixed banners has moved from the **Branding** tab to the **Banners** tab.
+
+To configure banner settings,
+
+1. Click **☰ > Global settings**.
+2. Click **Banners**.
+{{% /tab %}}
+{{% /tabs %}}

 # Custom Navigation Links
diff --git a/content/rancher/v2.6/en/backups/migrating-rancher/_index.md b/content/rancher/v2.6/en/backups/migrating-rancher/_index.md
index 651c300d66b..5b5e4f3acdd 100644
--- a/content/rancher/v2.6/en/backups/migrating-rancher/_index.md
+++ b/content/rancher/v2.6/en/backups/migrating-rancher/_index.md
@@ -85,16 +85,30 @@ spec:
 
 >**Important:** The field `encryptionConfigSecretName` must be set only if your backup was created with encryption enabled. Provide the name of the Secret containing the encryption config file. If you only have the encryption config file, but don't have a secret created with it in this cluster, use the following steps to create the secret:
 
 1. The encryption configuration file must be named `encryption-provider-config.yaml`, and the `--from-file` flag must be used to create this secret. So save your `EncryptionConfiguration` in a file called `encryption-provider-config.yaml` and run this command:
-   ```
-   kubectl create secret generic encryptionconfig \
-     --from-file=./encryption-provider-config.yaml \
-     -n cattle-resources-system
-   ```
-
-1. Then apply the resource:
-   ```
-   kubectl apply -f migrationResource.yaml
-   ```
+```
+kubectl create secret generic encryptionconfig \
+  --from-file=./encryption-provider-config.yaml \
+  -n cattle-resources-system
+```
+
+1. Apply the manifest and watch the Restore resource's status:
+
+   Apply the resource:
+```
+kubectl apply -f migrationResource.yaml
+```
+
+   Watch the Restore status:
+```
+kubectl get restore
+```
+
+   Watch the restoration logs:
+```
+kubectl logs -n cattle-resources-system --tail 100 -f rancher-backup-xxx-xxx
+```
+
+Once the Restore resource has the status `Completed`, you can continue the Rancher installation.

 ### 3.
Install cert-manager
diff --git a/content/rancher/v2.6/en/cluster-admin/certificate-rotation/_index.md b/content/rancher/v2.6/en/cluster-admin/certificate-rotation/_index.md
index 8a70bfd34b6..664af10b1fc 100644
--- a/content/rancher/v2.6/en/cluster-admin/certificate-rotation/_index.md
+++ b/content/rancher/v2.6/en/cluster-admin/certificate-rotation/_index.md
@@ -19,3 +19,22 @@ Certificates can be rotated for the following services:
 
 > **Note:** For users who didn't rotate their webhook certificates, and they have expired after one year, please see this [page]({{}}/rancher/v2.6/en/troubleshooting/expired-webhook-certificates/) for help.
+
+### Certificate Rotation
+
+Rancher-launched Kubernetes clusters have the ability to rotate the auto-generated certificates through the UI.
+
+1. In the **Global** view, navigate to the cluster for which you want to rotate certificates.
+
+2. Select **⋮ > Rotate Certificates**.
+
+3. Select the certificates that you want to rotate.
+
+    * Rotate all Service certificates (keep the same CA)
+    * Rotate an individual service and choose one of the services from the drop-down menu
+
+4. Click **Save**.
+
+**Results:** The selected certificates will be rotated and the related services will be restarted to start using the new certificate.
+
+> **Note:** Even though the RKE CLI can use custom certificates for the Kubernetes cluster components, Rancher currently doesn't provide the ability to upload these for Rancher-launched Kubernetes clusters.
diff --git a/content/rancher/v2.6/en/cluster-admin/editing-clusters/rke-config-reference/_index.md b/content/rancher/v2.6/en/cluster-admin/editing-clusters/rke-config-reference/_index.md
index 0f0c03d67f2..27e73a4c408 100644
--- a/content/rancher/v2.6/en/cluster-admin/editing-clusters/rke-config-reference/_index.md
+++ b/content/rancher/v2.6/en/cluster-admin/editing-clusters/rke-config-reference/_index.md
@@ -341,7 +341,10 @@ Example:
 local_cluster_auth_endpoint:
   enabled: true
   fqdn: "FQDN"
-  ca_certs: "BASE64_CACERT"
+  ca_certs: |-
+    -----BEGIN CERTIFICATE-----
+    ...
+ -----END CERTIFICATE----- ``` ### Custom Network Plug-in diff --git a/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/aks/_index.md b/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/aks/_index.md index a0ad8a5c24f..f54e1e42bdd 100644 --- a/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/aks/_index.md +++ b/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/aks/_index.md @@ -14,7 +14,6 @@ You can use Rancher to create a cluster hosted in Microsoft Azure Kubernetes Ser - [Role-based Access Control](#role-based-access-control) - [AKS Cluster Configuration Reference](#aks-cluster-configuration-reference) - [Private Clusters](#private-clusters) -- [Minimum AKS Permissions](#minimum-aks-permissions) - [Syncing](#syncing) - [Programmatically Creating AKS Clusters](#programmatically-creating-aks-clusters) diff --git a/content/rancher/v2.6/en/cluster-provisioning/registered-clusters/_index.md b/content/rancher/v2.6/en/cluster-provisioning/registered-clusters/_index.md index 4d9ab0c2a1f..35146cafedd 100644 --- a/content/rancher/v2.6/en/cluster-provisioning/registered-clusters/_index.md +++ b/content/rancher/v2.6/en/cluster-provisioning/registered-clusters/_index.md @@ -168,7 +168,7 @@ Also in the K3s documentation, nodes with the worker role are called agent nodes # Debug Logging and Troubleshooting for Registered K3s Clusters -Nodes are upgraded by the system upgrade controller running in the downstream cluster. Based on the cluster configuration, Rancher deploys two [plans](https://github.com/rancher/system-upgrade-controller#example-upgrade-plan) to upgrade K3s nodes: one for controlplane nodes and one for workers. The system upgrade controller follows the plans and upgrades the nodes. +Nodes are upgraded by the system upgrade controller running in the downstream cluster. Based on the cluster configuration, Rancher deploys two [plans](https://github.com/rancher/system-upgrade-controller#example-upgrade-plan) to upgrade K3s nodes: one for controlplane nodes and one for workers. The system upgrade controller follows the plans and upgrades the nodes. To enable debug logging on the system upgrade controller deployment, edit the [configmap](https://github.com/rancher/system-upgrade-controller/blob/50a4c8975543d75f1d76a8290001d87dc298bdb4/manifests/system-upgrade-controller.yaml#L32) to set the debug environment variable to true. Then restart the `system-upgrade-controller` pod. @@ -196,7 +196,7 @@ Authorized Cluster Endpoint (ACE) support has been added for registered RKE2 and > **Note:** > -> - These steps only need to be performed on the control plane nodes of the downstream cluster. You must configure each control plane node individually. +> - These steps only need to be performed on the control plane nodes of the downstream cluster. You must configure each control plane node individually. > > - The following steps will work on both RKE2 and K3s clusters registered in v2.6.x as well as those registered (or imported) from a previous version of Rancher with an upgrade to v2.6.x. > @@ -223,19 +223,19 @@ Authorized Cluster Endpoint (ACE) support has been added for registered RKE2 and context: user: Default cluster: Default - + 1. Add the following to the config file (or create one if it doesn’t exist); note that the default location is `/etc/rancher/{rke2,k3s}/config.yaml`: kube-apiserver-arg: - authentication-token-webhook-config-file=/var/lib/rancher/{rke2,k3s}/kube-api-authn-webhook.yaml - + 1. 
Run the following commands:

        sudo systemctl stop {rke2,k3s}-server
        sudo systemctl start {rke2,k3s}-server

1. Finally, you **must** go back to the Rancher UI and edit the imported cluster there to complete the ACE enablement. Click on **⋮ > Edit Config**, then click the **Networking** tab under Cluster Configuration. Finally, click the **Enabled** button for **Authorized Endpoint**. Once the ACE is enabled, you then have the option of entering a fully qualified domain name (FQDN) and certificate information.
-
+
    >**Note:** The FQDN field is optional, and if one is entered, it should point to the downstream cluster. Certificate information is only needed if there is a load balancer in front of the downstream cluster that is using an untrusted certificate. If you have a valid certificate, then nothing needs to be added to the CA Certificates field.

 # Annotating Registered Clusters
@@ -286,4 +286,3 @@ To annotate a registered cluster,
 1. Click **Save**.
 
 **Result:** The annotation does not give the capabilities to the cluster, but it does indicate to Rancher that the cluster has those capabilities.
-
diff --git a/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/behavior-differences-between-rke1-and-rke2/_index.md b/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/behavior-differences-between-rke1-and-rke2/_index.md
new file mode 100644
index 00000000000..99a3f18f370
--- /dev/null
+++ b/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/behavior-differences-between-rke1-and-rke2/_index.md
@@ -0,0 +1,34 @@
+---
+title: Behavior Differences Between RKE1 and RKE2
+weight: 2450
+---
+
+RKE2, also known as RKE Government, is a Kubernetes distribution that focuses on security and compliance for U.S. Federal Government entities. It is considered the next iteration of the Rancher Kubernetes Engine, now known as RKE1.
+
+RKE1 and RKE2 have several slight behavioral differences to note, and this page will highlight some of these at a high level.
+
+### Control Plane Components
+
+RKE1 uses Docker for deploying and managing control plane components, and it also uses Docker as the container runtime for Kubernetes. By contrast, RKE2 launches control plane components as static pods that are managed by the kubelet. RKE2's container runtime is containerd, which supports features such as container registry mirroring, which RKE1 with Docker does not.
+
+### Cluster API
+
+RKE2/K3s provisioning is built on top of the upstream Cluster API (CAPI) framework, which often makes RKE2-provisioned clusters behave differently from RKE1-provisioned clusters.
+
+When you make changes to your cluster configuration in RKE2, this **may** result in nodes being reprovisioned. This is controlled by CAPI controllers and not by Rancher itself. Note that the same behavior does not apply to etcd nodes.
+
+The following are some specific example configuration changes that may cause the described behavior:
+
+- When editing the cluster and enabling `drain before delete`, the existing control plane and worker nodes are deleted and new nodes are created.
+
+- When nodes are being provisioned and a scale-down operation is performed, rather than scaling down the desired number of nodes, it is possible that the nodes currently being provisioned are deleted and new nodes are provisioned to reach the desired node count. Please note that this is a bug in Cluster API, and it will be fixed in an upcoming release. Once fixed, Rancher will update the documentation.
+
+Users who are used to RKE1 provisioning should take note of this new RKE2 behavior, which may be unexpected.
+
+### Terminology
+
+You will notice that some terms have changed or been retired in the move from RKE1 to RKE2. For example, in RKE1 provisioning, you use **node templates**; in RKE2 provisioning, you can configure your cluster node pools when creating or editing the cluster. Another example is that the term **node pool** in RKE1 is now known as **machine pool** in RKE2.
+
+
+
+
diff --git a/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md b/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md
index b9ba43c3c29..d9b95fc9b6f 100644
--- a/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md
+++ b/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md
@@ -10,7 +10,10 @@ This page covers how to install the Cloud Provider Interface (CPI) and Cloud Sto
 # Prerequisites
 
-The vSphere version must be 7.0u1 or higher.
+The following vSphere versions are supported:
+
+* 6.7u3
+* 7.0u1 or higher
 
 The Kubernetes version must be 1.19 or higher.
 
diff --git a/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/_index.md b/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/_index.md
index 6e3a1b0dc8c..4da61dccaf7 100644
--- a/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/_index.md
+++ b/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/_index.md
@@ -34,6 +34,11 @@ Choose the default security group or configure a security group.
 
 Please refer to [Amazon EC2 security group when using Node Driver]({{}}/rancher/v2.6/en/installation/requirements/ports/#rancher-aws-ec2-security-group) to see what rules are created in the `rancher-nodes` Security Group.
 
+---
+**_New in v2.6.4_**
+
+If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group is set to allow the [necessary ports for Rancher to provision the instance]({{}}/rancher/v2.6/en/installation/requirements/ports/#ports-for-rancher-server-nodes-on-rke). For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups).
+
 ### Instance Options
 
 Configure the instances that will be created. Make sure you configure the correct **SSH User** for the configured AMI. It is possible that a selected region does not support the default instance type. In this scenario you must select an instance type that does exist, otherwise an error will occur stating the requested configuration is not supported.
diff --git a/content/rancher/v2.6/en/helm-charts/_index.md b/content/rancher/v2.6/en/helm-charts/_index.md
index 11704efac48..b5313d0bd54 100644
--- a/content/rancher/v2.6/en/helm-charts/_index.md
+++ b/content/rancher/v2.6/en/helm-charts/_index.md
@@ -74,11 +74,14 @@ To add a private CA for Helm Chart repositories:
 
     [...]
     ```
 
-- **Git-based chart repositories**: It is not currently possible to add a private CA. For git-based chart repositories with a certificate signed by a private CA, you must disable TLS verification.
Click **Edit YAML** for the chart repo, and add the key/value pair as follows:
+- **Git-based chart repositories**: You must add a base64-encoded copy of the CA certificate in DER format to the `spec.caBundle` field of the chart repo. You can generate it with, for example, `openssl x509 -outform der -in ca.pem | base64 -w0`. Click **Edit YAML** for the chart repo and set the field as in the following example:
``` [...] spec: - insecureSkipTLSVerify: true + caBundle: + MIIFXzCCA0egAwIBAgIUWNy8WrvSkgNzV0zdWRP79j9cVcEwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMRQwEgYDVQQKDAtNeU9yZywgSW5jLjENMAsGA1UEAwwEcm9vdDAeFw0yMTEyMTQwODMyMTdaFw0yNDEwMDMwODMyMT + ... + nDxZ/tNXt/WPJr/PgEB3hQdInDWYMg7vGO0Oz00G5kWg0sJ0ZTSoA10ZwdjIdGEeKlj1NlPyAqpQ+uDnmx6DW+zqfYtLnc/g6GuLLVPamraqN+gyU8CHwAWPNjZonFN9Vpg0PIk1I2zuOc4EHifoTAXSpnjfzfyAxCaZsnTptimlPFJJqAMj+FfDArGmr4= [...] ``` diff --git a/content/rancher/v2.6/en/installation/install-rancher-on-k8s/_index.md b/content/rancher/v2.6/en/installation/install-rancher-on-k8s/_index.md index aff2ba86aae..5776af8f778 100644 --- a/content/rancher/v2.6/en/installation/install-rancher-on-k8s/_index.md +++ b/content/rancher/v2.6/en/installation/install-rancher-on-k8s/_index.md @@ -17,7 +17,7 @@ In this section, you'll learn how to deploy Rancher on a Kubernetes cluster usin ### Kubernetes Cluster -Set up the Rancher server's local Kubernetes cluster. +Set up the Rancher server's local Kubernetes cluster. Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS. @@ -104,7 +104,7 @@ There are three recommended options for the source of the certificate used for T ### 4. Install cert-manager -> You should skip this step if you are bringing your own certificate files (option `ingress.tls.source=secret`), or if you use [TLS termination on an external load balancer]({{}}/rancher/v2.6/en/installation/install-rancher-on-k8s/chart-options/#external-tls-termination). +> You should skip this step if you are bringing your own certificate files (option `ingress.tls.source=secret`), or if you use [TLS termination on an external load balancer]({{}}/rancher/v2.6/en/installation/install-rancher-on-k8s/chart-options/#external-tls-termination). This step is only required to use certificates issued by Rancher's generated CA (`ingress.tls.source=rancher`) or to request Let's Encrypt issued certificates (`ingress.tls.source=letsEncrypt`). @@ -148,6 +148,8 @@ cert-manager-webhook-787858fcdb-nlzsq 1/1 Running 0 2m The exact command to install Rancher differs depending on the certificate configuration. +However, irrespective of the certificate configuration, the name of the Rancher installation in the `cattle-system` namespace should always be `rancher`. + > **Tip for testing and development:** This final command to install Rancher requires a domain name that forwards traffic to Rancher. If you are using the Helm CLI to set up a proof-of-concept, you can use a fake domain name when passing the `hostname` option. An example of a fake domain name would be `.sslip.io`, which would expose Rancher on an IP where it is running. Production installs would require a real domain name. {{% tabs %}} @@ -160,7 +162,7 @@ Because `rancher` is the default option for `ingress.tls.source`, we are not spe - Set the `hostname` to the DNS name you pointed at your load balancer. - Set the `bootstrapPassword` to something unique for the `admin` user. -- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. +- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. 
- To install a specific Rancher version, use the `--version` flag, example: `--version 2.3.6` ``` @@ -192,7 +194,7 @@ In the following command, - `ingress.tls.source` is set to `letsEncrypt` - `letsEncrypt.email` is set to the email address used for communication about your certificate (for example, expiry notices) - Set `letsEncrypt.ingress.class` to whatever your ingress controller is, e.g., `traefik`, `nginx`, `haproxy`, etc. -- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. +- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. ``` helm install rancher rancher-/rancher \ @@ -225,7 +227,7 @@ Although an entry in the `Subject Alternative Names` is technically required, ha - Set the `hostname`. - Set the `bootstrapPassword` to something unique for the `admin` user. - Set `ingress.tls.source` to `secret`. -- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. +- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. ``` helm install rancher rancher-/rancher \ diff --git a/content/rancher/v2.6/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md b/content/rancher/v2.6/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md index ae9faf9304b..35ceaf1b26a 100644 --- a/content/rancher/v2.6/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md +++ b/content/rancher/v2.6/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md @@ -65,7 +65,7 @@ helm upgrade --install rancher rancher-latest/rancher \ --namespace cattle-system \ --set hostname=rancher.example.com \ --set proxy=http://${proxy_host} - --set no_proxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local + --set noProxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local ``` After waiting for the deployment to finish: diff --git a/content/rancher/v2.6/en/installation/other-installation-methods/behind-proxy/launch-kubernetes/_index.md b/content/rancher/v2.6/en/installation/other-installation-methods/behind-proxy/launch-kubernetes/_index.md index b47128310e9..41428448b27 100644 --- a/content/rancher/v2.6/en/installation/other-installation-methods/behind-proxy/launch-kubernetes/_index.md +++ b/content/rancher/v2.6/en/installation/other-installation-methods/behind-proxy/launch-kubernetes/_index.md @@ -9,7 +9,7 @@ Once the infrastructure is ready, you can continue with setting up an RKE cluste First, you have to install Docker and setup the HTTP proxy on all three Linux nodes. For this perform the following steps on all three nodes. -For convenience export the IP address and port of your proxy into an environment variable and set up the HTTP_PROXY variables for your current shell: +For convenience, export the IP address and port of your proxy into an environment variable and set up the HTTP_PROXY variables for your current shell: ``` export proxy_host="10.0.0.5:8888" @@ -58,6 +58,24 @@ sudo systemctl daemon-reload sudo systemctl restart docker ``` +#### Air-gapped proxy + +_New in v2.6.4_ + +You can now provision node driver clusters from an air-gapped cluster configured to use a proxy for outbound connections. 
+
+In addition to setting the default rules for a proxy server, you will need to add the additional rules shown below to provision node driver clusters from a proxied Rancher environment.
+
+Add these rules to your proxy server's configuration file; the exact path depends on your setup (the `acl` rules below use Squid syntax, so for a Squid proxy this would typically be `/etc/squid/squid.conf`):
+
+```
+acl SSL_ports port 22
+acl SSL_ports port 2376
+
+acl Safe_ports port 22 # ssh
+acl Safe_ports port 2376 # docker port
+```
+
 ### Creating the RKE Cluster
 
 You need several command line tools on the host where you have SSH access to the Linux nodes to create and interact with the cluster:
diff --git a/content/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/proxy/_index.md b/content/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/proxy/_index.md
index 1ac4a66c017..37d3d2cfc00 100644
--- a/content/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/proxy/_index.md
+++ b/content/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/proxy/_index.md
@@ -40,3 +40,21 @@ docker run -d --restart=unless-stopped \
 ```
 
 Privileged access is [required.]({{}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)
+
+### Air-gapped proxy configuration
+
+_New in v2.6.4_
+
+You can now provision node driver clusters from an air-gapped cluster configured to use a proxy for outbound connections.
+
+In addition to setting the default rules for a proxy server as shown above, you will need to add the additional rules shown below to provision node driver clusters from a proxied Rancher environment.
+
+Add these rules to your proxy server's configuration file; the exact path depends on your setup (the `acl` rules below use Squid syntax, so for a Squid proxy this would typically be `/etc/squid/squid.conf`):
+
+```
+acl SSL_ports port 22
+acl SSL_ports port 2376
+
+acl Safe_ports port 22 # ssh
+acl Safe_ports port 2376 # docker port
+```
\ No newline at end of file
diff --git a/content/rancher/v2.6/en/installation/resources/advanced/firewall/_index.md b/content/rancher/v2.6/en/installation/resources/advanced/firewall/_index.md
index 291cee6d594..69c1afae91d 100644
--- a/content/rancher/v2.6/en/installation/resources/advanced/firewall/_index.md
+++ b/content/rancher/v2.6/en/installation/resources/advanced/firewall/_index.md
@@ -3,7 +3,7 @@ title: Opening Ports with firewalld
 weight: 1
 ---
 
-> We recommend disabling firewalld. For Kubernetes 1.19, firewalld must be turned off.
+> We recommend disabling firewalld. For Kubernetes 1.19.x and higher, firewalld must be turned off.
 
 Some distributions of Linux [derived from RHEL,](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Rebuilds) including Oracle Linux, may have default firewall rules that block communication with Helm.
diff --git a/content/rancher/v2.6/en/installation/resources/update-ca-cert/_index.md b/content/rancher/v2.6/en/installation/resources/update-ca-cert/_index.md
index 5a7a477fa82..3c01e5f7eaf 100644
--- a/content/rancher/v2.6/en/installation/resources/update-ca-cert/_index.md
+++ b/content/rancher/v2.6/en/installation/resources/update-ca-cert/_index.md
@@ -11,6 +11,7 @@ A summary of the steps is as follows:
 2. Create or update the `tls-ca` Kubernetes secret resource with the root CA certificate (only required when using a private CA).
 3. Update the Rancher installation using the Helm CLI.
 4. Reconfigure the Rancher agents to trust the new CA certificate.
+5. Select Force Update of Fleet clusters to connect fleet-agent to Rancher.
 
 The details of these instructions are below.
@@ -143,3 +144,11 @@ First, generate the agent definitions as described here: https://gist.github.com
 
 Then, connect to a controlplane node of the downstream cluster via SSH, create a Kubeconfig and apply the definitions: https://gist.github.com/superseb/b14ed3b5535f621ad3d2aa6a4cd6443b
+
+# 5. Select Force Update of Fleet clusters to connect fleet-agent to Rancher
+
+Select 'Force Update' for the clusters within the [Continuous Delivery]({{}}/rancher/v2.6/en/deploy-across-clusters/fleet/#accessing-fleet-in-the-rancher-ui) view of the Rancher UI to allow the fleet-agent in downstream clusters to successfully connect to Rancher.
+
+### Why is this step required?
+
+Fleet agents in Rancher-managed clusters store a kubeconfig in the `fleet-agent` secret of the `fleet-system` namespace; this kubeconfig is used to connect to the Rancher-proxied kube-apiserver. The kubeconfig contains a `certificate-authority-data` block containing the Rancher CA. When the Rancher CA changes, this block must be updated for the fleet-agent to connect to Rancher successfully.
diff --git a/content/rancher/v2.6/en/project-admin/resource-quotas/quotas-for-projects/_index.md b/content/rancher/v2.6/en/project-admin/resource-quotas/quotas-for-projects/_index.md
index 63a18ba0f49..7e2aaf3e869 100644
--- a/content/rancher/v2.6/en/project-admin/resource-quotas/quotas-for-projects/_index.md
+++ b/content/rancher/v2.6/en/project-admin/resource-quotas/quotas-for-projects/_index.md
@@ -19,18 +19,38 @@ The resource quota includes two limits, which you set while creating or editing
 
 - **Project Limits:**
 
-    This set of values configures an overall resource limit for the project. If you try to add a new namespace to the project, Rancher uses the limits you've set to validate that the project has enough resources to accommodate the namespace. In other words, if you try to move a namespace into a project near its resource quota, Rancher blocks you from moving the namespace.
+    This set of values configures a total limit for each specified resource shared among all namespaces in the project.
 
 - **Namespace Default Limits:**
 
-    This value is the default resource limit available for each namespace. When the resource quota is created at the project level, this limit is automatically propagated to each namespace in the project. Each namespace is bound to this default limit unless you override it.
+    This set of values configures the default quota limit available for each namespace for each specified resource.
+    When a namespace is created in the project without overrides, this limit is automatically bound to the namespace and enforced.
+
 
 In the following diagram, a Rancher administrator wants to apply a resource quota that sets the same CPU and memory limit for every namespace in their project (`Namespace 1-4`). However, in Rancher, the administrator can set a resource quota for the project (`Project Resource Quota`) rather than individual namespaces. This quota includes resource limits for both the entire project (`Project Limit`) and individual namespaces (`Namespace Default Limit`). Rancher then propagates the `Namespace Default Limit` quotas to each namespace (`Namespace Resource Quota`) when created.
 
 Rancher: Resource Quotas Propagating to Each Namespace
 
 ![Rancher Resource Quota Implementation]({{}}/img/rancher/rancher-resource-quota.png)
 
-Let's highlight some more nuanced functionality. If a quota is deleted at the project level, it will also be removed from all namespaces contained within that project, despite any overrides that may exist.
Further, updating an existing namespace default limit for a quota at the project level will not result in that value being propagated to existing namespaces in the project; the updated value will only be applied to newly created namespaces in that project. To update a namespace default limit for existing namespaces you can delete and subsequently recreate the quota at the project level with the new default value. This will result in the new default value being applied to all existing namespaces in the project.
+Let's highlight some more nuanced functionality for namespaces created **_within_** the Rancher UI. If a quota is deleted at the project level, it will also be removed from all namespaces contained within that project, despite any overrides that may exist. Further, updating an existing namespace default limit for a quota at the project level will not result in that value being propagated to existing namespaces in the project; the updated value will only be applied to newly created namespaces in that project. To update a namespace default limit for existing namespaces, you can delete and subsequently recreate the quota at the project level with the new default value. This will result in the new default value being applied to all existing namespaces in the project.
+
+Before creating a namespace in a project, Rancher compares the project's available resources with the requested resources, regardless of whether they come from the default or the overridden limits.
+If the requested amount of a resource exceeds the remaining capacity in the project for that resource, Rancher will assign the namespace the remaining capacity instead.
+
+However, this is not the case with namespaces created **_outside_** of Rancher's UI. For namespaces created via `kubectl`, Rancher
+will assign a resource quota that has a **zero** amount for any resource whose request exceeds what remains in the project.
+
+To create a namespace in an existing project via `kubectl`, use the `field.cattle.io/projectId` annotation. To override the default
+requested quota limit, use the `field.cattle.io/resourceQuota` annotation.
+```
+apiVersion: v1
+kind: Namespace
+metadata:
+  annotations:
+    field.cattle.io/projectId: [your-cluster-ID]:[your-project-ID]
+    field.cattle.io/resourceQuota: '{"limit":{"limitsCpu":"100m", "limitsMemory":"100Mi", "configMaps": "50"}}'
+  name: my-ns
+```

 The following table explains the key differences between the two quota types.
diff --git a/content/rke/latest/en/config-options/services/_index.md b/content/rke/latest/en/config-options/services/_index.md
index 0266731a249..77a3a969195 100644
--- a/content/rke/latest/en/config-options/services/_index.md
+++ b/content/rke/latest/en/config-options/services/_index.md
@@ -6,7 +6,9 @@ weight: 230
 
 To deploy Kubernetes, RKE deploys several core components or services in Docker containers on the nodes. Based on the roles of the node, the containers deployed may be different.
 
-**All services support additional [custom arguments, Docker mount binds and extra environment variables]({{}}/rke/latest/en/config-options/services/services-extras/).**
+>**Note:** All services support additional custom arguments, Docker mount binds, and extra environment variables.
+>
+>To configure advanced options for Kubernetes services such as `kubelet`, `kube-controller`, and `kube-apiserver` that are not documented below, see the [`extra_args` documentation]({{}}/rke/latest/en/config-options/services/services-extras/) for more details. A brief illustrative example follows.
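+
+For example, a minimal `cluster.yml` snippet using these options might look like the following; the flag, mount, and variable values here are illustrative only:
+
+```
+services:
+  kube-api:
+    extra_args:
+      # Pass an additional flag to the kube-apiserver container (illustrative value)
+      audit-log-maxage: "5"
+    extra_binds:
+      # Bind-mount a host path into the service container (host_path:container_path)
+      - "/var/log/kube-audit:/var/log/kube-audit"
+    extra_env:
+      # Inject an extra environment variable (illustrative value)
+      - "SOME_ENV=value"
+```
+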
| Component | Services key name in cluster.yml |
 |-------------------------|----------------------------------|
diff --git a/scripts/converters/headers/header-2.6.md b/scripts/converters/headers/header-2.6.md
new file mode 100755
index 00000000000..5f930a99fb1
--- /dev/null
+++ b/scripts/converters/headers/header-2.6.md
@@ -0,0 +1,36 @@
+---
+title: CIS v1.6 Benchmark - Self-Assessment Guide - Rancher v2.6
+weight: 101
+---
+
+### CIS v1.6 Kubernetes Benchmark - Rancher v2.6 with Kubernetes v1.18 to v1.21
+
+[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.6/Rancher_v2-6_CIS_v1-6_Benchmark_Assessment.pdf).
+
+#### Overview
+
+This document is a companion to the [Rancher v2.6 security hardening guide]({{}}/rancher/v2.6/en/security/hardening-guides/). The hardening guide provides prescriptive guidance for hardening a production installation of Rancher, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the benchmark.
+
+This guide corresponds to specific versions of the hardening guide, Rancher, the CIS Benchmark, and Kubernetes:
+
+| Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version |
+| ----------------------- | --------------- | --------------------- | ------------------- |
+| Hardening Guide CIS v1.6 Benchmark | Rancher v2.6.3 | CIS v1.6 | Kubernetes v1.18, v1.19, v1.20 and v1.21 |
+
+Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark do not apply and will have a result of `Not Applicable`. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher-created clusters.
+
+This document is to be used by Rancher operators, security teams, auditors, and decision makers.
+
+For more detail about each audit, including rationales and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.6. You can download the benchmark, after creating a free account, from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
+
+#### Testing controls methodology
+
+Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files.
+
+Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the Docker command line on the hosts of all three RKE roles. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in the testing and evaluation of test results.
+
+> NOTE: Only `automated` tests (previously called `scored`) are covered in this guide.
+
+### Controls
+
+---
diff --git a/scripts/converters/headers/header-k3s.md b/scripts/converters/headers/header-k3s.md
new file mode 100755
index 00000000000..80461cc9c3f
--- /dev/null
+++ b/scripts/converters/headers/header-k3s.md
@@ -0,0 +1,35 @@
+---
+title: CIS Self Assessment Guide
+weight: 90
+---
+
+### CIS Kubernetes Benchmark v1.6 - K3s with Kubernetes v1.17 to v1.21
+
+#### Overview
+
+This document is a companion to the [K3s security hardening guide]({{}}/k3s/latest/en/security/hardening_guide/).
The hardening guide provides prescriptive guidance for hardening a production installation of K3s, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the CIS Kubernetes Benchmark. It is to be used by K3s operators, security teams, auditors, and decision-makers.
+
+This guide is specific to the **v1.17**, **v1.18**, **v1.19**, **v1.20** and **v1.21** release lines of K3s and the **v1.6** release of the CIS Kubernetes Benchmark.
+
+For more information about each control, including detailed descriptions and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.6. You can download the benchmark, after creating a free account, from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
+
+#### Testing controls methodology
+
+Each control in the CIS Kubernetes Benchmark was evaluated against a K3s cluster that was configured according to the accompanying hardening guide.
+
+Where control audits differ from the original CIS benchmark, the audit commands specific to K3s are provided for testing.
+
+These are the possible results for each control:
+
+- **Pass** - The K3s cluster under test passed the audit outlined in the benchmark.
+- **Not Applicable** - The control is not applicable to K3s because of how it is designed to operate. The remediation section will explain why this is so.
+- **Warn** - The control is manual in the CIS benchmark and it depends on the cluster's use case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure K3s does not prevent their implementation, but no further configuration or auditing of the cluster under test has been performed.
+
+This guide assumes that K3s is running as a systemd unit. Your installation may vary and will require you to adjust the "audit" commands to fit your scenario.
+
+> NOTE: Only `automated` tests (previously called `scored`) are covered in this guide.
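+
+For example, to confirm that K3s is indeed running under systemd before adjusting the audit commands, a check like the following can be used (this assumes the default `k3s.service` unit name created by the K3s install script):
+
+```
+# Verify the unit exists and is active
+systemctl status k3s.service
+
+# Review recent service logs
+journalctl -u k3s.service --since "1 hour ago"
+```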
+ +### Controls + +--- + diff --git a/scripts/converters/run_results_to_md.sh b/scripts/converters/run_results_to_md.sh index acca856e0b1..ae3ac700b9d 100755 --- a/scripts/converters/run_results_to_md.sh +++ b/scripts/converters/run_results_to_md.sh @@ -2,8 +2,9 @@ results=${1:?path to kube-bench json results is a required argument} test_helpers=${2:?path to kube-bench test_helpers scripts is a required argument} +header=${3:?path to header file is a required argument} [ -f ${results} ] || (echo "file:'${results}' does not exist"; exit 1) [ -d ${test_helpers} ] || (echo "dir: '${test_helpers}' not a valid directory"; exit 1) -docker run -v${results}:/source/results.json -v ${test_helpers}:/test_helpers -it --rm doc_converters:latest results_to_md +docker run -v ${results}:/source/results.json -v ${test_helpers}:/test_helpers -v ${header}:/headers/header.md -it --rm doc_converters:latest results_to_md diff --git a/scripts/converters/scripts/results_to_md.sh b/scripts/converters/scripts/results_to_md.sh index 453dcbde069..6e087588c3f 100755 --- a/scripts/converters/scripts/results_to_md.sh +++ b/scripts/converters/scripts/results_to_md.sh @@ -1,48 +1,11 @@ #!/bin/bash -#results_file="${1:-/source/results.json}" -results_file="${1:-/home/paraglade/brain/projects/cis_benchmark/clusters/cis/csr.json}" -#test_helpers="${2:-/test_helpers}" -test_helpers="${2:-/home/paraglade/brain/repos/rancher-security-scan/package/helper_scripts}" +results_file="${1:-/source/results.json}" +test_helpers="${2:-/test_helpers}" +header_file="${3:-/headers/header.md}" header() { -cat < NOTE: only scored tests are covered in this guide. - -### Controls -EOF +cat ${header_file} } get_ids() {