---
title: Self-Assessment and Hardening Guides for Rancher
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/rancher-security/hardening-guides"/>
</head>
Rancher provides specific security hardening guides for each supported Rancher version's Kubernetes distributions.

## Rancher Kubernetes Distributions

Rancher uses the following Kubernetes distributions:
- [**RKE**](https://rancher.com/docs/rke/latest/en/), Rancher Kubernetes Engine, is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.
- [**RKE2**](https://docs.rke2.io/) is a fully conformant Kubernetes distribution that focuses on security and compliance within the U.S. Federal Government sector.
- [**K3s**](https://docs.k3s.io/) is a fully conformant, lightweight Kubernetes distribution. It is easy to install, with half the memory requirement of upstream Kubernetes, all in a binary of less than 100 MB.

To harden a Kubernetes cluster that's running a distribution other than those listed, refer to your Kubernetes provider docs.
## Hardening Guides and Benchmark Versions

Each self-assessment guide is accompanied by a hardening guide. These guides were tested alongside the listed Rancher releases. Each self-assessment guide was tested on a specific Kubernetes version and CIS Benchmark version. If a CIS Benchmark has not been validated for your Kubernetes version, you can use the existing guides until a guide for your version is added.

### RKE Guides

| Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guides |
|--------------------|-----------------------|-----------------------|------------------|
| Kubernetes v1.23 | CIS v1.23 | [Link](rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md) | [Link](rke1-hardening-guide/rke1-hardening-guide.md) |
| Kubernetes v1.24 | CIS v1.24 | [Link](rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.24-k8s-v1.24.md) | [Link](rke1-hardening-guide/rke1-hardening-guide.md) |
| Kubernetes v1.25/v1.26/v1.27 | CIS v1.7 | [Link](rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md) | [Link](rke1-hardening-guide/rke1-hardening-guide.md) |
### RKE2 Guides

| Type | Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guides |
|------|--------------------|-----------------------|-----------------------|------------------|
| Rancher provisioned RKE2 | Kubernetes v1.23 | CIS v1.23 | [Link](rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md) | [Link](rke2-hardening-guide/rke2-hardening-guide.md) |
| Rancher provisioned RKE2 | Kubernetes v1.24 | CIS v1.24 | [Link](rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.24-k8s-v1.24.md) | [Link](rke2-hardening-guide/rke2-hardening-guide.md) |
| Rancher provisioned RKE2 | Kubernetes v1.25/v1.26/v1.27 | CIS v1.7 | [Link](rke2-hardening-guide/rke2-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md) | [Link](rke2-hardening-guide/rke2-hardening-guide.md) |
| Standalone RKE2 | Kubernetes v1.25/v1.26/v1.27 | CIS v1.7 | [Link](https://docs.rke2.io/security/cis_self_assessment123) | [Link](https://docs.rke2.io/security/hardening_guide) |
### K3s Guides

| Type | Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guides |
|------|--------------------|-----------------------|-----------------------|------------------|
| Rancher provisioned K3s cluster | Kubernetes v1.23 | CIS v1.23 | [Link](k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.23-k8s-v1.23.md) | [Link](k3s-hardening-guide/k3s-hardening-guide.md) |
| Rancher provisioned K3s cluster | Kubernetes v1.24 | CIS v1.24 | [Link](k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.24-k8s-v1.24.md) | [Link](k3s-hardening-guide/k3s-hardening-guide.md) |
| Rancher provisioned K3s cluster | Kubernetes v1.25/v1.26/v1.27 | CIS v1.7 | [Link](k3s-hardening-guide/k3s-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md) | [Link](k3s-hardening-guide/k3s-hardening-guide.md) |
| Standalone K3s | Kubernetes v1.22 up to v1.24 | CIS v1.23 | [Link](https://docs.k3s.io/security/self-assessment) | [Link](https://docs.k3s.io/security/hardening-guide) |
## Rancher with SELinux

[Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a kernel module that adds extra access controls and security tools to Linux. Historically used by government agencies, SELinux is now industry-standard. SELinux is enabled by default on RHEL and CentOS.

To use Rancher with SELinux, we recommend [installing](../selinux-rpm/about-rancher-selinux.md) the `rancher-selinux` RPM.
---
title: K3s Hardening Guides
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/rancher-security/hardening-guides/k3s-hardening-guide"/>
</head>
This document provides prescriptive guidance for hardening a production K3s cluster before provisioning it with Rancher. It outlines the configurations and controls required to address Center for Internet Security (CIS) Kubernetes Benchmark controls.

:::note
This hardening guide describes how to secure the nodes in your cluster. We recommend that you follow this guide before you install Kubernetes.
:::

This hardening guide is intended for K3s clusters and is associated with the following versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:
| Rancher Version | CIS Benchmark Version | Kubernetes Version |
|-----------------|-----------------------|------------------------------|
| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 |
| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 |
| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 up to v1.26 |

:::note
In Benchmark v1.7, the `--protect-kernel-defaults` (4.2.6) parameter is no longer required and was removed by CIS.
:::
For more details on how to evaluate a hardened K3s cluster against the official CIS benchmark, refer to the K3s self-assessment guides for specific Kubernetes and CIS benchmark versions.

K3s passes a number of the Kubernetes CIS controls without modification, as it applies several security mitigations by default. There are some notable exceptions that require manual intervention to fully comply with the CIS Benchmark:

1. K3s does not modify the host operating system. Any host-level modifications need to be done manually.
2. Certain CIS policy controls for `NetworkPolicies` and `PodSecurityStandards` (`PodSecurityPolicies` on v1.24 and older) restrict cluster functionality. You must opt into having K3s configure these policies: add the appropriate options to your command-line flags or configuration file to enable the admission plugins, then manually apply the appropriate policies. See the sections below for more details.

The first section (1.1) of the CIS Benchmark primarily focuses on pod manifest permissions and ownership. Since everything in the distribution is packaged in a single binary, this section does not apply to the core components of K3s.
## Host-level Requirements

### Ensure `protect-kernel-defaults` is set

<Tabs groupId="k3s-version">
<TabItem value="v1.25 and Newer" default>

The `protect-kernel-defaults` flag is no longer required as of CIS Benchmark v1.7.

</TabItem>
<TabItem value="v1.24 and Older">

`protect-kernel-defaults` is a kubelet flag that causes the kubelet to exit if the required kernel parameters are unset, or are set to values that differ from the kubelet's defaults.

The `protect-kernel-defaults` flag can be set in the cluster configuration in Rancher:

```yaml
spec:
  rkeConfig:
    machineSelectorConfig:
      - config:
          protect-kernel-defaults: true
```

</TabItem>
</Tabs>
### Set kernel parameters

The following `sysctl` configuration is recommended for all node types in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`:

```ini
vm.panic_on_oom=0
vm.overcommit_memory=1
kernel.panic=10
kernel.panic_on_oops=1
```

Run `sudo sysctl -p /etc/sysctl.d/90-kubelet.conf` to apply the settings.

This configuration must be applied before setting the kubelet flag, otherwise K3s will fail to start.
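The applied values can be spot-checked against the running kernel. The sketch below reads each parameter from `/proc/sys` (keys and target values are the ones from the configuration above); treat it as a convenience check, not part of the benchmark itself.

```shell
# Compare each recommended sysctl against the running kernel by reading
# its /proc/sys entry (dots in the key map to path separators).
check() {
  key="$1"; want="$2"
  path="/proc/sys/$(printf '%s' "$key" | tr '.' '/')"
  if [ -r "$path" ]; then
    have="$(cat "$path")"
    if [ "$have" = "$want" ]; then
      echo "OK       $key=$have"
    else
      echo "MISMATCH $key is $have, expected $want"
    fi
  else
    echo "SKIP     $key ($path not readable)"
  fi
}

check vm.panic_on_oom 0
check vm.overcommit_memory 1
check kernel.panic 10
check kernel.panic_on_oops 1
```

A `MISMATCH` line means the corresponding entry in `/etc/sysctl.d/90-kubelet.conf` has not been applied yet, or was overridden elsewhere.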
## Kubernetes Runtime Requirements

The CIS Benchmark runtime requirements center around pod security (via PSP or PSA), network policies, and API Server audit logs.

By default, K3s does not include any pod security or network policies. However, K3s ships with a controller that enforces any network policies you create. By default, K3s enables both the `PodSecurity` and `NodeRestriction` admission controllers, among others.

### Pod Security
<Tabs groupId="k3s-version">
<TabItem value="v1.25 and Newer" default>

K3s v1.25 and newer support [Pod Security admission (PSA)](https://kubernetes.io/docs/concepts/security/pod-security-admission/) for controlling pod security.

You can specify the PSA configuration by setting the `defaultPodSecurityAdmissionConfigurationTemplateName` field in the cluster configuration in Rancher:

```yaml
spec:
  defaultPodSecurityAdmissionConfigurationTemplateName: rancher-restricted
```

The `rancher-restricted` template is provided by Rancher to enforce the highly restrictive upstream Kubernetes [`Restricted`](https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted) profile along with best practices for pod hardening.

</TabItem>
<TabItem value="v1.24 and Older">

K3s v1.24 and older support [Pod Security Policies (PSPs)](https://github.com/kubernetes/website/blob/release-1.24/content/en/docs/concepts/security/pod-security-policy.md) for controlling pod security.

You can enable PSPs by passing the following flags in the cluster configuration in Rancher:

```yaml
spec:
  rkeConfig:
    machineGlobalConfig:
      kube-apiserver-arg:
        - enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount
```

This maintains the `NodeRestriction` plugin and enables `PodSecurityPolicy`.

Once you enable PSPs, you can apply a policy to satisfy the necessary controls described in section 5.2 of the CIS Benchmark.

:::note
These are manual checks in the CIS Benchmark. The CIS scan flags the results as `warning`, because manual inspection by the cluster operator is necessary.
:::

Here is an example of a compliant PSP:
```yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-psp
spec:
  privileged: false                # CIS - 5.2.1
  allowPrivilegeEscalation: false  # CIS - 5.2.5
  requiredDropCapabilities:        # CIS - 5.2.7/8/9
    - ALL
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'csi'
    - 'persistentVolumeClaim'
    - 'ephemeral'
  hostNetwork: false               # CIS - 5.2.4
  hostIPC: false                   # CIS - 5.2.3
  hostPID: false                   # CIS - 5.2.2
  runAsUser:
    rule: 'MustRunAsNonRoot'       # CIS - 5.2.6
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false
```

For the example PSP to be effective, we need to create a `ClusterRole` and a `ClusterRoleBinding`. We also need to include a "system unrestricted policy" for system-level pods that require additional privileges, and an additional policy that allows the sysctls necessary for full functionality of ServiceLB.
```yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-psp
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'csi'
    - 'persistentVolumeClaim'
    - 'ephemeral'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: system-unrestricted-psp
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  allowPrivilegeEscalation: true
  allowedCapabilities:
    - '*'
  fsGroup:
    rule: RunAsAny
  hostIPC: true
  hostNetwork: true
  hostPID: true
  hostPorts:
    - max: 65535
      min: 0
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - '*'
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: svclb-psp
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  allowPrivilegeEscalation: false
  allowedCapabilities:
    - NET_ADMIN
  allowedUnsafeSysctls:
    - net.ipv4.ip_forward
    - net.ipv6.conf.all.forwarding
  fsGroup:
    rule: RunAsAny
  hostPorts:
    - max: 65535
      min: 0
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:restricted-psp
rules:
  - apiGroups:
      - policy
    resources:
      - podsecuritypolicies
    verbs:
      - use
    resourceNames:
      - restricted-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:system-unrestricted-psp
rules:
  - apiGroups:
      - policy
    resources:
      - podsecuritypolicies
    resourceNames:
      - system-unrestricted-psp
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:svclb-psp
rules:
  - apiGroups:
      - policy
    resources:
      - podsecuritypolicies
    resourceNames:
      - svclb-psp
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:svc-local-path-provisioner-psp
rules:
  - apiGroups:
      - policy
    resources:
      - podsecuritypolicies
    resourceNames:
      - system-unrestricted-psp
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:svc-coredns-psp
rules:
  - apiGroups:
      - policy
    resources:
      - podsecuritypolicies
    resourceNames:
      - system-unrestricted-psp
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:svc-cis-operator-psp
rules:
  - apiGroups:
      - policy
    resources:
      - podsecuritypolicies
    resourceNames:
      - system-unrestricted-psp
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default:restricted-psp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:restricted-psp
subjects:
  - kind: Group
    name: system:authenticated
    apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system-unrestricted-node-psp-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:system-unrestricted-psp
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system-unrestricted-svc-acct-psp-rolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:system-unrestricted-psp
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: svclb-psp-rolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:svclb-psp
subjects:
  - kind: ServiceAccount
    name: svclb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: svc-local-path-provisioner-psp-rolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:svc-local-path-provisioner-psp
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: svc-coredns-psp-rolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:svc-coredns-psp
subjects:
  - kind: ServiceAccount
    name: coredns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: svc-cis-operator-psp-rolebinding
  namespace: cis-operator-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:svc-cis-operator-psp
subjects:
  - kind: ServiceAccount
    name: cis-operator-serviceaccount
```
The policies presented above can be placed in a file named `policy.yaml` in the `/var/lib/rancher/k3s/server/manifests` directory. Both the policy file and its directory hierarchy must be created before starting K3s. Restrictive access permissions are recommended to avoid leaking potentially sensitive information.

```shell
sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/manifests
```

:::note
Critical Kubernetes additions such as CNI, DNS, and Ingress run as pods in the `kube-system` namespace. Therefore, this namespace has a less restrictive policy, so that these components can run properly.
:::
</TabItem>
</Tabs>

### Network Policies

CIS requires that all namespaces apply a network policy that reasonably limits traffic into namespaces and pods.

:::note
This is a manual check in the CIS Benchmark. The CIS scan flags the result as a `warning`, because manual inspection by the cluster operator is necessary.
:::

The network policies can be placed in the `policy.yaml` file in the `/var/lib/rancher/k3s/server/manifests` directory. If the directory was not already created as part of the PSP setup described above, it must be created first.

```shell
sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/manifests
```

Here is an example of a compliant network policy:
```yaml
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: intra-namespace
  namespace: kube-system
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: kube-system
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: intra-namespace
  namespace: default
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: default
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: intra-namespace
  namespace: kube-public
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: kube-public
```

The active restrictions block DNS unless it is purposely allowed. Below is a network policy that allows DNS-related traffic:
```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-network-dns-policy
  namespace: <NAMESPACE>
spec:
  ingress:
    - ports:
        - port: 53
          protocol: TCP
        - port: 53
          protocol: UDP
  podSelector:
    matchLabels:
      k8s-app: kube-dns
  policyTypes:
    - Ingress
```

The metrics-server and Traefik ingress controller are also blocked by default unless network policies are created to allow access:
```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-metrics-server
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      k8s-app: metrics-server
  ingress:
    - {}
  policyTypes:
    - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-svclbtraefik-ingress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      svccontroller.k3s.cattle.io/svcname: traefik
  ingress:
    - {}
  policyTypes:
    - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-traefik-v121-ingress
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: traefik
  ingress:
    - {}
  policyTypes:
    - Ingress
```

:::note
You must manage network policies as normal for any additional namespaces you create.
:::
### API Server audit configuration

CIS requirements 1.2.22 to 1.2.25 relate to configuring audit logs for the API Server. By default, K3s does not create the log directory or the audit policy, as auditing requirements are specific to each user's policies and environment.

If you need a log directory, it must be created before you start K3s. We recommend restrictive access permissions to avoid leaking sensitive information.

```bash
sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/logs
```

The following is a starter audit policy that logs request metadata. This policy should be written to a file named `audit.yaml` in the `/var/lib/rancher/k3s/server` directory. Detailed information about audit policy configuration for the API server can be found in the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/).
```yaml
---
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
```

Further configuration is also needed to pass the CIS checks. These settings are not configured by default in K3s, because they vary based on your environment and needs:

- Ensure that the `--audit-log-path` argument is set.
- Ensure that the `--audit-log-maxage` argument is set to 30 or as appropriate.
- Ensure that the `--audit-log-maxbackup` argument is set to 10 or as appropriate.
- Ensure that the `--audit-log-maxsize` argument is set to 100 or as appropriate.

Combined, to enable and configure audit logs, add the following lines to the K3s cluster configuration file in Rancher:
```yaml
spec:
  rkeConfig:
    machineGlobalConfig:
      kube-apiserver-arg:
        - audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml # CIS 3.2.1
        - audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log # CIS 1.2.18
        - audit-log-maxage=30 # CIS 1.2.19
        - audit-log-maxbackup=10 # CIS 1.2.20
        - audit-log-maxsize=100 # CIS 1.2.21
```
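With these flags in place, each line of the audit log is a JSON `Event` object. The snippet below parses one such line; the sample is synthetic, written for illustration only, but the same parse applies to real lines from the live log (e.g. `sudo tail -n1 /var/lib/rancher/k3s/server/logs/audit.log`):

```shell
# Parse one audit Event the way you would parse a line from the live
# audit.log. The sample line here is synthetic, for illustration only.
line='{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","verb":"get","requestURI":"/api/v1/namespaces/default/pods"}'
printf '%s\n' "$line" | python3 -c '
import json, sys
event = json.load(sys.stdin)
print(event["level"], event["verb"], event["requestURI"])
'
```

With a `Metadata`-level policy, only request metadata such as the verb and URI is recorded, not request or response bodies.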
### Controller Manager Requirements

CIS requirement 1.3.1 checks the garbage collection settings of the Controller Manager. Garbage collection is important to ensure sufficient resource availability and to avoid degraded performance and availability. Based on your system resources and tests, choose an appropriate threshold value to activate garbage collection.

This can be remediated by setting the following configuration in the K3s cluster file in Rancher. The value below is only an example; the appropriate threshold value is specific to each user's environment.
```yaml
spec:
  rkeConfig:
    machineGlobalConfig:
      kube-controller-manager-arg:
        - terminated-pod-gc-threshold=10 # CIS 1.3.1
```
### Configure `default` Service Account

Kubernetes provides a `default` service account that is used by cluster workloads when no specific service account is assigned to a pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account.

For CIS requirement 5.1.5, the `default` service account should be configured so that it does not provide a service account token and does not have any explicit rights assignments.

This can be remediated by setting the `automountServiceAccountToken` field to `false` for the `default` service account in each namespace.

K3s does not do this automatically for the `default` service accounts in the built-in namespaces (`kube-system`, `kube-public`, `kube-node-lease`, and `default`).

Save the following configuration to a file called `account_update.yaml`:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
automountServiceAccountToken: false
```

Create a bash script file called `account_update.sh`. Be sure to run `chmod +x account_update.sh` so the script has execute permissions.
```shell
#!/bin/bash -e

for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
done
```

Run the script every time a new service account is added to your cluster.
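To audit the result across namespaces, you can feed `kubectl get serviceaccounts -A -o json` through a small filter that flags any `default` service account still automounting its token. The JSON below is a synthetic two-namespace sample standing in for real `kubectl` output, so the check logic itself is runnable anywhere:

```shell
# Flag default service accounts whose automountServiceAccountToken is not
# explicitly false. In a real cluster, replace the sample JSON with:
#   kubectl get serviceaccounts -A -o json
kubectl_json='{"items":[
  {"metadata":{"name":"default","namespace":"default"},"automountServiceAccountToken":false},
  {"metadata":{"name":"default","namespace":"kube-system"}},
  {"metadata":{"name":"coredns","namespace":"kube-system"}}
]}'
printf '%s' "$kubectl_json" | python3 -c '
import json, sys
for sa in json.load(sys.stdin)["items"]:
    if sa["metadata"]["name"] != "default":
        continue  # only default service accounts are in scope for CIS 5.1.5
    if sa.get("automountServiceAccountToken") is not False:
        print("NOT PATCHED:", sa["metadata"]["namespace"])
'
```

For the sample input, this prints `NOT PATCHED: kube-system`; against a fully remediated cluster it prints nothing.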
## Reference Hardened K3s Template Configuration

The following reference template configuration is used in Rancher to create a hardened K3s custom cluster based on each CIS control in this guide. This reference does not include other required **cluster configuration** directives, which vary based on your environment.
<Tabs groupId="k3s-version">
<TabItem value="v1.25 and Newer" default>

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: # Define cluster name
spec:
  defaultPodSecurityAdmissionConfigurationTemplateName: rancher-restricted
  enableNetworkPolicy: true
  kubernetesVersion: # Define K3s version
  rkeConfig:
    machineGlobalConfig:
      kube-apiserver-arg:
        - enable-admission-plugins=NodeRestriction,ServiceAccount # CIS 1.2.15, 1.2.13
        - audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml # CIS 3.2.1
        - audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log # CIS 1.2.18
        - audit-log-maxage=30 # CIS 1.2.19
        - audit-log-maxbackup=10 # CIS 1.2.20
        - audit-log-maxsize=100 # CIS 1.2.21
        - request-timeout=300s # CIS 1.2.22
        - service-account-lookup=true # CIS 1.2.24
      kube-controller-manager-arg:
        - terminated-pod-gc-threshold=10 # CIS 1.3.1
      secrets-encryption: true
    machineSelectorConfig:
      - config:
          kubelet-arg:
            - make-iptables-util-chains=true # CIS 4.2.7
```

</TabItem>
<TabItem value="v1.24 and Older">
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: # Define cluster name
spec:
  enableNetworkPolicy: true
  kubernetesVersion: # Define K3s version
  rkeConfig:
    machineGlobalConfig:
      kube-apiserver-arg:
        - enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount # CIS 1.2.15, 5.2, 1.2.13
        - audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml # CIS 3.2.1
        - audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log # CIS 1.2.18
        - audit-log-maxage=30 # CIS 1.2.19
        - audit-log-maxbackup=10 # CIS 1.2.20
        - audit-log-maxsize=100 # CIS 1.2.21
        - request-timeout=300s # CIS 1.2.22
        - service-account-lookup=true # CIS 1.2.24
      kube-controller-manager-arg:
        - terminated-pod-gc-threshold=10 # CIS 1.3.1
      secrets-encryption: true
    machineSelectorConfig:
      - config:
          kubelet-arg:
            - make-iptables-util-chains=true # CIS 4.2.7
          protect-kernel-defaults: true # CIS 4.2.6
```

</TabItem>
</Tabs>
## Conclusion

If you have followed this guide, your K3s custom cluster provisioned by Rancher will be configured to pass the CIS Kubernetes Benchmark. You can review our K3s self-assessment guides to understand how we verified each of the benchmarks and how you can do the same on your cluster.
---
title: RKE Hardening Guides
---

<EOLRKE1Warning />

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/rancher-security/hardening-guides/rke1-hardening-guide"/>
</head>

This document provides prescriptive guidance for hardening an RKE cluster intended for production use, before provisioning it with Rancher. It outlines the configurations and controls required to satisfy the Center for Internet Security (CIS) Kubernetes Benchmark controls.

:::note
This hardening guide describes how to secure the nodes in your cluster. We recommend that you follow this guide before you install Kubernetes.
:::

This hardening guide is intended for RKE clusters and is associated with the following versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:

| Rancher Version | CIS Benchmark Version | Kubernetes Version |
|-----------------|-----------------------|------------------------------|
| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 |
| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 |
| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 up to v1.26 |

:::note
- In Benchmark v1.24 and later, check id `4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)` might fail, as `/etc/kubernetes/ssl/kube-ca.pem` is set to 644 by default.
- In Benchmark v1.7, the `--protect-kernel-defaults` (`4.2.6`) parameter is no longer required, and was removed by CIS.
:::

For more details on how to evaluate a hardened RKE cluster against the official CIS benchmark, refer to the RKE self-assessment guides for specific Kubernetes and CIS benchmark versions.

## Host-level requirements

### Configure Kernel Runtime Parameters

The following `sysctl` configuration is recommended for all node types in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`:

```ini
vm.overcommit_memory=1
vm.panic_on_oom=0
kernel.panic=10
kernel.panic_on_oops=1
```

Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to apply the settings.
### Configure `etcd` user and group

A user account and group for the **etcd** service must be set up before installing RKE.

#### Create `etcd` user and group

To create the **etcd** user and group, run the following console commands.
The commands below use `52034` for the **uid** and **gid** as an example.
Any valid unused **uid** or **gid** can be used in lieu of `52034`.

```bash
groupadd --gid 52034 etcd
useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd --shell /usr/sbin/nologin
```

When deploying RKE through its cluster configuration file (`cluster.yml`), update the `uid` and `gid` of the `etcd` user:

```yaml
services:
  etcd:
    gid: 52034
    uid: 52034
```

## Kubernetes runtime requirements

### Configure `default` Service Account

#### Set `automountServiceAccountToken` to `false` for `default` service accounts

Kubernetes provides a `default` service account which is used by cluster workloads where no specific service account is assigned to the pod.
Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account.
The `default` service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.

For each namespace, including `default` and `kube-system` on a standard RKE install, the `default` service account must include this value:

```yaml
automountServiceAccountToken: false
```

Save the following configuration to a file called `account_update.yaml`:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
automountServiceAccountToken: false
```

Create a bash script file called `account_update.sh`.
Be sure to `chmod +x account_update.sh` so the script has execute permissions.

```bash
#!/bin/bash -e

for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
done
```

Execute this script to apply the `account_update.yaml` configuration to the `default` service account in all namespaces.
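Where a workload does need API access, the pattern described above is to give it a dedicated service account with narrowly scoped rights. The sketch below illustrates this; the `app-reader` name and `my-app` namespace are hypothetical, not part of a standard RKE install:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader   # hypothetical dedicated service account
  namespace: my-app  # hypothetical namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-reader
  namespace: my-app
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]  # only the rights this workload needs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-reader
  namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-reader
subjects:
  - kind: ServiceAccount
    name: app-reader
    namespace: my-app
```

A pod that needs the token then sets `serviceAccountName: app-reader` in its spec, leaving the `default` service account locked down.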
### Configure Network Policy

#### Ensure that all Namespaces have Network Policies defined

Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. A network policy is a specification of how selections of pods are allowed to communicate with each other and with other network endpoints.

Network policies are namespace scoped. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace, all traffic will be allowed into and out of the pods in that namespace. To enforce network policies, a container network interface (CNI) plugin must be enabled. This guide uses [Canal](https://github.com/projectcalico/canal) to provide the policy enforcement. Additional information about CNI providers can be found [here](https://www.suse.com/c/rancher_blog/comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/).

Once a CNI provider is enabled on a cluster, a default network policy can be applied. For reference purposes, a **permissive** example is provided below. If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as "isolated"), you can create a policy that explicitly allows all traffic in that namespace. Save the following configuration as `default-allow-all.yaml`. Additional [documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/) about network policies can be found on the Kubernetes site.

:::caution
This network policy is just an example and is not recommended for production use.
:::

```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-allow-all
spec:
  podSelector: {}
  ingress:
  - {}
  egress:
  - {}
  policyTypes:
  - Ingress
  - Egress
```

Create a bash script file called `apply_networkPolicy_to_all_ns.sh`. Be sure to `chmod +x apply_networkPolicy_to_all_ns.sh` so the script has execute permissions.

```bash
#!/bin/bash -e

for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
  kubectl apply -f default-allow-all.yaml -n ${namespace}
done
```

Execute this script to apply the `default-allow-all.yaml` configuration with the **permissive** `NetworkPolicy` to all namespaces.
## Known Limitations

- Rancher **exec shell** and **view logs** for pods are **not** functional in a hardened setup when only a public IP is provided when registering custom nodes. This functionality requires a private IP to be provided when registering the custom nodes.
- When setting `default_pod_security_policy_template_id:` to `restricted` or `restricted-noroot`, based on the pod security policies (PSP) [provided](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/create-pod-security-policies.md) by Rancher, Rancher creates `RoleBindings` and `ClusterRoleBindings` on the `default` service accounts. CIS check 5.1.5 requires that the `default` service accounts have no roles or cluster roles bound to them apart from the defaults. In addition, the `default` service accounts should be configured so that they do not provide a service account token and do not have any explicit rights assignments.

## Reference Hardened RKE `cluster.yml` Configuration

The reference `cluster.yml`, used by the RKE CLI, provides the configuration needed to achieve a hardened installation of RKE. The RKE [documentation](https://rancher.com/docs/rke/latest/en/installation/) provides additional details about the configuration items. This reference `cluster.yml` does not include the required `nodes` directive, which will vary depending on your environment. Documentation for node configuration in RKE can be found [here](https://rancher.com/docs/rke/latest/en/config-options/nodes/).

The example `cluster.yml` configuration file contains an Admission Configuration policy in the `services.kube-api.admission_configuration` field. This [sample](../../psa-restricted-exemptions.md) policy contains the namespace exemptions necessary for an imported RKE cluster to run properly in Rancher, similar to Rancher's pre-defined [`rancher-restricted`](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) policy.

If you prefer to use RKE's default `restricted` policy, leave the `services.kube-api.admission_configuration` field empty and set `services.pod_security_configuration` to `restricted`. See [the RKE docs](https://rke.docs.rancher.com/config-options/services/pod-security-admission) for more information.
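In that case, the relevant portion of `cluster.yml` reduces to a sketch like the following, based on the two options just described; all other hardening settings stay unchanged:

```yaml
services:
  kube-api:
    # Use RKE's pre-defined restricted policy instead of supplying a
    # custom AdmissionConfiguration; leave `admission_configuration` empty.
    pod_security_configuration: restricted
```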
<Tabs groupId="rke1-version">
<TabItem value="v1.25 and Newer" default>

:::note
If you intend to import an RKE cluster into Rancher, please consult the [documentation](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) for how to configure the PSA to exempt Rancher system namespaces.
:::

```yaml
# If you intend to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes: []
kubernetes_version: # Define RKE version
services:
  etcd:
    uid: 52034
    gid: 52034
  kube-api:
    secrets_encryption_config:
      enabled: true
    audit_log:
      enabled: true
    event_rate_limit:
      enabled: true
    # Leave `pod_security_configuration` out if you are setting a
    # custom policy in `admission_configuration`. Otherwise set
    # it to `restricted` to use RKE's pre-defined restricted policy,
    # and remove everything inside `admission_configuration` field.
    #
    # pod_security_configuration: restricted
    #
    admission_configuration:
      apiVersion: apiserver.config.k8s.io/v1
      kind: AdmissionConfiguration
      plugins:
      - name: PodSecurity
        configuration:
          apiVersion: pod-security.admission.config.k8s.io/v1
          kind: PodSecurityConfiguration
          defaults:
            enforce: "restricted"
            enforce-version: "latest"
            audit: "restricted"
            audit-version: "latest"
            warn: "restricted"
            warn-version: "latest"
          exemptions:
            usernames: []
            runtimeClasses: []
            namespaces: [calico-apiserver,
              calico-system,
              cattle-alerting,
              cattle-csp-adapter-system,
              cattle-elemental-system,
              cattle-epinio-system,
              cattle-externalip-system,
              cattle-fleet-local-system,
              cattle-fleet-system,
              cattle-gatekeeper-system,
              cattle-global-data,
              cattle-global-nt,
              cattle-impersonation-system,
              cattle-istio,
              cattle-istio-system,
              cattle-logging,
              cattle-logging-system,
              cattle-monitoring-system,
              cattle-neuvector-system,
              cattle-prometheus,
              cattle-provisioning-capi-system,
              cattle-resources-system,
              cattle-sriov-system,
              cattle-system,
              cattle-ui-plugin-system,
              cattle-windows-gmsa-system,
              cert-manager,
              cis-operator-system,
              fleet-default,
              ingress-nginx,
              istio-system,
              kube-node-lease,
              kube-public,
              kube-system,
              longhorn-system,
              rancher-alerting-drivers,
              security-scan,
              tigera-operator]
  kube-controller:
    extra_args:
      feature-gates: RotateKubeletServerCertificate=true
  kubelet:
    extra_args:
      feature-gates: RotateKubeletServerCertificate=true
    generate_serving_certificate: true
addons: |
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: default-allow-all
  spec:
    podSelector: {}
    ingress:
    - {}
    egress:
    - {}
    policyTypes:
    - Ingress
    - Egress
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: default
  automountServiceAccountToken: false
```

</TabItem>
<TabItem value="v1.24 and Older">

```yaml
# If you intend to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes: []
kubernetes_version: # Define RKE version
services:
  etcd:
    uid: 52034
    gid: 52034
  kube-api:
    secrets_encryption_config:
      enabled: true
    audit_log:
      enabled: true
    event_rate_limit:
      enabled: true
    pod_security_policy: true
  kube-controller:
    extra_args:
      feature-gates: RotateKubeletServerCertificate=true
  kubelet:
    extra_args:
      feature-gates: RotateKubeletServerCertificate=true
      protect-kernel-defaults: true
    generate_serving_certificate: true
addons: |
  # Upstream Kubernetes restricted PSP policy
  # https://github.com/kubernetes/website/blob/564baf15c102412522e9c8fc6ef2b5ff5b6e766c/content/en/examples/policy/restricted-psp.yaml
  apiVersion: policy/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: restricted-noroot
  spec:
    privileged: false
    # Required to prevent escalations to root.
    allowPrivilegeEscalation: false
    requiredDropCapabilities:
      - ALL
    # Allow core volume types.
    volumes:
      - 'configMap'
      - 'emptyDir'
      - 'projected'
      - 'secret'
      - 'downwardAPI'
      # Assume that ephemeral CSI drivers & persistentVolumes set up by the cluster admin are safe to use.
      - 'csi'
      - 'persistentVolumeClaim'
      - 'ephemeral'
    hostNetwork: false
    hostIPC: false
    hostPID: false
    runAsUser:
      # Require the container to run without root privileges.
      rule: 'MustRunAsNonRoot'
    seLinux:
      # This policy assumes the nodes are using AppArmor rather than SELinux.
      rule: 'RunAsAny'
    supplementalGroups:
      rule: 'MustRunAs'
      ranges:
        # Forbid adding the root group.
        - min: 1
          max: 65535
    fsGroup:
      rule: 'MustRunAs'
      ranges:
        # Forbid adding the root group.
        - min: 1
          max: 65535
    readOnlyRootFilesystem: false
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: psp:restricted-noroot
  rules:
    - apiGroups:
      - extensions
      resourceNames:
      - restricted-noroot
      resources:
      - podsecuritypolicies
      verbs:
      - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: psp:restricted-noroot
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: psp:restricted-noroot
  subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:authenticated
  ---
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: default-allow-all
  spec:
    podSelector: {}
    ingress:
    - {}
    egress:
    - {}
    policyTypes:
    - Ingress
    - Egress
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: default
  automountServiceAccountToken: false
```

</TabItem>
</Tabs>
## Reference Hardened RKE Cluster Template Configuration

The reference RKE cluster template provides the minimum required configuration to achieve a hardened installation of Kubernetes. RKE templates are used to provision Kubernetes and define Rancher settings. Follow the Rancher [documentation](../../../../getting-started/installation-and-upgrade/installation-and-upgrade.md) for additional information about installing RKE and its template details.

<Tabs groupId="rke1-version">
<TabItem value="v1.25 and Newer" default>

```yaml
#
# Cluster Config
#
default_pod_security_admission_configuration_template_name: rancher-restricted
enable_network_policy: true
local_cluster_auth_endpoint:
  enabled: true
name: # Define cluster name

#
# Rancher Config
#
rancher_kubernetes_engine_config:
  addon_job_timeout: 45
  authentication:
    strategy: x509|webhook
  kubernetes_version: # Define RKE version
  services:
    etcd:
      uid: 52034
      gid: 52034
    kube-api:
      audit_log:
        enabled: true
      event_rate_limit:
        enabled: true
      pod_security_policy: false
      secrets_encryption_config:
        enabled: true
    kube-controller:
      extra_args:
        feature-gates: RotateKubeletServerCertificate=true
        tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
    kubelet:
      extra_args:
        feature-gates: RotateKubeletServerCertificate=true
        tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
      generate_serving_certificate: true
    scheduler:
      extra_args:
        tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
```

</TabItem>
<TabItem value="v1.24 and Older">

```yaml
#
# Cluster Config
#
default_pod_security_policy_template_id: restricted-noroot
enable_network_policy: true
local_cluster_auth_endpoint:
  enabled: true
name: # Define cluster name

#
# Rancher Config
#
rancher_kubernetes_engine_config:
  addon_job_timeout: 45
  authentication:
    strategy: x509|webhook
  kubernetes_version: # Define RKE version
  services:
    etcd:
      uid: 52034
      gid: 52034
    kube-api:
      audit_log:
        enabled: true
      event_rate_limit:
        enabled: true
      pod_security_policy: true
      secrets_encryption_config:
        enabled: true
    kube-controller:
      extra_args:
        feature-gates: RotateKubeletServerCertificate=true
        tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
    kubelet:
      extra_args:
        feature-gates: RotateKubeletServerCertificate=true
        protect-kernel-defaults: true
        tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
      generate_serving_certificate: true
    scheduler:
      extra_args:
        tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
```

</TabItem>
</Tabs>

## Conclusion

If you have followed this guide, your RKE custom cluster provisioned by Rancher will be configured to pass the CIS Kubernetes Benchmark. You can review our RKE self-assessment guides to understand how we verified each of the benchmarks and how you can do the same on your cluster.
---
title: RKE2 Hardening Guides
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/rancher-security/hardening-guides/rke2-hardening-guide"/>
</head>

This document provides prescriptive guidance for hardening an RKE2 cluster intended for production use, before provisioning it with Rancher. It outlines the configurations and controls required to satisfy the Center for Internet Security (CIS) Kubernetes Benchmark controls.

:::note
This hardening guide describes how to secure the nodes in your cluster. We recommend that you follow this guide before you install Kubernetes.
:::

This hardening guide is intended for RKE2 clusters and is associated with the following versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:

| Rancher Version | CIS Benchmark Version | Kubernetes Version |
|-----------------|-----------------------|------------------------------|
| Rancher v2.7 | Benchmark v1.23 | Kubernetes v1.23 |
| Rancher v2.7 | Benchmark v1.24 | Kubernetes v1.24 |
| Rancher v2.7 | Benchmark v1.7 | Kubernetes v1.25 up to v1.26 |

:::note
- In Benchmark v1.24 and later, some check ids might fail due to new file permission requirements (600 instead of 644). Impacted check ids: `1.1.1`, `1.1.3`, `1.1.5`, `1.1.7`, `1.1.13`, `1.1.15`, `1.1.17`, `4.1.3`, `4.1.5` and `4.1.9`.
- In Benchmark v1.7, the `--protect-kernel-defaults` (`4.2.6`) parameter is no longer required, and was removed by CIS.
:::

For more details on how to evaluate a hardened RKE2 cluster against the official CIS benchmark, refer to the RKE2 self-assessment guides for specific Kubernetes and CIS benchmark versions.

RKE2 passes a number of the Kubernetes CIS controls without modification, as it applies several security mitigations by default. There are some notable exceptions that require manual intervention to fully comply with the CIS Benchmark:

1. RKE2 will not modify the host operating system. Therefore, you, the operator, must make a few host-level modifications.
2. Certain CIS controls for Network Policies and Pod Security Standards (or Pod Security Policies (PSP) on RKE2 versions prior to v1.25) will restrict the functionality of the cluster. You must opt into having RKE2 configure these for you. To help ensure these requirements are met, RKE2 can be started with the `profile` flag set to `cis-1.23` for v1.25 and newer or `cis-1.6` for v1.24 and older.

## Host-level requirements

There are two areas of host-level requirements: kernel parameters and etcd process/directory configuration. These are outlined in this section.

### Ensure `protect-kernel-defaults` is set

<Tabs groupId="rke2-version">
<TabItem value="v1.25 and Newer" default>

The `protect-kernel-defaults` parameter is no longer required since CIS Benchmark v1.7.

</TabItem>
<TabItem value="v1.24 and Older">

This is a kubelet flag that causes the kubelet to exit if the required kernel parameters are unset or are set to values that differ from the kubelet's defaults.

The `protect-kernel-defaults` flag can be set in the cluster configuration in Rancher:

```yaml
spec:
  rkeConfig:
    machineSelectorConfig:
      - config:
          protect-kernel-defaults: true
```

</TabItem>
</Tabs>

### Set kernel parameters

The following `sysctl` configuration is recommended for all node types in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`:

```ini
vm.panic_on_oom=0
vm.overcommit_memory=1
kernel.panic=10
kernel.panic_on_oops=1
```

Run `sudo sysctl -p /etc/sysctl.d/90-kubelet.conf` to apply the settings.
### Ensure etcd is configured properly

The CIS Benchmark requires that the etcd data directory be owned by the `etcd` user and group. This implicitly requires that the etcd process run as the host-level `etcd` user. To achieve this, RKE2 takes several steps when started with a valid `cis-1.xx` profile:

1. It checks that the `etcd` user and group exist on the host. If they don't, it exits with an error.
2. It creates etcd's data directory with `etcd` as the user and group owner.
3. It ensures the etcd process runs as the `etcd` user and group by setting the etcd static pod's `SecurityContext` appropriately.

To meet the above requirements, you must create the `etcd` user on each node.

#### Create the etcd user

On some Linux distributions, the `useradd` command will not create a group. The `-U` flag is included below to account for that. This flag tells `useradd` to create a group with the same name as the user.

```bash
sudo useradd -r -c "etcd user" -s /sbin/nologin -M etcd -U
```

## Kubernetes runtime requirements

The runtime requirements to pass the CIS Benchmark are centered around pod security, network policies and kernel parameters. Most of this is handled automatically by RKE2 when using a valid `cis-1.xx` profile, but some additional operator intervention is required. These requirements are outlined in this section.

### PodSecurity

RKE2 always runs with some amount of pod security enabled.

<Tabs groupId="rke2-version">
<TabItem value="v1.25 and Newer" default>

On v1.25 and newer, [Pod Security Admissions (PSAs)](https://kubernetes.io/docs/concepts/security/pod-security-admission/) are used for pod security.

Below is the minimum configuration needed to harden RKE2 so that it passes the CIS v1.7 hardened profile (`rke2-cis-1.7-hardened`) available in Rancher.

```yaml
spec:
  defaultPodSecurityAdmissionConfigurationTemplateName: rancher-restricted
  rkeConfig:
    machineSelectorConfig:
      - config:
          profile: cis-1.23
```

When both the `defaultPodSecurityAdmissionConfigurationTemplateName` and `profile` flags are set, Rancher and RKE2 do the following:

1. Check that host-level requirements have been met. If they haven't, RKE2 exits with a fatal error describing the unmet requirements.
2. Apply network policies that allow the cluster to pass the associated controls.
3. Configure the Pod Security Admission Controller with the PSA configuration template `rancher-restricted`, enforcing restricted mode in all namespaces except the ones in the template's exemption list. These namespaces are exempted so that system pods can run without restrictions, which is required for proper operation of the cluster.

:::note
If you intend to import an RKE2 cluster into Rancher, please consult the [documentation](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) for how to configure the PSA to exempt Rancher system namespaces.
:::
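For namespaces you create yourself, pod security can also be tuned per namespace with the standard upstream PSA labels. The sketch below is illustrative; `my-app` is a hypothetical namespace, and these labels are upstream Kubernetes, not Rancher-specific:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app  # hypothetical namespace
  labels:
    # Reject pods that do not meet the restricted Pod Security Standard.
    pod-security.kubernetes.io/enforce: restricted
    # Also record violations in the audit log and surface them as warnings.
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```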
</TabItem>

<TabItem value="v1.24 and Older">

On Kubernetes v1.24 and older, the `PodSecurityPolicy` admission controller is always enabled.

Below is the minimum configuration needed to harden RKE2 so that it passes the CIS v1.23 hardened profile (`rke2-cis-1.23-hardened`) available in Rancher.

:::note
In the following example, the profile is set to `cis-1.6`, which is the value defined upstream in RKE2, but the cluster is actually configured to pass the CIS v1.23 hardened profile.
:::

```yaml
spec:
  defaultPodSecurityPolicyTemplateName: restricted-noroot
  rkeConfig:
    machineSelectorConfig:
      - config:
          profile: cis-1.6
```

When both the `defaultPodSecurityPolicyTemplateName` and `profile` flags are set, Rancher and RKE2 do the following:

1. Check that host-level requirements have been met. If they haven't, RKE2 exits with a fatal error describing the unmet requirements.
2. Apply network policies that allow the cluster to pass the associated controls.
3. Configure runtime pod security policies that allow the cluster to pass the associated controls.

</TabItem>
</Tabs>

:::note
The Kubernetes control plane components and critical additions such as CNI, DNS, and Ingress run as pods in the `kube-system` namespace. Therefore, this namespace has a less restrictive policy so that these components can run properly.
:::

### NetworkPolicies

When run with a valid `cis-1.xx` profile, RKE2 puts `NetworkPolicies` in place that pass the CIS Benchmark for Kubernetes' built-in namespaces. These namespaces are: `kube-system`, `kube-public`, `kube-node-lease`, and `default`.

The `NetworkPolicy` used only allows pods within the same namespace to talk to each other. The notable exception is that it allows DNS requests to be resolved.

:::note
Operators must manage network policies as normal for any additional namespaces that are created.
:::
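For an operator-created namespace, a policy with the same same-namespace intent might look like the following sketch. This is an illustration, not the exact policy RKE2 applies, and `my-app` is a hypothetical namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: my-app  # hypothetical namespace
spec:
  podSelector: {}  # select every pod in the namespace...
  ingress:
    - from:
        - podSelector: {}  # ...and accept traffic only from pods in this namespace
  policyTypes:
    - Ingress
```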
### Configure `default` service account
|
||||
|
||||
**Set `automountServiceAccountToken` to `false` for `default` service accounts**
|
||||
|
||||
Kubernetes provides a `default` service account which is used by cluster workloads where no specific service account is assigned to the pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. The `default` service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.
For each namespace including `default` and `kube-system` on a standard RKE2 install, the `default` service account must include this value:
```yaml
automountServiceAccountToken: false
```
For namespaces created by the cluster operator, the following script and configuration file can be used to configure the `default` service account.
Save the configuration below to a file called `account_update.yaml`.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
automountServiceAccountToken: false
```
Create a bash script file called `account_update.sh`. Be sure to `sudo chmod +x account_update.sh` so the script has execute permissions.
```bash
#!/bin/bash -e

for namespace in $(kubectl get namespaces -A -o=jsonpath="{.items[*]['metadata.name']}"); do
  echo -n "Patching namespace $namespace - "
  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
done
```
Execute this script to apply the `account_update.yaml` configuration to the `default` service account in all namespaces.
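To spot-check the result, a loop like the following (an illustrative check, not part of the official guide; it requires `kubectl` access to a live cluster) prints the `automountServiceAccountToken` value of each namespace's `default` service account:

```shell
# Print each namespace's default service account token setting.
# "false" indicates the patch was applied; an empty value means the field is unset.
for ns in $(kubectl get namespaces -o=jsonpath='{.items[*].metadata.name}'); do
  echo -n "$ns: "
  kubectl get serviceaccount default -n "$ns" -o=jsonpath='{.automountServiceAccountToken}'
  echo
done
```

Note that `kubectl get serviceaccount default --all-namespaces` cannot be used here, because kubectl does not allow retrieving a resource by name across all namespaces.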
### API Server audit configuration
CIS requirements 1.2.19 to 1.2.22 are related to configuring audit logs for the API Server. When RKE2 is started with the `profile` flag set, it will automatically configure hardened `--audit-log-` parameters in the API Server to pass those CIS checks.
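For reference, CIS checks 1.2.19 to 1.2.22 cover the `--audit-log-path`, `--audit-log-maxage`, `--audit-log-maxbackup`, and `--audit-log-maxsize` flags. On a cluster where you need to set these manually rather than via the `profile` flag, they can be passed through the RKE2 configuration file; the values below are illustrative examples, not the exact values RKE2 applies:

```yaml
# /etc/rancher/rke2/config.yaml -- illustrative values only
kube-apiserver-arg:
  - "audit-log-path=/var/lib/rancher/rke2/server/logs/audit.log"
  - "audit-log-maxage=30"
  - "audit-log-maxbackup=10"
  - "audit-log-maxsize=100"
```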
RKE2's default audit policy does not log requests in the API Server. This gives cluster operators the flexibility to customize an audit policy to suit their auditing requirements and needs, as these are specific to each user's environment and policies.
A default audit policy is created by RKE2 when started with the `profile` flag set. The policy is defined in `/etc/rancher/rke2/audit-policy.yaml`.
```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
metadata:
  creationTimestamp: null
rules:
- level: None
```
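To actually capture audit events, replace the `None` rule with a policy that fits your requirements. As a minimal sketch (the exempted paths are illustrative, not a recommendation), the following logs request metadata for everything except low-value health and version probes:

```yaml
# Illustrative custom policy -- tune the rules to your own audit requirements.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Skip health and version probes to reduce noise (example paths).
  - level: None
    nonResourceURLs:
      - /healthz*
      - /version
  # Log metadata (user, verb, resource) for all other requests.
  - level: Metadata
```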
## Reference Hardened RKE2 Template Configuration
Use the following reference template configuration in Rancher to create a hardened RKE2 custom cluster. This reference does not include other required **cluster configuration** directives, which vary depending on your environment.
<Tabs groupId="rke2-version">
<TabItem value="v1.25 and Newer" default>
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: # Define cluster name
spec:
  defaultPodSecurityAdmissionConfigurationTemplateName: rancher-restricted
  kubernetesVersion: # Define RKE2 version
  rkeConfig:
    machineSelectorConfig:
      - config:
          profile: cis-1.23
```
</TabItem>
<TabItem value="v1.24 and Older">
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: # Define cluster name
spec:
  defaultPodSecurityPolicyTemplateName: restricted-noroot
  kubernetesVersion: # Define RKE2 version
  rkeConfig:
    machineSelectorConfig:
      - config:
          profile: cis-1.6
          protect-kernel-defaults: true
```
</TabItem>
</Tabs>
## Conclusion
If you have followed this guide, your RKE2 custom cluster provisioned by Rancher will be configured to pass the CIS Kubernetes Benchmark. You can review our RKE2 self-assessment guides to understand how we verified each of the benchmarks and how you can do the same on your cluster.