diff --git a/content/rancher/v2.x/en/security/_index.md b/content/rancher/v2.x/en/security/_index.md index ab97c117bbb..4789b369162 100644 --- a/content/rancher/v2.x/en/security/_index.md +++ b/content/rancher/v2.x/en/security/_index.md @@ -55,7 +55,7 @@ Each version of the hardening guide is intended to be used with specific version Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version ------------------------|----------------|-----------------------|------------------ -[Hardening Guide v2.3.4]({{}}/rancher/v2.x/en/security/hardening-2.3.4/) | Rancher v2.3.4 | Benchmark v1.5 | Kubernetes v1.15 +[Hardening Guide v2.3.5]({{}}/rancher/v2.x/en/security/hardening-2.3.5/) | Rancher v2.3.5 | Benchmark v1.5 | Kubernetes v1.15 [Hardening Guide v2.3.3]({{}}/rancher/v2.x/en/security/hardening-2.3.3/) | Rancher v2.3.3 | Benchmark v1.4.1 | Kubernetes v1.14, v1.15, and v1.16 [Hardening Guide v2.3]({{}}/rancher/v2.x/en/security/hardening-2.3/) | Rancher v2.3.0-v2.3.2 | Benchmark v1.4.1 | Kubernetes v1.15 [Hardening Guide v2.2]({{}}/rancher/v2.x/en/security/hardening-2.2/) | Rancher v2.2.x | Benchmark v1.4.1 and 1.4.0 | Kubernetes v1.13 diff --git a/content/rancher/v2.x/en/security/benchmark-2.3.4/_index.md b/content/rancher/v2.x/en/security/benchmark-2.3.5/_index.md similarity index 73% rename from content/rancher/v2.x/en/security/benchmark-2.3.4/_index.md rename to content/rancher/v2.x/en/security/benchmark-2.3.5/_index.md index 57e25c15e7c..d06ce9b3af0 100644 --- a/content/rancher/v2.x/en/security/benchmark-2.3.4/_index.md +++ b/content/rancher/v2.x/en/security/benchmark-2.3.5/_index.md @@ -1,23 +1,23 @@ --- -title: CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.3.4 +title: CIS Benchmark Rancher Self-Assessment Guide - v2.3.5 weight: 105 --- -### CIS Kubernetes Benchmark 1.5 - Rancher 2.3.4 with Kubernetes 1.15 +### CIS Kubernetes Benchmark 1.5 - Rancher 2.3.5 with Kubernetes 1.15 -[Click here to download a PDF version of this 
document](https://releases.rancher.com/documents/security/2.3.4/Rancher_Benchmark_Assessment.pdf) +[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.3.5/Rancher_Benchmark_Assessment.pdf) #### Overview -This document is a companion to the Rancher v2.3.4 security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of Rancher, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the benchmark. +This document is a companion to the Rancher v2.3.5 security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of Rancher, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the benchmark. This guide corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark: Self Assessment Guide Version | Rancher Version | Hardening Guide Version | Kubernetes Version | CIS Benchmark Version ---------------------------|----------|---------|-------|----- -Self Assessment Guide v2.3.4 | Rancher v2.3.4 | Hardening Guide v2.3.4 | Kubernetes v1.15 | Benchmark v1.5 +Self Assessment Guide v2.3.5 | Rancher v2.3.5 | Hardening Guide v2.3.5 | Kubernetes v1.15 | Benchmark v1.5 -Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher-created clusters. +Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply and will have a result of `Not Applicable`. 
This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher-created clusters. This document is to be used by Rancher operators, security teams, auditors and decision makers. @@ -27,9 +27,9 @@ For more detail about each audit, including rationales and remediations for fail Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files. -Scoring the commands is different in Rancher Labs than in the CIS Benchmark. Where the commands differ from the original CIS benchmark, the commands specific to Rancher Labs are provided for testing. +Scoring the commands is different in Rancher Labs than in the CIS Benchmark. Where the commands differ from the original CIS benchmark, the commands specific to Rancher Labs are provided for testing. Only **scored** tests will be covered in this guide. -When performing the tests, you will need access to the Docker command line on the hosts of all three RKE roles. The commands also make use of the the `jq` command to provide human-readable formatting. +When performing the tests, you will need access to the Docker command line on the hosts of all three RKE roles. The commands also make use of the `jq` and `kubectl` (with valid config) commands to provide human-readable formatting. ### Controls @@ -39,96 +39,60 @@ When performing the tests, you will need access to the Docker command line on th #### 1.1.1 Ensure that the API server pod specification file permissions are set to `644` or more restrictive (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the API server. All configuration is passed in as arguments at container run time.
#### 1.1.2 Ensure that the API server pod specification file ownership is set to `root:root` (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the API server. All configuration is passed in as arguments at container run time. #### 1.1.3 Ensure that the controller manager pod specification file permissions are set to `644` or more restrictive (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the controller manager. All configuration is passed in as arguments at container run time. #### 1.1.4 Ensure that the controller manager pod specification file ownership is set to `root:root` (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the controller manager. All configuration is passed in as arguments at container run time. #### 1.1.5 Ensure that the scheduler pod specification file permissions are set to `644` or more restrictive (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. #### 1.1.6 Ensure that the scheduler pod specification file ownership is set to `root:root` (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. #### 1.1.7 Ensure that the etcd pod specification file permissions are set to `644` or more restrictive (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. 
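The 1.1.x checks above are marked `Not Applicable` because RKE passes all configuration as container arguments rather than files. Where a later audit does need to confirm a flag, it greps the component's process arguments. A minimal, hedged sketch of that pattern, run against a sample command line instead of live `ps` output from a master node (the flag values here are illustrative):

```shell
# Sketch of the process-argument audit pattern used throughout this guide.
# On a real RKE node the audit would be:
#   /bin/ps -ef | grep kube-apiserver | grep -v grep
# Here a sample command line stands in for live ps output.
sample='kube-apiserver --anonymous-auth=false --authorization-mode=Node,RBAC'

# Extract a single flag and its value from the argument list.
echo "$sample" | grep -o -- '--anonymous-auth=[^ ]*'   # prints --anonymous-auth=false
```

On a hardened cluster the extracted value should match the expected result listed for the corresponding control.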
#### 1.1.8 Ensure that the etcd pod specification file ownership is set to `root:root` (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. -#### 1.1.9 Ensure that the Container Network Interface file permissions are set to `644` or more restrictive (Not Scored) - -**Result:** WARN - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. -For example, - -``` bash -chmod 644 -``` - -**Audit:** - -``` -stat -c %a -``` - -#### 1.1.10 Ensure that the Container Network Interface file ownership is set to `root:root` (Not Scored) - -**Result:** WARN - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. -For example, - -``` bash -chown root:root -``` - -**Audit:** - -``` -stat -c %U:%G -``` - #### 1.1.11 Ensure that the etcd data directory permissions are set to `700` or more restrictive (Scored) **Result:** PASS @@ -150,7 +114,7 @@ chmod 700 /var/lib/etcd **Audit:** ``` -ps -ef | grep etcd | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%' | xargs stat -c %a +/mnt/kube-bench/test_helpers/1.1.11.sh etcd ``` **Expected result**: @@ -173,14 +137,14 @@ ps -ef | grep etcd Run the below command (based on the etcd data directory found above). 
For example, -``` bash +``` bash chown etcd:etcd /var/lib/etcd ``` **Audit:** ``` -ps -ef | grep etcd | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%' | xargs stat -c %U:%G +/mnt/kube-bench/test_helpers/1.1.12.sh etcd ``` **Expected result**: @@ -191,7 +155,7 @@ ps -ef | grep etcd | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%' #### 1.1.13 Ensure that the `admin.conf` file permissions are set to `644` or more restrictive (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE does not store the kubernetes default kubeconfig credentials file on the nodes. It’s presented to user where RKE is run. @@ -199,7 +163,7 @@ We recommend that this `kube_config_cluster.yml` file be kept in secure store. #### 1.1.14 Ensure that the admin.conf file ownership is set to `root:root` (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE does not store the kubernetes default kubeconfig credentials file on the nodes. It’s presented to user where RKE is run. @@ -207,112 +171,106 @@ We recommend that this `kube_config_cluster.yml` file be kept in secure store. #### 1.1.15 Ensure that the `scheduler.conf` file permissions are set to `644` or more restrictive (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. #### 1.1.16 Ensure that the `scheduler.conf` file ownership is set to `root:root` (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. #### 1.1.17 Ensure that the `controller-manager.conf` file permissions are set to `644` or more restrictive (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the controller manager. 
All configuration is passed in as arguments at container run time. #### 1.1.18 Ensure that the `controller-manager.conf` file ownership is set to `root:root` (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the controller manager. All configuration is passed in as arguments at container run time. #### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to `root:root` (Scored) -**Result:** WARN - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. -For example, - -``` bash -chown -R root:root /etc/kubernetes/pki/ -``` - -**Audit:** - -``` -ls -laR /etc/kubernetes/pki/ -``` - -#### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to `644` or more restrictive (Scored) - -**Result:** WARN - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. -For example, - -``` bash -chmod -R 644 /etc/kubernetes/pki/*.crt -``` - -**Audit:** - -``` -stat -c %n %a /etc/kubernetes/pki/*.crt -``` - -#### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to `600` (Scored) - -**Result:** WARN - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. -For example, - -``` bash -chmod -R 600 /etc/kubernetes/pki/*.key -``` - -**Audit:** - -``` -stat -c %n %a /etc/kubernetes/pki/*.key -``` - -### 1.2 API Server - -#### 1.2.1 Ensure that the `--anonymous-auth` argument is set to `false` (Not Scored) - **Result:** PASS **Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the below parameter. +Run the below command (based on the file location on your system) on the master node. 
+For example, ``` bash ---anonymous-auth=false +chown -R root:root /etc/kubernetes/ssl ``` **Audit:** ``` -/bin/ps -ef | grep kube-apiserver | grep -v grep +stat -c %U:%G /etc/kubernetes/ssl ``` **Expected result**: ``` -'false' is equal to 'false' +'root:root' is present ``` +#### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to `644` or more restrictive (Scored) + +**Result:** PASS + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, + +``` bash +chmod -R 644 /etc/kubernetes/ssl +``` + +**Audit:** + +``` +/mnt/kube-bench/test_helpers/check_files_permissions.sh '/etc/kubernetes/ssl/*.pem' +``` + +**Expected result**: + +``` +'true' is present +``` + +#### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to `600` (Scored) + +**Result:** PASS + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, + +``` bash +chmod -R 600 /etc/kubernetes/ssl/certs/serverca +``` + +**Audit:** + +``` +/mnt/kube-bench/test_helpers/1.1.21.sh /etc/kubernetes/ssl +``` + +**Expected result**: + +``` +'pass' is present +``` + +### 1.2 API Server + #### 1.2.2 Ensure that the `--basic-auth-file` argument is not set (Scored) **Result:** PASS **Remediation:** @@ -499,32 +457,6 @@ for example: 'Node,RBAC' has 'RBAC' ``` -#### 1.2.10 Ensure that the admission control plugin `EventRateLimit` is set (Not Scored) - -**Result:** PASS - -**Remediation:** -Follow the Kubernetes documentation and set the desired limits in a configuration file. -Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -and set the below parameters. - -``` bash ---enable-admission-plugins=...,EventRateLimit,... 
---admission-control-config-file= -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy' has 'EventRateLimit' -``` - #### 1.2.11 Ensure that the admission control plugin `AlwaysAdmit` is not set (Scored) **Result:** PASS @@ -543,51 +475,7 @@ value that does not include `AlwaysAdmit`. **Expected result**: ``` -'ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy' not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present -``` - -#### 1.2.12 Ensure that the admission control plugin `AlwaysPullImages` is set (Not Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--enable-admission-plugins` parameter to include -`AlwaysPullImages`. - -``` bash ---enable-admission-plugins=...,AlwaysPullImages,... 
-``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy' has 'AlwaysPullImages' -``` - -#### 1.2.13 Ensure that the admission control plugin `SecurityContextDeny` is set if `PodSecurityPolicy` is not used (Not Scored) - -**Result:** WARN - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--enable-admission-plugins` parameter to include -`SecurityContextDeny`, unless `PodSecurityPolicy` is already in place. - -``` bash ---enable-admission-plugins=...,SecurityContextDeny,... -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep +'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present ``` #### 1.2.14 Ensure that the admission control plugin `ServiceAccount` is set (Scored) @@ -609,7 +497,7 @@ value that does not include `ServiceAccount`. 
**Expected result**: ``` -'ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy' has 'ServiceAccount' OR '--enable-admission-plugins' is not present +'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'ServiceAccount' OR '--enable-admission-plugins' is not present ``` #### 1.2.15 Ensure that the admission control plugin `NamespaceLifecycle` is set (Scored) @@ -658,7 +546,7 @@ Then restart the API Server. **Expected result**: ``` -'ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy' has 'PodSecurityPolicy' +'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'PodSecurityPolicy' ``` #### 1.2.17 Ensure that the admission control plugin `NodeRestriction` is set (Scored) @@ -684,7 +572,7 @@ value that includes `NodeRestriction`. 
**Expected result**: ``` -'ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy' has 'NodeRestriction' +'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'NodeRestriction' ``` #### 1.2.18 Ensure that the `--insecure-bind-address` argument is not set (Scored) @@ -1081,7 +969,7 @@ on the master node and set the `--encryption-provider-config` parameter to the p #### 1.2.34 Ensure that encryption providers are appropriately configured (Scored) -**Result:** WARN +**Result:** PASS **Remediation:** Follow the Kubernetes documentation and configure a `EncryptionConfig` file. @@ -1090,31 +978,13 @@ In this file, choose **aescbc**, **kms** or **secretbox** as the encryption prov **Audit:** ``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -#### 1.2.35 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Not Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the below parameter. 
- -``` bash ---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep +/mnt/kube-bench/test_helpers/1.2.34.sh /etc/kubernetes/ssl/encryption.yaml ``` **Expected result**: ``` -'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' has 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' +'--pass' is present ``` ### 1.3 Controller Manager @@ -1482,109 +1352,76 @@ node and either remove the `--peer-auto-tls` parameter or set it to `false`. '--peer-auto-tls' is not present OR '--peer-auto-tls' is present ``` -#### 2.7 Ensure that a unique Certificate Authority is used for etcd (Not Scored) +## 3 Control Plane Configuration +### 3.2 Logging + +#### 3.2.1 Ensure that a minimal audit policy is created (Scored) **Result:** PASS **Remediation:** -[Manual test] -Follow the etcd documentation and create a dedicated certificate authority setup for the -etcd service. -Then, edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the -master node and set the below parameter. - -``` bash ---trusted-ca-file= -``` +Create an audit policy file for your cluster. 
**Audit:** ``` -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +/mnt/kube-bench/test_helpers/3.2.1.sh kube-apiserver ``` **Expected result**: ``` -'--trusted-ca-file' is present +'--audit-policy-file' is present ``` -## 3 Control Plane Configuration -### 3.1 Authentication and Authorization - -#### 3.1.1 Client certificate authentication should not be used for users (Not Scored) - -**Result:** WARN - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be -implemented in place of client certificates. - -### 3.2 Logging - -#### 3.2.1 Ensure that a minimal audit policy is created (Scored) - -**Result:** WARN - -**Remediation:** -Create an audit policy file for your cluster. - -#### 3.2.2 Ensure that the audit policy covers key security concerns (Not Scored) - -**Result:** WARN - -**Remediation:** -Consider modification of the audit policy in use on the cluster to include these items, at a -minimum. - ## 4 Worker Node Security Configuration ### 4.1 Worker Node Configuration Files #### 4.1.1 Ensure that the kubelet service file permissions are set to `644` or more restrictive (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. #### 4.1.2 Ensure that the kubelet service file ownership is set to `root:root` (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. #### 4.1.3 Ensure that the proxy kubeconfig file permissions are set to `644` or more restrictive (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the proxy service. All configuration is passed in as arguments at container run time. 
#### 4.1.4 Ensure that the proxy kubeconfig file ownership is set to `root:root` (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the proxy service. All configuration is passed in as arguments at container run time. #### 4.1.5 Ensure that the kubelet.conf file permissions are set to `644` or more restrictive (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. #### 4.1.6 Ensure that the kubelet.conf file ownership is set to `root:root` (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. #### 4.1.7 Ensure that the certificate authorities file permissions are set to `644` or more restrictive (Scored) -**Result:** WARN +**Result:** PASS **Remediation:** Run the following command to modify the file permissions of the @@ -1593,6 +1430,18 @@ Run the following command to modify the file permissions of the --client-ca-file chmod 644 ``` +**Audit:** + +``` +stat -c %a /etc/kubernetes/ssl/kube-ca.pem +``` + +**Expected result**: + +``` +'644' is equal to '644' OR '640' is present OR '600' is present +``` + #### 4.1.8 Ensure that the client certificate authorities file ownership is set to `root:root` (Scored) **Result:** PASS @@ -1618,14 +1467,14 @@ chown root:root #### 4.1.9 Ensure that the kubelet configuration file has permissions set to `644` or more restrictive (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. 
#### 4.1.10 Ensure that the kubelet configuration file ownership is set to `root:root` (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. @@ -1827,7 +1676,7 @@ systemctl restart kubelet.service **Expected result**: ``` -'1800s' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present +'30m' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present ``` #### 4.2.6 Ensure that the ```--protect-kernel-defaults``` argument is set to `true` (Scored) @@ -1904,64 +1753,9 @@ systemctl restart kubelet.service 'true' is equal to 'true' OR '--make-iptables-util-chains' is not present ``` -#### 4.2.8 Ensure that the `--hostname-override` argument is not set (Not Scored) - -**Result:** WARN - -**Remediation:** -Edit the kubelet service file `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` -on each worker node and remove the `--hostname-override` argument from the -`KUBELET_SYSTEM_PODS_ARGS` variable. -Based on your system, restart the kubelet service. For example: - -``` bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -#### 4.2.9 Ensure that the `--event-qps` argument is set to `0` or a level which ensures appropriate event capture (Not Scored) - -**Result:** PASS - -**Remediation:** -If using a Kubelet config file, edit the file to set `eventRecordQPS`: to an appropriate level. -If using command line arguments, edit the kubelet service file -`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and -set the below parameter in `KUBELET_SYSTEM_PODS_ARGS` variable. -Based on your system, restart the kubelet service. 
For example: - -``` bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'0' is equal to '0' -``` - #### 4.2.10 Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate (Scored) -**Result:** INFO +**Result:** Not Applicable **Remediation:** RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. @@ -2039,90 +1833,12 @@ systemctl restart kubelet.service 'true' is equal to 'true' ``` -#### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Not Scored) - -**Result:** PASS - -**Remediation:** -If using a Kubelet config file, edit the file to set `TLSCipherSuites`: to - -``` bash -TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -``` - -or to a subset of these values. -If using executable arguments, edit the kubelet service file -`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and -set the `--tls-cipher-suites` parameter as follows, or to a subset of these values. - -``` bash ---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -``` - -Based on your system, restart the kubelet service. 
For example: - -``` bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' contains valid elements from 'TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256' -``` - ## 5 Kubernetes Policies ### 5.1 RBAC and Service Accounts -#### 5.1.1 Ensure that the cluster-admin role is only used where required (Not Scored) - -**Result:** WARN - -**Remediation:** -Identify all `clusterrolebindings` to the `cluster-admin` role. Check if they are used and -if they need this role or if they could use a role with fewer privileges. -Where possible, first bind users to a lower privileged role and then remove the -`clusterrolebinding` to the `cluster-admin` role : - -``` bash -kubectl delete clusterrolebinding [name] -``` - -#### 5.1.2 Minimize access to secrets (Not Scored) - -**Result:** WARN - -**Remediation:** -Where possible, remove `get`, `list` and `watch` access to secret objects in the cluster. - -#### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Not Scored) - -**Result:** WARN - -**Remediation:** -Where possible replace any use of wildcards in `clusterroles` and roles with specific -objects or actions. - -#### 5.1.4 Minimize access to create pods (Not Scored) - -**Result:** WARN - #### 5.1.5 Ensure that default service accounts are not actively used. 
(Scored) -**Result:** WARN +**Result:** PASS **Remediation:** Create explicit service accounts wherever a Kubernetes workload requires specific access @@ -2133,192 +1849,140 @@ Modify the configuration of each default service account to include this value automountServiceAccountToken: false ``` -#### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Not Scored) +**Audit:** -**Result:** WARN +``` +/mnt/kube-bench/test_helpers/5.1.5.sh +``` -**Remediation:** -Modify the definition of pods and service accounts which do not need to mount service -account tokens to disable it. +**Expected result**: + +``` +'--pass' is present +``` ### 5.2 Pod Security Policies -#### 5.2.1 Minimize the admission of privileged containers (Not Scored) - -**Result:** WARN - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that -the `.spec.privileged` field is omitted or set to `false`. - #### 5.2.2 Minimize the admission of containers wishing to share the host process ID namespace (Scored) -**Result:** WARN +**Result:** PASS **Remediation:** Create a PSP as described in the Kubernetes documentation, ensuring that the `.spec.hostPID` field is omitted or set to `false`. +**Audit:** + +``` +kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` + +**Expected result**: + +``` +1 is greater than 0 +``` + #### 5.2.3 Minimize the admission of containers wishing to share the host IPC namespace (Scored) -**Result:** WARN +**Result:** PASS **Remediation:** Create a PSP as described in the Kubernetes documentation, ensuring that the `.spec.hostIPC` field is omitted or set to `false`. 
+**Audit:** + +``` +kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` + +**Expected result**: + +``` +1 is greater than 0 +``` + #### 5.2.4 Minimize the admission of containers wishing to share the host network namespace (Scored) -**Result:** WARN +**Result:** PASS **Remediation:** Create a PSP as described in the Kubernetes documentation, ensuring that the `.spec.hostNetwork` field is omitted or set to `false`. +**Audit:** + +``` +kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` + +**Expected result**: + +``` +1 is greater than 0 +``` + #### 5.2.5 Minimize the admission of containers with `allowPrivilegeEscalation` (Scored) -**Result:** WARN +**Result:** PASS **Remediation:** Create a PSP as described in the Kubernetes documentation, ensuring that the `.spec.allowPrivilegeEscalation` field is omitted or set to `false`. -#### 5.2.6 Minimize the admission of root containers (Not Scored) +**Audit:** -**Result:** WARN +``` +kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -`.spec.runAsUser.rule` is set to either `MustRunAsNonRoot` or `MustRunAs` with the range of -UIDs not including `0`. 
+**Expected result**: -#### 5.2.7 Minimize the admission of containers with the `NET_RAW` capability (Not Scored) - -**Result:** WARN - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -`.spec.requiredDropCapabilities` is set to include either `NET_RAW` or `ALL`. - -#### 5.2.8 Minimize the admission of containers with added capabilities (Not Scored) - -**Result:** WARN - -**Remediation:** -Ensure that `allowedCapabilities` is not present in PSPs for the cluster unless -it is set to an empty array. - -#### 5.2.9 Minimize the admission of containers with capabilities assigned (Not Scored) - -**Result:** WARN - -**Remediation:** -Review the use of capabilites in applications runnning on your cluster. Where a namespace -contains applicaions which do not require any Linux capabities to operate consider adding -a PSP which forbids the admission of containers which do not drop all capabilities. +``` +1 is greater than 0 +``` ### 5.3 Network Policies and CNI -#### 5.3.1 Ensure that the CNI in use supports Network Policies (Not Scored) - -**Result:** WARN - -**Remediation:** -If the CNI plugin in use does not support network policies, consideration should be given to -making use of a different plugin, or finding an alternate mechanism for restricting traffic -in the Kubernetes cluster. - #### 5.3.2 Ensure that all Namespaces have Network Policies defined (Scored) -**Result:** WARN +**Result:** PASS **Remediation:** Follow the documentation and create `NetworkPolicy` objects as you need them. -### 5.4 Secrets Management +**Audit:** -#### 5.4.1 Prefer using secrets as files over secrets as environment variables (Not Scored) +``` +/mnt/kube-bench/test_helpers/5.3.2.sh +``` -**Result:** WARN +**Expected result**: -**Remediation:** -if possible, rewrite application code to read secrets from mounted secret files, rather than -from environment variables. 
- -#### 5.4.2 Consider external secret storage (Not Scored) - -**Result:** WARN - -**Remediation:** -Refer to the secrets management options offered by your cloud provider or a third-party -secrets management solution. - -### 5.5 Extensible Admission Control - -#### 5.5.1 Configure Image Provenance using `ImagePolicyWebhook` admission controller (Not Scored) - -**Result:** WARN - -**Remediation:** -Follow the Kubernetes documentation and setup image provenance. +``` +'--pass' is present +``` ### 5.6 General Policies -#### 5.6.1 Create administrative boundaries between resources using namespaces (Not Scored) - -**Result:** WARN - -**Remediation:** -Follow the documentation and create namespaces for objects in your deployment as you need -them. - -#### 5.6.2 Ensure that the seccomp profile is set to docker/default in your pod definitions (Not Scored) - -**Result:** WARN - -**Remediation:** -Seccomp is an alpha feature currently. By default, all alpha features are disabled. So, you -would need to enable alpha features in the apiserver by passing `"--feature- -gates=AllAlpha=true"` argument. -Edit the `/etc/kubernetes/apiserver` file on the master node and set the `KUBE_API_ARGS` -parameter to `"--feature-gates=AllAlpha=true"` -`KUBE_API_ARGS="--feature-gates=AllAlpha=true"` -Based on your system, restart the kube-apiserver service. For example: - -``` bash -systemctl restart kube-apiserver.service -``` - -Use annotations to enable the docker/default seccomp profile in your pod definitions. An -example is as below: - -``` bash -apiVersion: v1 -kind: Pod -metadata: - name: trustworthy-pod - annotations: - seccomp.security.alpha.kubernetes.io/pod: docker/default -spec: - containers: - - name: trustworthy-container - image: sotrustworthy:latest -``` - -#### 5.6.3 Apply Security Context to Your Pods and Containers (Not Scored) - -**Result:** WARN - -**Remediation:** -Follow the Kubernetes documentation and apply security contexts to your pods. 
For a -suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker -Containers. - #### 5.6.4 The default namespace should not be used (Scored) -**Result:** WARN +**Result:** PASS **Remediation:** Ensure that namespaces are created to allow for appropriate segregation of Kubernetes resources and that all new resources are created in a specific namespace. +**Audit:** + +``` +/mnt/kube-bench/test_helpers/5.6.4.sh +``` + +**Expected result**: + +``` +'0' is equal to '0' +``` + diff --git a/content/rancher/v2.x/en/security/hardening-2.3.4/_index.md b/content/rancher/v2.x/en/security/hardening-2.3.4/_index.md deleted file mode 100644 index 6b437ce37f3..00000000000 --- a/content/rancher/v2.x/en/security/hardening-2.3.4/_index.md +++ /dev/null @@ -1,491 +0,0 @@ ---- -title: Hardening Guide v2.3.4 -weight: 100 ---- - -This document provides prescriptive guidance for hardening a production installation of Rancher v2.3.4. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Information Security (CIS). - -> This hardening guide describes how to secure the nodes in your cluster, and it is recommended to follow this guide before installing Kubernetes. - -This hardening guide is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher: - -Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version -------------------------|----------------|-----------------------|------------------ -Hardening Guide v2.3.4 | Rancher v2.3.4 | Benchmark v1.5 | Kubernetes 1.15 - - -[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.3.4/Rancher_Hardening_Guide.pdf) - -### Overview - -This document provides prescriptive guidance for hardening a production installation of Rancher v2.3.4 with Kubernetes v1.15. 
It outlines the configurations required to address Kubernetes benchmark controls from the Center for Information Security (CIS). - -For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.3.4]({{< baseurl >}}/rancher/v2.x/en/security/benchmark-2.3.4/). - -### Configure Kernel Runtime Parameters - -The folowing `sysctl` configuration is recommended for all nodes type in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`: - -``` bash -vm.overcommit_memory=1 -vm.panic_on_oom=0 -kernel.panic=10 -kernel.panic_on_oops=1 -kernel.keys.root_maxkeys=1000000 -kernel.keys.root_maxbytes=25000000 -``` - -Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings. - -### Configuration Files and Permissions. - -#### kubelet.conf - -**path**: /etc/sysctl.d/kubelet.conf - -**owner**: root:root - -**permissions:** 0644 - -**contents**: - -``` text -vm.overcommit_memory=1 -kernel.panic=10 -kernel.panic_on_oops=1 -``` - -#### admission.yaml - -**path**: /opt/kubernetes/admission.yaml - -**owner**: root:root - -**permissions**: 0600 - -**content**: - -``` yaml -apiVersion: apiserver.k8s.io/v1alpha1 -kind: AdmissionConfiguration -plugins: -- name: EventRateLimit - path: /opt/kubernetes/event.yaml -``` - -#### event.yaml - -**path**: /opt/kubernetes/event.yaml - -**owner**: root:root - -**permissions**: 0600 - -**content**: - -``` yaml -apiVersion: eventratelimit.admission.k8s.io/v1alpha1 -kind: Configuration -limits: -- type: Server - qps: 5000 - burst: 20000 -``` - -#### encryption.yaml - -**path**: /opt/kubernetes/encryption.yaml - -**owner**: root:root - -**permissions**: 0600 - -**content**: - -``` yaml -apiVersion: apiserver.config.k8s.io/v1 -kind: EncryptionConfiguration -resources: - - resources: - - secrets - providers: - - aescbc: - keys: - - name: key1 - secret: - - identity: {} -``` - - -#### audit.yaml - -**path**: 
/opt/kubernetes/audit.yaml - -**owner**: root:root - -**permissions**: 0600 - -**content**: - -``` yaml -apiVersion: audit.k8s.io/v1beta1 -kind: Policy -rules: -- level: Metadata - -``` - -### Configure `etcd` user and `data-dir` permissions - -#### create `etcd` user and `group` - -``` -addgroup --gid 52034 etcd -useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd -``` - -#### create `data-dir` and set permissions -``` -mkdir -p /var/lib/etcd && chown etcd.etcd /var/lib/etcd && chmod 0700 /var/lib/etcd -``` - - -### Hardened RKE `config.yml` configuration - -``` yaml -# -# Cluster Config -# -docker_root_dir: /var/lib/docker -enable_cluster_alerting: false -enable_cluster_monitoring: false -enable_network_policy: false -# -# Rancher Config -# -rancher_kubernetes_engine_config: - addon_job_timeout: 30 - addons: |- - --- - apiVersion: v1 - kind: Namespace - metadata: - name: ingress-nginx - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: Role - metadata: - name: default-psp-role - namespace: ingress-nginx - rules: - - apiGroups: - - extensions - resourceNames: - - default-psp - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: RoleBinding - metadata: - name: default-psp-rolebinding - namespace: ingress-nginx - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: default-psp-role - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: v1 - kind: Namespace - metadata: - name: cattle-system - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: Role - metadata: - name: default-psp-role - namespace: cattle-system - rules: - - apiGroups: - - extensions - resourceNames: - - default-psp - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: RoleBinding - metadata: - name: 
default-psp-rolebinding - namespace: cattle-system - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: default-psp-role - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: extensions/v1beta1 - kind: PodSecurityPolicy - metadata: - name: restricted - spec: - requiredDropCapabilities: - - NET_RAW - privileged: false - allowPrivilegeEscalation: false - defaultAllowPrivilegeEscalation: false - fsGroup: - rule: RunAsAny - runAsUser: - rule: MustRunAsNonRoot - seLinux: - rule: RunAsAny - supplementalGroups: - rule: RunAsAny - volumes: - - emptyDir - - secret - - persistentVolumeClaim - - downwardAPI - - configMap - - projected - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: psp:restricted - rules: - - apiGroups: - - extensions - resourceNames: - - restricted - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: psp:restricted - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: psp:restricted - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: tiller - namespace: kube-system - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: tiller - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-admin - subjects: - - kind: ServiceAccount - name: tiller - namespace: kube-system - ignore_docker_version: true - kubernetes_version: v1.15.6-rancher1-2 -# -# If you are using calico on AWS -# -# network: -# plugin: calico -# calico_network_provider: -# cloud_provider: aws -# -# # To specify flannel 
interface -# -# network: -# plugin: flannel -# flannel_network_provider: -# iface: eth1 -# -# # To specify flannel interface for canal plugin -# -# network: -# plugin: canal -# canal_network_provider: -# iface: eth1 -# - network: - mtu: 0 - plugin: canal -# -# services: -# kube-api: -# service_cluster_ip_range: 10.43.0.0/16 -# kube-controller: -# cluster_cidr: 10.42.0.0/16 -# service_cluster_ip_range: 10.43.0.0/16 -# kubelet: -# cluster_domain: cluster.local -# cluster_dns_server: 10.43.0.10 -# - services: - etcd: - backup_config: - enabled: false - interval_hours: 12 - retention: 6 - safe_timestamp: false - creation: 12h - extra_args: - data-dir: /var/lib/etcd - extra_binds: - - '/var/lib/etcd:/var/lib/etcd' - gid: 52034 - retention: 72h - snapshot: false - uid: 52034 - kube_api: - always_pull_images: false - extra_args: - admission-control-config-file: /opt/kubernetes/admission.yaml - anonymous-auth: 'false' - audit-log-format: json - audit-log-maxage: '30' - audit-log-maxbackup: '10' - audit-log-maxsize: '100' - audit-log-path: /var/log/kube-audit/audit-log.json - audit-policy-file: /opt/kubernetes/audit.yaml - enable-admission-plugins: >- - ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy - encryption-provider-config: /opt/kubernetes/encryption.yaml - profiling: 'false' - service-account-lookup: 'true' - tls-cipher-suites: >- - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - extra_binds: - - '/var/log/kube-audit:/var/log/kube-audit' - - '/opt/kubernetes:/opt/kubernetes' - pod_security_policy: true - service_node_port_range: 30000-32767 - kube_controller: - 
extra_args: - address: 127.0.0.1 - feature-gates: RotateKubeletServerCertificate=true - profiling: 'false' - terminated-pod-gc-threshold: '1000' - kubelet: - extra_args: - anonymous-auth: 'false' - event-qps: '0' - feature-gates: RotateKubeletServerCertificate=true - make-iptables-util-chains: 'true' - protect-kernel-defaults: 'true' - streaming-connection-idle-timeout: 1800s - tls-cipher-suites: >- - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - fail_swap_on: false - generate_serving_certificate: true - scheduler: - extra_args: - address: 127.0.0.1 - profiling: 'false' - ssh_agent_auth: false -windows_prefered_cluster: false -``` - -### Hardened Example Ubuntu cloud-config: - -``` yaml -#cloud-config -package_update: false -packages: - - curl - - jq -runcmd: - - sysctl -w vm.overcommit_memory=1 - - sysctl -w kernel.panic=10 - - sysctl -w kernel.panic_on_oops=1 - - curl https://releases.rancher.com/install-docker/18.09.sh | sh - - usermod -aG docker ubuntu - - return=1; while [ $return != 0 ]; do sleep 2; docker ps; return=$?; done - - addgroup --gid 52034 etcd - - useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd - - mkdir -p /var/lib/etcd && chown etcd.etcd /var/lib/etcd && chmod 0700 /var/lib/etcd - - ${agent_cmd} --etcd --controlplane --worker - - mkdir /mnt/kube-bench -write_files: - - path: /etc/sysctl.d/kubelet.conf - owner: root:root - permissions: "0644" - content: | - vm.overcommit_memory=1 - kernel.panic=10 - kernel.panic_on_oops=1 - - path: /opt/kubernetes/admission.yaml - owner: root:root - permissions: "0600" - content: | - apiVersion: apiserver.k8s.io/v1alpha1 - kind: AdmissionConfiguration - plugins: - - name: EventRateLimit - path: /opt/kubernetes/event.yaml - - path: 
/opt/kubernetes/event.yaml - owner: root:root - permissions: "0600" - content: | - apiVersion: eventratelimit.admission.k8s.io/v1alpha1 - kind: Configuration - limits: - - type: Server - qps: 5000 - burst: 20000 - - path: /opt/kubernetes/encryption.yaml - owner: root:root - permissions: "0600" - content: | - apiVersion: apiserver.config.k8s.io/v1 - kind: EncryptionConfiguration - resources: - - resources: - - secrets - providers: - - aescbc: - keys: - - name: key1 - secret: LF7YiCFyWqAa2MovOgp42rArBdLBGWdjJpX2knvYAkc= - - identity: {} - - path: /opt/kubernetes/audit.yaml - owner: root:root - permissions: "0600" - content: | - apiVersion: audit.k8s.io/v1beta1 - kind: Policy - rules: - - level: Metadata -``` diff --git a/content/rancher/v2.x/en/security/hardening-2.3.5/_index.md b/content/rancher/v2.x/en/security/hardening-2.3.5/_index.md new file mode 100644 index 00000000000..80537a139d9 --- /dev/null +++ b/content/rancher/v2.x/en/security/hardening-2.3.5/_index.md @@ -0,0 +1,424 @@ +--- +title: Hardening Guide v2.3.5 +weight: 100 +--- + +This document provides prescriptive guidance for hardening a production installation of Rancher v2.3.5. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Information Security (CIS). + +> This hardening guide describes how to secure the nodes in your cluster, and it is recommended to follow this guide before installing Kubernetes. 
+
+
+This hardening guide is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:
+
+Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version
+------------------------|----------------|-----------------------|------------------
+Hardening Guide v2.3.5 | Rancher v2.3.5 | Benchmark v1.5 | Kubernetes 1.15
+
+
+[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.3.5/Rancher_Hardening_Guide.pdf)
+
+### Overview
+
+This document provides prescriptive guidance for hardening a production installation of Rancher v2.3.5 with Kubernetes v1.15. It outlines the configurations required to address Kubernetes benchmark controls from the Center for Information Security (CIS).
+
+For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.3.5]({{< baseurl >}}/rancher/v2.x/en/security/benchmark-2.3.5/).
+
+### Configure Kernel Runtime Parameters
+
+The following `sysctl` configuration is recommended for all node types in the cluster. These are the values the kubelet expects when running with `protect-kernel-defaults: "true"`. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`:
+
+``` bash
+vm.overcommit_memory=1
+vm.panic_on_oom=0
+kernel.panic=10
+kernel.panic_on_oops=1
+kernel.keys.root_maxkeys=1000000
+kernel.keys.root_maxbytes=25000000
+```
+
+Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.
+
+### Configure `etcd` user and group
+A user account and group for the **etcd** service must be set up prior to installing RKE. The **uid** and **gid** of the **etcd** user are used in the RKE **config.yml** to set the proper permissions for files and directories at installation time.
+
+#### create `etcd` user and group
+To create the **etcd** user and group, run the following console commands.
+
+```
+addgroup --gid 52034 etcd
+useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
+```
+
+Update the RKE **config.yml** with the **uid** and **gid** of the **etcd** user:
+
+``` yaml
+services:
+  etcd:
+    gid: 52034
+    uid: 52034
+```
+
+### Set `automountServiceAccountToken` to `false` for `default` service accounts
+Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.
+
+For each namespace, the **default** service account must include this value:
+
+```
+automountServiceAccountToken: false
+```
+
+Save the following YAML to a file called `account_update.yaml`:
+
+``` yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: default
+automountServiceAccountToken: false
+```
+
+Create a bash script file called `account_update.sh`. Be sure to `chmod +x account_update.sh` so the script has execute permissions.
+
+```
+#!/bin/bash -e
+
+for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
+  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
+done
+```
+
+Run the script to patch the `default` service account in every namespace with the contents of `account_update.yaml`.
+
+### Ensure that all Namespaces have Network Policies defined
+
+Running different applications on the same Kubernetes cluster creates a risk of one
+compromised application attacking a neighboring application. Network segmentation is
+important to ensure that containers can communicate only with those they are supposed
+to. A network policy is a specification of how selections of pods are allowed to
+communicate with each other and other network endpoints.
+
+Network Policies are namespace scoped.
When a network policy is introduced to a given
+namespace, all traffic not allowed by the policy is denied. However, if there are no network
+policies in a namespace, all traffic will be allowed into and out of the pods in that
+namespace.
+
+> todo: add information about network policies and provide default example here:
+
+
+
+### Reference Hardened RKE `config.yml` configuration
+
+``` yaml
+# If you intend to deploy Kubernetes in an air-gapped environment,
+# please consult the documentation on how to configure custom RKE images.
+kubernetes_version: "v1.15.9-rancher1-1"
+enable_network_policy: true
+default_pod_security_policy_template_id: "restricted"
+nodes:
+- address: "172.16.16.9"
+  port: ""
+  internal_address: ""
+  role:
+  - controlplane
+  - etcd
+  - worker
+  hostname_override: ""
+  user: "ubuntu"
+services:
+  etcd:
+    uid: 52034
+    gid: 52034
+  kube-api:
+    pod_security_policy: true
+    secrets_encryption_config:
+      enabled: true
+    audit_log:
+      enabled: true
+    admission_configuration:
+    event_rate_limit:
+      enabled: true
+  kube-controller:
+    extra_args:
+      feature-gates: "RotateKubeletServerCertificate=true"
+  scheduler:
+    image: ""
+    extra_args: {}
+    extra_binds: []
+    extra_env: []
+  kubelet:
+    generate_serving_certificate: true
+    extra_args:
+      feature-gates: "RotateKubeletServerCertificate=true"
+      protect-kernel-defaults: "true"
+      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
+    extra_binds: []
+    extra_env: []
+    cluster_domain: ""
+    infra_container_image: ""
+    cluster_dns_server: ""
+    fail_swap_on: false
+  kubeproxy:
+    image: ""
+    extra_args: {}
+    extra_binds: []
+    extra_env: []
+network:
+  plugin: ""
+  options: {}
+  mtu: 0
+  node_selector: {}
+authentication: + strategy: "" + sans: [] + webhook: null +addons: | + --- + apiVersion: v1 + kind: Namespace + metadata: + name: ingress-nginx + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: Role + metadata: + name: default-psp-role + namespace: ingress-nginx + rules: + - apiGroups: + - extensions + resourceNames: + - default-psp + resources: + - podsecuritypolicies + verbs: + - use + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: RoleBinding + metadata: + name: default-psp-rolebinding + namespace: ingress-nginx + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: default-psp-role + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:serviceaccounts + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:authenticated + --- + apiVersion: v1 + kind: Namespace + metadata: + name: cattle-system + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: Role + metadata: + name: default-psp-role + namespace: cattle-system + rules: + - apiGroups: + - extensions + resourceNames: + - default-psp + resources: + - podsecuritypolicies + verbs: + - use + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: RoleBinding + metadata: + name: default-psp-rolebinding + namespace: cattle-system + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: default-psp-role + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:serviceaccounts + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:authenticated + --- + apiVersion: extensions/v1beta1 + kind: PodSecurityPolicy + metadata: + name: restricted + spec: + requiredDropCapabilities: + - NET_RAW + privileged: false + allowPrivilegeEscalation: false + defaultAllowPrivilegeEscalation: false + fsGroup: + rule: RunAsAny + runAsUser: + rule: MustRunAsNonRoot + seLinux: + rule: RunAsAny + supplementalGroups: + rule: RunAsAny + volumes: + - emptyDir + - secret + - persistentVolumeClaim + - downwardAPI + - 
configMap + - projected + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: psp:restricted + rules: + - apiGroups: + - extensions + resourceNames: + - restricted + resources: + - podsecuritypolicies + verbs: + - use + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: psp:restricted + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: psp:restricted + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:serviceaccounts + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:authenticated + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: tiller + namespace: kube-system + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: tiller + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: cluster-admin + subjects: + - kind: ServiceAccount + name: tiller + namespace: kube-system + +addons_include: [] +system_images: + etcd: "" + alpine: "" + nginx_proxy: "" + cert_downloader: "" + kubernetes_services_sidecar: "" + kubedns: "" + dnsmasq: "" + kubedns_sidecar: "" + kubedns_autoscaler: "" + coredns: "" + coredns_autoscaler: "" + kubernetes: "" + flannel: "" + flannel_cni: "" + calico_node: "" + calico_cni: "" + calico_controllers: "" + calico_ctl: "" + calico_flexvol: "" + canal_node: "" + canal_cni: "" + canal_flannel: "" + canal_flexvol: "" + weave_node: "" + weave_cni: "" + pod_infra_container: "" + ingress: "" + ingress_backend: "" + metrics_server: "" + windows_pod_infra_container: "" +ssh_key_path: "" +ssh_cert_path: "" +ssh_agent_auth: false +authorization: + mode: "" + options: {} +ignore_docker_version: false +private_registries: [] +ingress: + provider: "" + options: {} + node_selector: {} + extra_args: {} + dns_policy: "" + extra_envs: [] + extra_volumes: [] + extra_volume_mounts: [] +cluster_name: "" +prefix_path: "" +addon_job_timeout: 0 
+bastion_host: + address: "" + port: "" + user: "" + ssh_key: "" + ssh_key_path: "" + ssh_cert: "" + ssh_cert_path: "" +monitoring: + provider: "" + options: {} + node_selector: {} +restore: + restore: false + snapshot_name: "" +dns: null +``` + +### Reference Hardened RKE Template configuration + +``` yaml +todo: + +``` + + +### Hardened Reference Ubuntu **cloud-config**: + +``` yaml +#cloud-config +packages: + - curl + - jq +runcmd: + - sysctl -w vm.overcommit_memory=1 + - sysctl -w kernel.panic=10 + - sysctl -w kernel.panic_on_oops=1 + - curl https://releases.rancher.com/install-docker/18.09.sh | sh + - usermod -aG docker ubuntu + - return=1; while [ $return != 0 ]; do sleep 2; docker ps; return=$?; done + - addgroup --gid 52034 etcd + - useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd +write_files: + - path: /etc/sysctl.d/kubelet.conf + owner: root:root + permissions: "0644" + content: | + vm.overcommit_memory=1 + kernel.panic=10 + kernel.panic_on_oops=1 +```
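+
+The *Ensure that all Namespaces have Network Policies defined* section above currently carries a todo for a default example. As an illustrative sketch only (the policy name and target namespace below are placeholders, not a Rancher-supplied default), a minimal default-deny policy for a namespace looks like this:
+
+``` yaml
+# Illustrative default-deny policy: the empty podSelector matches every pod
+# in the namespace, and declaring both policyTypes with no allow rules blocks
+# all ingress and egress traffic until more specific policies are added.
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-all
+  namespace: example-namespace
+spec:
+  podSelector: {}
+  policyTypes:
+  - Ingress
+  - Egress
+```
+
+A policy like this would be applied per namespace (for example with `kubectl apply -f` against a copy edited for each namespace), with allow policies layered on top for required traffic.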