From bc3925b4fd08b47d7dc77cfa0524e80779ea9a45 Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Wed, 8 Dec 2021 16:28:29 -0500 Subject: [PATCH 01/33] Added info on K3s bootstrap data, updated style of install notes --- content/k3s/latest/en/upgrades/_index.md | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/content/k3s/latest/en/upgrades/_index.md b/content/k3s/latest/en/upgrades/_index.md index 1b9c86805ad..2b93db7e474 100644 --- a/content/k3s/latest/en/upgrades/_index.md +++ b/content/k3s/latest/en/upgrades/_index.md @@ -9,6 +9,16 @@ This section describes how to upgrade your K3s cluster. [Automated upgrades]({{< baseurl >}}/k3s/latest/en/upgrades/automated/) describes how to perform Kubernetes-native automated upgrades using Rancher's [system-upgrade-controller](https://github.com/rancher/system-upgrade-controller). -> If Traefik is not disabled K3s versions 1.20 and earlier will have installed Traefik v1, while K3s versions 1.21 and later will install Traefik v2 if v1 is not already present. To upgrade Traefik, please refer to the [Traefik documentation](https://doc.traefik.io/traefik/migration/v1-to-v2/) and use the [migration tool](https://github.com/traefik/traefik-migration-tool) to migrate from the older Traefik v1 to Traefik v2. - -> The experimental embedded Dqlite data store was deprecated in K3s v1.19.1. Please note that upgrades from experimental Dqlite to experimental embedded etcd are not supported. If you attempt an upgrade it will not succeed and data will be lost. +> **Important notes on upgrades:** +> +> - **Traefik:** If Traefik is not disabled, K3s versions 1.20 and earlier will install Traefik v1, while K3s versions 1.21 and later will install Traefik v2, if v1 is not already present. To upgrade from the older Traefik v1 to Traefik v2, please refer to the [Traefik documentation](https://doc.traefik.io/traefik/migration/v1-to-v2/) and use the [migration tool](https://github.com/traefik/traefik-migration-tool). +> +> - **Experimental Dqlite:** The experimental embedded Dqlite data store was deprecated in K3s v1.19.1. Please note that upgrades from experimental Dqlite to experimental embedded etcd are not supported. If you attempt an upgrade, it will not succeed, and data will be lost. +> +> - **K3s bootstrap data:** If you are using K3s in an HA configuration with an external SQL datastore, and your server (control-plane) nodes were not started with the `--token` CLI flag, you will no longer be able to add additional K3s servers to the cluster without specifying the token. Ensure that you retain a copy of this token, as it is required when restoring from backup. Previously, K3s did not enforce the use of a token when using external SQL datastores. +> - The affected versions are <= v1.19.12+k3s1, v1.20.8+k3s1, v1.21.2+k3s1; the patched versions are v1.19.13+k3s1, v1.20.9+k3s1, v1.21.3+k3s1. 
+> +> - You may retrieve the token value from any server already joined to the cluster as follows: +``` +cat /var/lib/rancher/k3s/server/token +``` From 66ba31ade547ae38d2a5813b4820d294fb10add3 Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Thu, 9 Dec 2021 12:40:37 -0500 Subject: [PATCH 02/33] Revised per feedback --- content/k3s/latest/en/upgrades/_index.md | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/content/k3s/latest/en/upgrades/_index.md b/content/k3s/latest/en/upgrades/_index.md index 2b93db7e474..fad09759854 100644 --- a/content/k3s/latest/en/upgrades/_index.md +++ b/content/k3s/latest/en/upgrades/_index.md @@ -3,22 +3,22 @@ title: "Upgrades" weight: 25 --- -This section describes how to upgrade your K3s cluster. +### Upgrading your K3s cluster [Upgrade basics]({{< baseurl >}}/k3s/latest/en/upgrades/basic/) describes several techniques for upgrading your cluster manually. It can also be used as a basis for upgrading through third-party Infrastructure-as-Code tools like [Terraform](https://www.terraform.io/). [Automated upgrades]({{< baseurl >}}/k3s/latest/en/upgrades/automated/) describes how to perform Kubernetes-native automated upgrades using Rancher's [system-upgrade-controller](https://github.com/rancher/system-upgrade-controller). -> **Important notes on upgrades:** -> -> - **Traefik:** If Traefik is not disabled, K3s versions 1.20 and earlier will install Traefik v1, while K3s versions 1.21 and later will install Traefik v2, if v1 is not already present. To upgrade from the older Traefik v1 to Traefik v2, please refer to the [Traefik documentation](https://doc.traefik.io/traefik/migration/v1-to-v2/) and use the [migration tool](https://github.com/traefik/traefik-migration-tool). -> -> - **Experimental Dqlite:** The experimental embedded Dqlite data store was deprecated in K3s v1.19.1. Please note that upgrades from experimental Dqlite to experimental embedded etcd are not supported. If you attempt an upgrade, it will not succeed, and data will be lost. -> -> - **K3s bootstrap data:** If you are using K3s in an HA configuration with an external SQL datastore, and your server (control-plane) nodes were not started with the `--token` CLI flag, you will no longer be able to add additional K3s servers to the cluster without specifying the token. Ensure that you retain a copy of this token, as it is required when restoring from backup. Previously, K3s did not enforce the use of a token when using external SQL datastores. -> - The affected versions are <= v1.19.12+k3s1, v1.20.8+k3s1, v1.21.2+k3s1; the patched versions are v1.19.13+k3s1, v1.20.9+k3s1, v1.21.3+k3s1. -> -> - You may retrieve the token value from any server already joined to the cluster as follows: +### Version-specific caveats + +- **Traefik:** If Traefik is not disabled, K3s versions 1.20 and earlier will install Traefik v1, while K3s versions 1.21 and later will install Traefik v2, if v1 is not already present. To upgrade from the older Traefik v1 to Traefik v2, please refer to the [Traefik documentation](https://doc.traefik.io/traefik/migration/v1-to-v2/) and use the [migration tool](https://github.com/traefik/traefik-migration-tool). + +- **K3s bootstrap data:** If you are using K3s in an HA configuration with an external SQL datastore, and your server (control-plane) nodes were not started with the `--token` CLI flag, you will no longer be able to add additional K3s servers to the cluster without specifying the token. 
Ensure that you retain a copy of this token, as it is required when restoring from backup. Previously, K3s did not enforce the use of a token when using external SQL datastores. + - The affected versions are <= v1.19.12+k3s1, v1.20.8+k3s1, v1.21.2+k3s1; the patched versions are v1.19.13+k3s1, v1.20.9+k3s1, v1.21.3+k3s1. + + - You may retrieve the token value from any server already joined to the cluster as follows: ``` cat /var/lib/rancher/k3s/server/token ``` + +- **Experimental Dqlite:** The experimental embedded Dqlite data store was deprecated in K3s v1.19.1. Please note that upgrades from experimental Dqlite to experimental embedded etcd are not supported. If you attempt an upgrade, it will not succeed, and data will be lost. From eadfd19b5e98870b6c628c8fd884ebe3406e42c7 Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Fri, 17 Dec 2021 13:11:20 -0500 Subject: [PATCH 03/33] Added section for adding private CA in 2.5 --- content/rancher/v2.5/en/helm-charts/_index.md | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/content/rancher/v2.5/en/helm-charts/_index.md b/content/rancher/v2.5/en/helm-charts/_index.md index e5e6ba7853e..b24451e3ff5 100644 --- a/content/rancher/v2.5/en/helm-charts/_index.md +++ b/content/rancher/v2.5/en/helm-charts/_index.md @@ -50,6 +50,20 @@ From the left sidebar select _"Repositories"_. These items represent helm repositories, and can be either traditional helm endpoints which have an index.yaml, or git repositories which will be cloned and can point to a specific branch. In order to use custom charts, simply add your repository here and they will become available in the Charts tab under the name of the repository. +To add a private CA for Helm Chart repositories: + +- **HTTP-based chart repositories**: You must add a base64 encoded copy of the CA certificate in DER format to the spec.caBundle field of the chart repo, such as `openssl x509 -outform der -in ca.pem | base64 -w0`. Click `Edit YAML` for the chart repo and set, as in the following example:
+ ``` + [...] + spec: + caBundle: + MIIFXzCCA0egAwIBAgIUWNy8WrvSkgNzV0zdWRP79j9cVcEwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMRQwEgYDVQQKDAtNeU9yZywgSW5jLjENMAsGA1UEAwwEcm9vdDAeFw0yMTEyMTQwODMyMTdaFw0yNDEwMDMwODMyMT + ... + nDxZ/tNXt/WPJr/PgEB3hQdInDWYMg7vGO0Oz00G5kWg0sJ0ZTSoA10ZwdjIdGEeKlj1NlPyAqpQ+uDnmx6DW+zqfYtLnc/g6GuLLVPamraqN+gyU8CHwAWPNjZonFN9Vpg0PIk1I2zuOc4EHifoTAXSpnjfzfyAxCaZsnTptimlPFJJqAMj+FfDArGmr4= + [...] + ``` + +- **Git-based chart repositories**: It is not currently possible to add a private CA. For git-based chart repositories with a certificate signed by a private CA, you must disable TLS verification. Click `Edit YAML` for the chart repo, and add the key/value pair: `spec.insecureSkipTLSVerify: true`. ### Helm Compatibility From bbafbcf4151cee3ccb9461d7f78503f9888c481b Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Fri, 17 Dec 2021 13:11:35 -0500 Subject: [PATCH 04/33] Added section for adding private CA in 2.6 --- content/rancher/v2.6/en/helm-charts/_index.md | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/content/rancher/v2.6/en/helm-charts/_index.md b/content/rancher/v2.6/en/helm-charts/_index.md index f61e0c2ecfe..490d99c340c 100644 --- a/content/rancher/v2.6/en/helm-charts/_index.md +++ b/content/rancher/v2.6/en/helm-charts/_index.md @@ -25,6 +25,20 @@ From the left sidebar select _"Repositories"_. These items represent helm repositories, and can be either traditional helm endpoints which have an index.yaml, or git repositories which will be cloned and can point to a specific branch. In order to use custom charts, simply add your repository here and they will become available in the Charts tab under the name of the repository. +To add a private CA for Helm Chart repositories: + +- **HTTP-based chart repositories**: You must add a base64 encoded copy of the CA certificate in DER format to the spec.caBundle field of the chart repo, such as `openssl x509 -outform der -in ca.pem | base64 -w0`. Click `Edit YAML` for the chart repo and set, as in the following example:
+ ``` + [...] + spec: + caBundle: + MIIFXzCCA0egAwIBAgIUWNy8WrvSkgNzV0zdWRP79j9cVcEwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMRQwEgYDVQQKDAtNeU9yZywgSW5jLjENMAsGA1UEAwwEcm9vdDAeFw0yMTEyMTQwODMyMTdaFw0yNDEwMDMwODMyMT + ... + nDxZ/tNXt/WPJr/PgEB3hQdInDWYMg7vGO0Oz00G5kWg0sJ0ZTSoA10ZwdjIdGEeKlj1NlPyAqpQ+uDnmx6DW+zqfYtLnc/g6GuLLVPamraqN+gyU8CHwAWPNjZonFN9Vpg0PIk1I2zuOc4EHifoTAXSpnjfzfyAxCaZsnTptimlPFJJqAMj+FfDArGmr4= + [...] + ``` + +- **Git-based chart repositories**: It is not currently possible to add a private CA. For git-based chart repositories with a certificate signed by a private CA, you must disable TLS verification. Click `Edit YAML` for the chart repo, and add the key/value pair: `spec.insecureSkipTLSVerify: true`. ### Helm Compatibility From 197aa68f49672cae88023dfde8327be9b3afca6b Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Fri, 17 Dec 2021 15:07:34 -0500 Subject: [PATCH 05/33] Updated per feedback in 2.6 --- content/rancher/v2.6/en/helm-charts/_index.md | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/content/rancher/v2.6/en/helm-charts/_index.md b/content/rancher/v2.6/en/helm-charts/_index.md index 490d99c340c..c7e1f5b1ae8 100644 --- a/content/rancher/v2.6/en/helm-charts/_index.md +++ b/content/rancher/v2.6/en/helm-charts/_index.md @@ -27,18 +27,24 @@ These items represent helm repositories, and can be either traditional helm endp To add a private CA for Helm Chart repositories: -- **HTTP-based chart repositories**: You must add a base64 encoded copy of the CA certificate in DER format to the spec.caBundle field of the chart repo, such as `openssl x509 -outform der -in ca.pem | base64 -w0`. Click `Edit YAML` for the chart repo and set, as in the following example:
+- **HTTP-based chart repositories**: You must add a base64-encoded copy of the CA certificate in DER format to the `spec.caBundle` field of the chart repo. You can generate this value with `openssl x509 -outform der -in ca.pem | base64 -w0`. Click **Edit YAML** for the chart repo and set the field as in the following example:
``` [...] spec: - caBundle: + caBundle: MIIFXzCCA0egAwIBAgIUWNy8WrvSkgNzV0zdWRP79j9cVcEwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMRQwEgYDVQQKDAtNeU9yZywgSW5jLjENMAsGA1UEAwwEcm9vdDAeFw0yMTEyMTQwODMyMTdaFw0yNDEwMDMwODMyMT ... nDxZ/tNXt/WPJr/PgEB3hQdInDWYMg7vGO0Oz00G5kWg0sJ0ZTSoA10ZwdjIdGEeKlj1NlPyAqpQ+uDnmx6DW+zqfYtLnc/g6GuLLVPamraqN+gyU8CHwAWPNjZonFN9Vpg0PIk1I2zuOc4EHifoTAXSpnjfzfyAxCaZsnTptimlPFJJqAMj+FfDArGmr4= [...] ``` -- **Git-based chart repositories**: It is not currently possible to add a private CA. For git-based chart repositories with a certificate signed by a private CA, you must disable TLS verification. Click `Edit YAML` for the chart repo, and add the key/value pair: `spec.insecureSkipTLSVerify: true`. +- **Git-based chart repositories**: It is not currently possible to add a private CA. For git-based chart repositories with a certificate signed by a private CA, you must disable TLS verification. Click **Edit YAML** for the chart repo, and add the key/value pair as follows: + ``` + [...] + spec: + insecureSkipTLSVerify: true + [...] + ``` ### Helm Compatibility From 847f9962516fd5524240345d46a93dadb78ba06a Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Fri, 17 Dec 2021 15:07:50 -0500 Subject: [PATCH 06/33] Updated per feedback in 2.5 --- content/rancher/v2.5/en/helm-charts/_index.md | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/content/rancher/v2.5/en/helm-charts/_index.md b/content/rancher/v2.5/en/helm-charts/_index.md index b24451e3ff5..256d0e29fef 100644 --- a/content/rancher/v2.5/en/helm-charts/_index.md +++ b/content/rancher/v2.5/en/helm-charts/_index.md @@ -52,18 +52,24 @@ These items represent helm repositories, and can be either traditional helm endp To add a private CA for Helm Chart repositories: -- **HTTP-based chart repositories**: You must add a base64 encoded copy of the CA certificate in DER format to the spec.caBundle field of the chart repo, such as `openssl x509 -outform der -in ca.pem | base64 -w0`. Click `Edit YAML` for the chart repo and set, as in the following example:
+- **HTTP-based chart repositories**: You must add a base64-encoded copy of the CA certificate in DER format to the `spec.caBundle` field of the chart repo. You can generate this value with `openssl x509 -outform der -in ca.pem | base64 -w0`. Click **Edit YAML** for the chart repo and set the field as in the following example:
``` [...] spec: - caBundle: + caBundle: MIIFXzCCA0egAwIBAgIUWNy8WrvSkgNzV0zdWRP79j9cVcEwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMRQwEgYDVQQKDAtNeU9yZywgSW5jLjENMAsGA1UEAwwEcm9vdDAeFw0yMTEyMTQwODMyMTdaFw0yNDEwMDMwODMyMT ... nDxZ/tNXt/WPJr/PgEB3hQdInDWYMg7vGO0Oz00G5kWg0sJ0ZTSoA10ZwdjIdGEeKlj1NlPyAqpQ+uDnmx6DW+zqfYtLnc/g6GuLLVPamraqN+gyU8CHwAWPNjZonFN9Vpg0PIk1I2zuOc4EHifoTAXSpnjfzfyAxCaZsnTptimlPFJJqAMj+FfDArGmr4= [...] ``` -- **Git-based chart repositories**: It is not currently possible to add a private CA. For git-based chart repositories with a certificate signed by a private CA, you must disable TLS verification. Click `Edit YAML` for the chart repo, and add the key/value pair: `spec.insecureSkipTLSVerify: true`. +- **Git-based chart repositories**: It is not currently possible to add a private CA. For git-based chart repositories with a certificate signed by a private CA, you must disable TLS verification. Click **Edit YAML** for the chart repo, and add the key/value pair as follows: + ``` + [...] + spec: + insecureSkipTLSVerify: true + [...] + ``` ### Helm Compatibility From d408f60c5d26dcf1256bb98c50eb3d1bc90688d0 Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Thu, 23 Dec 2021 18:50:25 +0000 Subject: [PATCH 07/33] Added new section for charts versioning scheme in 2.6 --- content/rancher/v2.6/en/helm-charts/_index.md | 50 +++++++++++++++++++ 1 file changed, 50 insertions(+) diff --git a/content/rancher/v2.6/en/helm-charts/_index.md b/content/rancher/v2.6/en/helm-charts/_index.md index 68e52a68d9a..a41c062340f 100644 --- a/content/rancher/v2.6/en/helm-charts/_index.md +++ b/content/rancher/v2.6/en/helm-charts/_index.md @@ -5,6 +5,56 @@ weight: 11 In this section, you'll learn how to manage Helm chart repositories and applications in Rancher. Helm chart repositories are managed using **Apps & Marketplace**. It uses a catalog-like system to import bundles of charts from repositories and then uses those charts to either deploy custom Helm applications or Rancher's tools such as Monitoring or Istio. Rancher tools come as pre-loaded repositories which deploy as standalone Helm charts. Any additional repositories are only added to the current cluster. +### Changes in Rancher v2.6 + +Starting in Rancher v2.6.0, a new versioning scheme for Rancher feature charts was implemented for Monitoring. The changes are centered around the major version of the charts and the +up annotation for upstream charts, where applicable. + +**Major Version:** The major version of the charts is tied to Rancher minor versions. When you upgrade to a new Rancher minor version, you should ensure that all of your **Apps & Marketplace** charts are also upgraded to the correct release line for the chart. + +>**Note:** Any major versions that are less than the ones mentioned in the table below are meant for 2.5 and below only. For example, you are advised to not use <100.x.x versions of Monitoring in 2.6.x+. 
+ +**Feature Charts for Monitoring V2:** + +| **Name** | **Starting Major Version** | +| ---------------- | ------------------ | +| rancher-vsphere-csi | 100.0.0 | +| rancher-vsphere-cpi | 100.0.0 | +| Logging V2 | 100.0.0+up3.12.0 | +| Monitoring V2 | 100.0.0+up16.6.0 | +| gke-operator | 100.0.0+up1.1.1 | +| aks-operator | 100.0.0+up1.0.1 | +| eks-operator | 100.0.0+up1.1.1 | +| rancher-backup | 2.0.0 | +| rancher-alerting-drivers | 100.0.0 | +| rancher-prom2teams | 100.0.0+up0.2.0 | +| rancher-sachet | 100.0.0 | +| rancher-cis-benchmark | 2.0.0 | +| fleet | 100.0.0+up0.3.6 | +| longhorn | 100.0.0+up1.1.2 | +| rancher-gatekeeper | 100.0.0+up3.5.1 | +| rancher-istio | 100.0.0+up1.10.4 | +| rancher-kiali-server | 100.0.0+up1.35.0 | +| rancher-tracing | 100.0.0 | +| rancher-pushprox | 100.0.0 | +| rancher-prometheus-adapter | 100.0.0+up2.14.0 | +| rancher-grafana | 100.0.0+up6.11.0 | +| rancher-windows-exporter | 100.0.0 | +| rancher-node-exporter | 100.0.0+up1.18.1 | +| rancher-kube-state-metrics | 100.0.0+up3.2.0 | +| rancher-wins-upgrader | 100.0.0+up0.0.1 | +| system-upgrade-controller | 100.0.0+up0.3.0 | +| rancher-webhook | 1.0.0+up0.2.0 | +| external-ip-webhook | 100.0.0+up1.0.0 | +| rancher-sriov | 100.0.0+up0.1.0 | + +
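+To confirm which chart versions a cluster is currently running before an upgrade, you can list the deployed releases and compare their versions against the table above. A quick sketch, assuming the Helm v3 CLI is configured against the cluster:
+
+```
+# List every release in every namespace, including its chart version
+helm list --all-namespaces
+```
+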
+**Charts based on upstream:** For charts that are based on upstreams, the +up annotation should inform you of what upstream version the Rancher chart is tracking. Check the upstream version compatibility with Rancher during upgrades also. + +- As an example, `100.x.x+up16.6.0` for Monitoring tracks upstream kube-prometheus-stack `16.6.0` with some Rancher patches added to it + +- On upgrades, ensure that you are not downgrading the version of the chart that you are using. For example, if you are using a version of Monitoring > `16.6.0` in Rancher 2.5, you should not upgrade to `100.x.x+up16.6.0`. Instead, you should upgrade to the appropriate version in the next release. + + ### Charts From the top-left menu select _"Apps & Marketplace"_ and you will be taken to the Charts page. From 206d5e977f43c481018faf3100eb244cc9ac7cb2 Mon Sep 17 00:00:00 2001 From: Guilherme Macedo Date: Thu, 30 Dec 2021 09:57:29 +0100 Subject: [PATCH 08/33] Restructure v2.6 security docs Signed-off-by: Guilherme Macedo --- content/rancher/v2.6/en/security/_index.md | 2 +- .../v2.6/en/security/best-practices/_index.md | 32 +- .../v2.6/en/security/cis-scans/_index.md | 289 ++ .../cis-scans/configuration/_index.md | 98 + .../cis-scans/custom-benchmark/_index.md | 84 + .../v2.6/en/security/cis-scans/rbac/_index.md | 50 + .../cis-scans/skipped-tests/_index.md | 54 + .../rancher/v2.6/en/security/cve/_index.md | 6 +- .../en/security/hardening-guides/_index.md | 7 + .../rancher-2.5/1.5-benchmark-2.5/_index.md | 2265 ----------- .../rancher-2.5/1.5-hardening-2.5/_index.md | 720 ---- .../rancher-2.5/1.6-benchmark-2.5/_index.md | 3317 ----------------- .../rancher-2.5/1.6-hardening-2.5/_index.md | 570 --- .../v2.6/en/security/rancher-2.5/_index.md | 55 - .../v2.6/en/security/security-scan/_index.md | 6 - 15 files changed, 616 insertions(+), 6939 deletions(-) create mode 100644 content/rancher/v2.6/en/security/cis-scans/_index.md create mode 100644 content/rancher/v2.6/en/security/cis-scans/configuration/_index.md create mode 100644 content/rancher/v2.6/en/security/cis-scans/custom-benchmark/_index.md create mode 100644 content/rancher/v2.6/en/security/cis-scans/rbac/_index.md create mode 100644 content/rancher/v2.6/en/security/cis-scans/skipped-tests/_index.md create mode 100644 content/rancher/v2.6/en/security/hardening-guides/_index.md delete mode 100644 content/rancher/v2.6/en/security/rancher-2.5/1.5-benchmark-2.5/_index.md delete mode 100644 content/rancher/v2.6/en/security/rancher-2.5/1.5-hardening-2.5/_index.md delete mode 100644 content/rancher/v2.6/en/security/rancher-2.5/1.6-benchmark-2.5/_index.md delete mode 100644 content/rancher/v2.6/en/security/rancher-2.5/1.6-hardening-2.5/_index.md delete mode 100644 content/rancher/v2.6/en/security/rancher-2.5/_index.md delete mode 100644 content/rancher/v2.6/en/security/security-scan/_index.md diff --git a/content/rancher/v2.6/en/security/_index.md b/content/rancher/v2.6/en/security/_index.md index 871203d2102..5780e27a265 100644 --- a/content/rancher/v2.6/en/security/_index.md +++ b/content/rancher/v2.6/en/security/_index.md @@ -11,7 +11,7 @@ weight: 20

Reporting process

-

Please submit possible security issues by emailing security@rancher.com

+

Please submit possible security issues by emailing security@rancher.com .

Announcements

diff --git a/content/rancher/v2.6/en/security/best-practices/_index.md b/content/rancher/v2.6/en/security/best-practices/_index.md index 1b207551e35..bb31bfca47c 100644 --- a/content/rancher/v2.6/en/security/best-practices/_index.md +++ b/content/rancher/v2.6/en/security/best-practices/_index.md @@ -1,8 +1,36 @@ --- -title: Kubernetes Security Best Practices +title: Security Best Practices weight: 5 --- -# Restricting cloud metadata API access +### Restricting cloud metadata API access Cloud providers such as AWS, Azure, or GCP often expose metadata services locally to instances. By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets. + +Below is an example of a global network policy (`GlobalNetworkPolicy`) using Calico to restrict access to cloud metadata API. + +``` +apiVersion: projectcalico.org/v3 +kind: GlobalNetworkPolicy +metadata: + name: allow-all-egress-except-metadata-api +spec: + selector: all() + egress: + - action: Deny + protocol: TCP + destination: + nets: + - 169.254.169.254/32 + - action: Allow + destination: + nets: + - 0.0.0.0/0 +``` +A similar policy can be created with Canal and Weave. For the list of approved CNI (Container Network Interface) providers in Rancher, please consult this [list]({{}}/rancher/v2.6/en/faq/networking/cni-providers). + +Removing unnecessary packages and using reduced base images is also a good defense in depth strategy to reduce the attack surface against possible abuses of the cloud metadata API. As well as following the principle of least privilege and using instances profiles and roles with the least amount of required permissions for the services to fully function. + +It is advised to consult your cloud provider's security best practices for further recommendations and specific details on how to restrict access to cloud instance metadata API. + +Further references: MITRE ATT&CK knowledge base on - [Unsecured Credentials: Cloud Instance Metadata API](https://attack.mitre.org/techniques/T1552/005/). diff --git a/content/rancher/v2.6/en/security/cis-scans/_index.md b/content/rancher/v2.6/en/security/cis-scans/_index.md new file mode 100644 index 00000000000..17aa5a5a3b1 --- /dev/null +++ b/content/rancher/v2.6/en/security/cis-scans/_index.md @@ -0,0 +1,289 @@ +--- +title: CIS Scans +weight: 17 +--- + +Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. The CIS scans can run on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE. + +The `rancher-cis-benchmark` app leverages kube-bench, an open-source tool from Aqua Security, to check clusters for CIS Kubernetes Benchmark compliance. Also, to generate a cluster-wide report, the application utilizes Sonobuoy for report aggregation. 
+ +- [About the CIS Benchmark](#about-the-cis-benchmark) +- [About the Generated Report](#about-the-generated-report) +- [Test Profiles](#test-profiles) +- [About Skipped and Not Applicable Tests](#about-skipped-and-not-applicable-tests) +- [Roles-based Access Control](./rbac) +- [Configuration](./configuration) +- [How-to Guides](#how-to-guides) + - [Installing CIS Benchmark](#installing-cis-benchmark) + - [Uninstalling CIS Benchmark](#uninstalling-cis-benchmark) + - [Running a Scan](#running-a-scan) + - [Running a Scan Periodically on a Schedule](#running-a-scan-periodically-on-a-schedule) + - [Skipping Tests](#skipping-tests) + - [Viewing Reports](#viewing-reports) + - [Enabling Alerting for rancher-cis-benchmark](#enabling-alerting-for-rancher-cis-benchmark) + - [Configuring Alerts for a Periodic Scan on a Schedule](#configuring-alerts-for-a-periodic-scan-on-a-schedule) + - [Creating a Custom Benchmark Version for Running a Cluster Scan](#creating-a-custom-benchmark-version-for-running-a-cluster-scan) + + +# About the CIS Benchmark + +The Center for Internet Security is a 501(c\)(3) non-profit organization, formed in October 2000, with a mission to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace". The organization is headquartered in East Greenbush, New York, with members including large corporations, government agencies, and academic institutions. + +CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team. + +The official Benchmark documents are available through the CIS website. The sign-up form to access the documents is +here. + +# About the Generated Report + +Each scan generates a report can be viewed in the Rancher UI and can be downloaded in CSV format. + +By default, the CIS Benchmark v1.6 is used. + +The Benchmark version is included in the generated report. + +The Benchmark provides recommendations of two types: Automated and Manual. Recommendations marked as Manual in the Benchmark are not included in the generated report. + +Some tests are designated as "Not Applicable." These tests will not be run on any CIS scan because of the way that Rancher provisions RKE clusters. For information on how test results can be audited, and why some tests are designated to be not applicable, refer to Rancher's self-assessment guide for the corresponding Kubernetes version. + +The report contains the following information: + +| Column in Report | Description | +|------------------|-------------| +| `id` | The ID number of the CIS Benchmark. | +| `description` | The description of the CIS Benchmark test. | +| `remediation` | What needs to be fixed in order to pass the test. | +| `state` | Indicates if the test passed, failed, was skipped, or was not applicable. | +| `node_type` | The node role, which affects which tests are run on the node. Master tests are run on controlplane nodes, etcd tests are run on etcd nodes, and node tests are run on the worker nodes. | +| `audit` | This is the audit check that `kube-bench` runs for this test. | +| `audit_config` | Any configuration applicable to the audit script. | +| `test_info` | Test-related info as reported by `kube-bench`, if any. | +| `commands` | Test-related commands as reported by `kube-bench`, if any. 
| +| `config_commands` | Test-related configuration data as reported by `kube-bench`, if any. | +| `actual_value` | The test's actual value, present if reported by `kube-bench`. | +| `expected_result` | The test's expected result, present if reported by `kube-bench`. | + +Refer to the table in the cluster hardening guide for information on which versions of Kubernetes, the Benchmark, Rancher, and our cluster hardening guide correspond to each other. Also refer to the hardening guide for configuration files of CIS-compliant clusters and information on remediating failed tests. + +# Test Profiles + +The following profiles are available: + +- Generic CIS 1.5 +- Generic CIS 1.6 +- RKE permissive 1.5 +- RKE hardened 1.5 +- RKE permissive 1.6 +- RKE hardened 1.6 +- EKS +- GKE +- RKE2 permissive 1.5 +- RKE2 permissive 1.5 + +You also have the ability to customize a profile by saving a set of tests to skip. + +All profiles will have a set of not applicable tests that will be skipped during the CIS scan. These tests are not applicable based on how a RKE cluster manages Kubernetes. + +There are two types of RKE cluster scan profiles: + +- **Permissive:** This profile has a set of tests that have been will be skipped as these tests will fail on a default RKE Kubernetes cluster. Besides the list of skipped tests, the profile will also not run the not applicable tests. +- **Hardened:** This profile will not skip any tests, except for the non-applicable tests. + +The EKS and GKE cluster scan profiles are based on CIS Benchmark versions that are specific to those types of clusters. + +In order to pass the "Hardened" profile, you will need to follow the steps on the hardening guide and use the `cluster.yml` defined in the hardening guide to provision a hardened cluster. + +The default profile and the supported CIS benchmark version depends on the type of cluster that will be scanned: + +The `rancher-cis-benchmark` supports the CIS 1.6 Benchmark version. + +- For RKE Kubernetes clusters, the RKE Permissive 1.6 profile is the default. +- EKS and GKE have their own CIS Benchmarks published by `kube-bench`. The corresponding test profiles are used by default for those clusters. +- For RKE2 Kubernetes clusters, the RKE2 Permissive 1.5 profile is the default. +- For cluster types other than RKE, RKE2, EKS and GKE, the Generic CIS 1.5 profile will be used by default. + +# About Skipped and Not Applicable Tests + +For a list of skipped and not applicable tests, refer to this page. + +For now, only user-defined skipped tests are marked as skipped in the generated report. + +Any skipped tests that are defined as being skipped by one of the default profiles are marked as not applicable. + +# Roles-based Access Control + +For information about permissions, refer to this page. + +# Configuration + +For more information about configuring the custom resources for the scans, profiles, and benchmark versions, refer to this page. 
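+
+As a quick example, the following ClusterScan runs a scan using the hardened RKE profile (this mirrors the example on the configuration page, which describes the full schema):
+
+```yaml
+apiVersion: cis.cattle.io/v1
+kind: ClusterScan
+metadata:
+  name: rke-cis
+spec:
+  scanProfileName: rke-profile-hardened
+```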
+ +# How-to Guides + +- [Installing rancher-cis-benchmark](#installing-rancher-cis-benchmark) +- [Uninstalling rancher-cis-benchmark](#uninstalling-rancher-cis-benchmark) +- [Running a Scan](#running-a-scan) +- [Running a Scan Periodically on a Schedule](#running-a-scan-periodically-on-a-schedule) +- [Skipping Tests](#skipping-tests) +- [Viewing Reports](#viewing-reports) +- [Enabling Alerting for rancher-cis-benchmark](#enabling-alerting-for-rancher-cis-benchmark) +- [Configuring Alerts for a Periodic Scan on a Schedule](#configuring-alerts-for-a-periodic-scan-on-a-schedule) +- [Creating a Custom Benchmark Version for Running a Cluster Scan](#creating-a-custom-benchmark-version-for-running-a-cluster-scan) +### Installing CIS Benchmark + +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to install CIS Benchmark and click **Explore**. +1. In the left navigation bar, click **Apps & Marketplace > Charts**. +1. Click **CIS Benchmark** +1. Click **Install**. + +**Result:** The CIS scan application is deployed on the Kubernetes cluster. + +### Uninstalling CIS Benchmark + +1. From the **Cluster Dashboard,** go to the left navigation bar and click **Apps & Marketplace > Installed Apps**. +1. Go to the `cis-operator-system` namespace and check the boxes next to `rancher-cis-benchmark-crd` and `rancher-cis-benchmark`. +1. Click **Delete** and confirm **Delete**. + +**Result:** The `rancher-cis-benchmark` application is uninstalled. + +### Running a Scan + +When a ClusterScan custom resource is created, it launches a new CIS scan on the cluster for the chosen ClusterScanProfile. + +Note: There is currently a limitation of running only one CIS scan at a time for a cluster. If you create multiple ClusterScan custom resources, they will be run one after the other by the operator, and until one scan finishes, the rest of the ClusterScan custom resources will be in the "Pending" state. + +To run a scan, + +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. +1. Click **CIS Benchmark > Scan**. +1. Click **Create**. +1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on. +1. Click **Create**. + +**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears. +### Running a Scan Periodically on a Schedule + +To run a ClusterScan on a schedule, + +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. +1. Click **CIS Benchmark > Scan**. +1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on. +1. Choose the option **Run scan on a schedule**. +1. Enter a valid cron schedule expression in the field **Schedule**. +1. Choose a **Retention** count, which indicates the number of reports maintained for this recurring scan. By default this count is 3. When this retention limit is reached, older reports will get purged. +1. 
Click **Create**. + +**Result:** The scan runs and reschedules to run according to the cron schedule provided. The **Next Scan** value indicates the next time this scan will run again. + +A report is generated with the scan results every time the scan runs. To see the latest results, click the name of the scan that appears. + +You can also see the previous reports by choosing the report from the **Reports** dropdown on the scan detail page. + +### Skipping Tests + +CIS scans can be run using test profiles with user-defined skips. + +To skip tests, you will create a custom CIS scan profile. A profile contains the configuration for the CIS scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark. + +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. +1. Click **CIS Benchmark > Profile**. +1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To make a new profile based on an existing profile, go to the existing profile and click **⋮ Clone**. If you are filling out the form, add the tests to skip using the test IDs, using the relevant CIS Benchmark as a reference. If you are creating the new test profile as YAML, you will add the IDs of the tests to skip in the `skipTests` directive. You will also give the profile a name: + + ```yaml + apiVersion: cis.cattle.io/v1 + kind: ClusterScanProfile + metadata: + annotations: + meta.helm.sh/release-name: clusterscan-operator + meta.helm.sh/release-namespace: cis-operator-system + labels: + app.kubernetes.io/managed-by: Helm + name: "" + spec: + benchmarkVersion: cis-1.5 + skipTests: + - "1.1.20" + - "1.1.21" + ``` +1. Click **Create**. + +**Result:** A new CIS scan profile is created. + +When you [run a scan](#running-a-scan) that uses this profile, the defined tests will be skipped during the scan. The skipped tests will be marked in the generated report as `Skip`. + +### Viewing Reports + +To view the generated CIS scan reports, + +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. +1. Click **CIS Benchmark > Scan**. +1. The **Scans** page will show the generated reports. To see a detailed report, go to a scan report and click the name. + +One can download the report from the Scans list or from the scan detail page. + +### Enabling Alerting for rancher-cis-benchmark + +Alerts can be configured to be sent out for a scan that runs on a schedule. + +> **Prerequisite:** +> +> Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration) +> +> While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration/receiver/#example-route-config-for-cis-scan-alerts) + +While installing or upgrading the `rancher-cis-benchmark` Helm chart, set the following flag to `true` in the `values.yaml`: + +```yaml +alerts: + enabled: true +``` + +### Configuring Alerts for a Periodic Scan on a Schedule + +It is possible to run a ClusterScan on a schedule. 
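+
+For reference, a scheduled scan with alerting enabled can also be expressed directly as a ClusterScan custom resource. The sketch below uses the field names from the cis-operator CRD; verify them against the CRD shipped with your installed chart version:
+
+```yaml
+apiVersion: cis.cattle.io/v1
+kind: ClusterScan
+metadata:
+  name: nightly-cis-scan
+spec:
+  scanProfileName: rke-profile-hardened
+  scheduledScanConfig:
+    # Run every day at midnight
+    cronSchedule: "0 0 * * *"
+    # Keep the three most recent reports
+    retentionCount: 3
+    scanAlertRule:
+      # The two alert types described below
+      alertOnComplete: true
+      alertOnFailure: true
+```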
+ +A scheduled scan can also specify if you should receive alerts when the scan completes. + +Alerts are supported only for a scan that runs on a schedule. + +The CIS Benchmark application supports two types of alerts: + +- Alert on scan completion: This alert is sent out when the scan run finishes. The alert includes details including the ClusterScan's name and the ClusterScanProfile name. +- Alert on scan failure: This alert is sent out if there are some test failures in the scan run or if the scan is in a `Fail` state. + +> **Prerequisite:** +> +> Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration) +> +> While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration/receiver/#example-route-config-for-cis-scan-alerts) + +To configure alerts for a scan that runs on a schedule, + +1. Please enable alerts on the `rancher-cis-benchmark` application (#enabling-alerting-for-rancher-cis-benchmark) +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. +1. Click **CIS Benchmark > Scan**. +1. Click **Create**. +1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on. +1. Choose the option **Run scan on a schedule**. +1. Enter a valid [cron schedule expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) in the field **Schedule**. +1. Check the boxes next to the Alert types under **Alerting**. +1. Optional: Choose a **Retention Count**, which indicates the number of reports maintained for this recurring scan. By default this count is 3. When this retention limit is reached, older reports will get purged. +1. Click **Create**. + +**Result:** The scan runs and reschedules to run according to the cron schedule provided. Alerts are sent out when the scan finishes if routes and receiver are configured under `rancher-monitoring` application. + +A report is generated with the scan results every time the scan runs. To see the latest results, click the name of the scan that appears. + +### Creating a Custom Benchmark Version for Running a Cluster Scan + +There could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them. + +It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application. 
+ +For details, see [this page.](./custom-benchmark) diff --git a/content/rancher/v2.6/en/security/cis-scans/configuration/_index.md b/content/rancher/v2.6/en/security/cis-scans/configuration/_index.md new file mode 100644 index 00000000000..26df1932e25 --- /dev/null +++ b/content/rancher/v2.6/en/security/cis-scans/configuration/_index.md @@ -0,0 +1,98 @@ +--- +title: Configuration +weight: 3 +--- + +This configuration reference is intended to help you manage the custom resources created by the `rancher-cis-benchmark` application. These resources are used for performing CIS scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization. + +To configure the custom resources, go to the **Cluster Dashboard** To configure the CIS scans, + +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to configure CIS scans and click **Explore**. +1. In the left navigation bar, click **CIS Benchmark**. + +### Scans + +A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed. + +When configuring a scan, you need to define the name of the scan profile that will be used with the `scanProfileName` directive. + +An example ClusterScan custom resource is below: + +```yaml +apiVersion: cis.cattle.io/v1 +kind: ClusterScan +metadata: + name: rke-cis +spec: + scanProfileName: rke-profile-hardened +``` + +### Profiles + +A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark. + +> By default, a few ClusterScanProfiles are installed as part of the `rancher-cis-benchmark` chart. If a user edits these default benchmarks or profiles, the next chart update will reset them back. So it is advisable for users to not edit the default ClusterScanProfiles. + +Users can clone the ClusterScanProfiles to create custom profiles. + +Skipped tests are listed under the `skipTests` directive. + +When you create a new profile, you will also need to give it a name. + +An example `ClusterScanProfile` is below: + +```yaml +apiVersion: cis.cattle.io/v1 +kind: ClusterScanProfile +metadata: + annotations: + meta.helm.sh/release-name: clusterscan-operator + meta.helm.sh/release-namespace: cis-operator-system + labels: + app.kubernetes.io/managed-by: Helm + name: "" +spec: + benchmarkVersion: cis-1.5 + skipTests: + - "1.1.20" + - "1.1.21" +``` + +### Benchmark Versions + +A benchmark version is the name of benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark. + +A `ClusterScanBenchmark` defines the CIS `BenchmarkVersion` name and test configurations. The `BenchmarkVersion` name is a parameter provided to the `kube-bench` tool. + +By default, a few `BenchmarkVersion` names and test configurations are packaged as part of the CIS scan application. When this feature is enabled, these default BenchmarkVersions will be automatically installed and available for users to create a ClusterScanProfile. + +> If the default BenchmarkVersions are edited, the next chart update will reset them back. Therefore we don't recommend editing the default ClusterScanBenchmarks. + +A ClusterScanBenchmark consists of the fields: + +- `ClusterProvider`: This is the cluster provider name for which this benchmark is applicable. For example: RKE, EKS, GKE, etc. Leave it empty if this benchmark can be run on any cluster type. 
+- `MinKubernetesVersion`: Specifies the cluster's minimum kubernetes version necessary to run this benchmark. Leave it empty if there is no dependency on a particular Kubernetes version. +- `MaxKubernetesVersion`: Specifies the cluster's maximum Kubernetes version necessary to run this benchmark. Leave it empty if there is no dependency on a particular k8s version. + +An example `ClusterScanBenchmark` is below: + +```yaml +apiVersion: cis.cattle.io/v1 +kind: ClusterScanBenchmark +metadata: + annotations: + meta.helm.sh/release-name: clusterscan-operator + meta.helm.sh/release-namespace: cis-operator-system + creationTimestamp: "2020-08-28T18:18:07Z" + generation: 1 + labels: + app.kubernetes.io/managed-by: Helm + name: cis-1.5 + resourceVersion: "203878" + selfLink: /apis/cis.cattle.io/v1/clusterscanbenchmarks/cis-1.5 + uid: 309e543e-9102-4091-be91-08d7af7fb7a7 +spec: + clusterProvider: "" + minKubernetesVersion: 1.15.0 +``` \ No newline at end of file diff --git a/content/rancher/v2.6/en/security/cis-scans/custom-benchmark/_index.md b/content/rancher/v2.6/en/security/cis-scans/custom-benchmark/_index.md new file mode 100644 index 00000000000..36f70ccaa55 --- /dev/null +++ b/content/rancher/v2.6/en/security/cis-scans/custom-benchmark/_index.md @@ -0,0 +1,84 @@ +--- +title: Creating a Custom Benchmark Version for Running a Cluster Scan +weight: 4 +--- + +Each Benchmark Version defines a set of test configuration files that define the CIS tests to be run by the kube-bench tool. +The `rancher-cis-benchmark` application installs a few default Benchmark Versions which are listed under CIS Benchmark application menu. + +But there could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them. + +It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application. + +When a cluster scan is run, you need to select a Profile which points to a specific Benchmark Version. + +Follow all the steps below to add a custom Benchmark Version and run a scan using it. + +1. [Prepare the Custom Benchmark Version ConfigMap](#1-prepare-the-custom-benchmark-version-configmap) +2. [Add a Custom Benchmark Version to a Cluster](#2-add-a-custom-benchmark-version-to-a-cluster) +3. [Create a New Profile for the Custom Benchmark Version](#3-create-a-new-profile-for-the-custom-benchmark-version) +4. [Run a Scan Using the Custom Benchmark Version](#4-run-a-scan-using-the-custom-benchmark-version) + +### 1. Prepare the Custom Benchmark Version ConfigMap + +To create a custom benchmark version, first you need to create a ConfigMap containing the benchmark version's config files and upload it to your Kubernetes cluster where you want to run the scan. + +To prepare a custom benchmark version ConfigMap, suppose we want to add a custom Benchmark Version named `foo`. + +1. Create a directory named `foo` and inside this directory, place all the config YAML files that the kube-bench tool looks for. For example, here are the config YAML files for a Generic CIS 1.5 Benchmark Version https://github.com/aquasecurity/kube-bench/tree/master/cfg/cis-1.5 +1. Place the complete `config.yaml` file, which includes all the components that should be tested. +1. 
Add the Benchmark version name to the `target_mapping` section of the `config.yaml`: + + ```yaml + target_mapping: + "foo": + - "master" + - "node" + - "controlplane" + - "etcd" + - "policies" + ``` +1. Upload this directory to your Kubernetes Cluster by creating a ConfigMap: + + ```yaml + kubectl create configmap -n foo --from-file= + ``` + +### 2. Add a Custom Benchmark Version to a Cluster + +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. +1. In the left navigation bar, click **CIS Benchmark > Benchmark Version**. +1. Click **Create**. +1. Enter the **Name** and a description for your custom benchmark version. +1. Choose the cluster provider that your benchmark version applies to. +1. Choose the ConfigMap you have uploaded from the dropdown. +1. Add the minimum and maximum Kubernetes version limits applicable, if any. +1. Click **Create**. + +### 3. Create a New Profile for the Custom Benchmark Version + +To run a scan using your custom benchmark version, you need to add a new Profile pointing to this benchmark version. + +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. +1. In the left navigation bar, click **CIS Benchmark > Profile**. +1. Click **Create**. +1. Provide a **Name** and description. In this example, we name it `foo-profile`. +1. Choose the Benchmark Version from the dropdown. +1. Click **Create**. + +### 4. Run a Scan Using the Custom Benchmark Version + +Once the Profile pointing to your custom benchmark version `foo` has been created, you can create a new Scan to run the custom test configs in the Benchmark Version. + +To run a scan, + +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. +1. In the left navigation bar, click **CIS Benchmark > Scan**. +1. Click **Create**. +1. Choose the new cluster scan profile. +1. Click **Create**. + +**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears. diff --git a/content/rancher/v2.6/en/security/cis-scans/rbac/_index.md b/content/rancher/v2.6/en/security/cis-scans/rbac/_index.md new file mode 100644 index 00000000000..ad2b47bff2c --- /dev/null +++ b/content/rancher/v2.6/en/security/cis-scans/rbac/_index.md @@ -0,0 +1,50 @@ +--- +title: Roles-based Access Control +shortTitle: RBAC +weight: 3 +--- + +This section describes the permissions required to use the rancher-cis-benchmark App. + +The rancher-cis-benchmark is a cluster-admin only feature by default. + +However, the `rancher-cis-benchmark` chart installs these two default `ClusterRoles`: + +- cis-admin +- cis-view + +In Rancher, only cluster owners and global administrators have `cis-admin` access by default. + +Note: If you were using the `cis-edit` role added in Rancher v2.5 setup, it has now been removed since +Rancher v2.5.2 because it essentially is same as `cis-admin`. If you happen to create any clusterrolebindings +for `cis-edit`, please update them to use `cis-admin` ClusterRole instead. + +# Cluster-Admin Access + +Rancher CIS Scans is a cluster-admin only feature by default. 
+This means only the Rancher global admins, and the cluster’s cluster-owner can: + +- Install/Uninstall the rancher-cis-benchmark App +- See the navigation links for CIS Benchmark CRDs - ClusterScanBenchmarks, ClusterScanProfiles, ClusterScans +- List the default ClusterScanBenchmarks and ClusterScanProfiles +- Create/Edit/Delete new ClusterScanProfiles +- Create/Edit/Delete a new ClusterScan to run the CIS scan on the cluster +- View and Download the ClusterScanReport created after the ClusterScan is complete + + +# Summary of Default Permissions for Kubernetes Default Roles + +The rancher-cis-benchmark creates three `ClusterRoles` and adds the CIS Benchmark CRD access to the following default K8s `ClusterRoles`: + +| ClusterRole created by chart | Default K8s ClusterRole | Permissions given with Role +| ------------------------------| ---------------------------| ---------------------------| +| `cis-admin` | `admin`| Ability to CRUD clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CR +| `cis-view` | `view `| Ability to List(R) clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CR + + +By default only cluster-owner role will have ability to manage and use `rancher-cis-benchmark` feature. + +The other Rancher roles (cluster-member, project-owner, project-member) do not have any default permissions to manage and use rancher-cis-benchmark resources. + +But if a cluster-owner wants to delegate access to other users, they can do so by creating ClusterRoleBindings between these users and the above CIS ClusterRoles manually. +There is no automatic role aggregation supported for the `rancher-cis-benchmark` ClusterRoles. diff --git a/content/rancher/v2.6/en/security/cis-scans/skipped-tests/_index.md b/content/rancher/v2.6/en/security/cis-scans/skipped-tests/_index.md new file mode 100644 index 00000000000..f2b125c0262 --- /dev/null +++ b/content/rancher/v2.6/en/security/cis-scans/skipped-tests/_index.md @@ -0,0 +1,54 @@ +--- +title: Skipped and Not Applicable Tests +weight: 3 +--- + +This section lists the tests that are skipped in the permissive test profile for RKE. + +> All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile. + +# CIS Benchmark v1.5 + +### CIS Benchmark v1.5 Skipped Tests + +| Number | Description | Reason for Skipping | +| ---------- | ------------- | --------- | +| 1.1.12 | Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) | A system service account is required for etcd data directory ownership. Refer to Rancher's hardening guide for more details on how to configure this ownership. | +| 1.2.6 | Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. | +| 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | +| 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. 
| +| 1.2.34 | Ensure that encryption providers are appropriately configured (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. | +| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Automated) | System level configurations are required before provisioning the cluster in order for this argument to be set to true. | +| 4.2.10 | Ensure that the--tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. | +| 5.1.5 | Ensure that default service accounts are not actively used. (Automated) | Kubernetes provides default service accounts to be used. | +| 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | +| 5.2.3 | Minimize the admission of containers wishing to share the host IPC namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | +| 5.2.4 | Minimize the admission of containers wishing to share the host network namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | +| 5.2.5 | Minimize the admission of containers with allowPrivilegeEscalation (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | +| 5.3.2 | Ensure that all Namespaces have Network Policies defined (Automated) | Enabling Network Policies can prevent certain applications from communicating with each other. | +| 5.6.4 | The default namespace should not be used (Automated) | Kubernetes provides a default namespace. | + +### CIS Benchmark v1.5 Not Applicable Tests + +| Number | Description | Reason for being not applicable | +| ---------- | ------------- | --------- | +| 1.1.1 | Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. | +| 1.1.2 | Ensure that the API server pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. | +| 1.1.3 | Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | +| 1.1.4 | Ensure that the controller manager pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | +| 1.1.5 | Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. 
+| 1.1.6 | Ensure that the scheduler pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
+| 1.1.7 | Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
+| 1.1.8 | Ensure that the etcd pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
+| 1.1.13 | Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE do not store the default Kubernetes kubeconfig credentials file on the nodes. |
+| 1.1.14 | Ensure that the admin.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE do not store the default Kubernetes kubeconfig credentials file on the nodes. |
+| 1.1.15 | Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
+| 1.1.16 | Ensure that the scheduler.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
+| 1.1.17 | Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
+| 1.1.18 | Ensure that the controller-manager.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
+| 1.3.6 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handle certificate rotation directly through RKE. |
+| 4.1.1 | Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
+| 4.1.2 | Ensure that the kubelet service file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
+| 4.1.9 | Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
+| 4.1.10 | Ensure that the kubelet configuration file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
+| 4.2.12 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handle certificate rotation directly through RKE. |
\ No newline at end of file
diff --git a/content/rancher/v2.6/en/security/cve/_index.md b/content/rancher/v2.6/en/security/cve/_index.md
index b97cf1a59c5..174acb15bf0 100644
--- a/content/rancher/v2.6/en/security/cve/_index.md
+++ b/content/rancher/v2.6/en/security/cve/_index.md
@@ -1,9 +1,9 @@
 ---
-title: Rancher CVEs and Resolutions
+title: Security Advisories and CVEs
 weight: 300
 ---
 
-Rancher is committed to informing the community of security issues in our products. Rancher will publish CVEs (Common Vulnerabilities and Exposures) for issues we have resolved.
+Rancher is committed to informing the community of security issues in our products. Rancher will publish security advisories and CVEs (Common Vulnerabilities and Exposures) for issues we have resolved. New security advisories are also published on Rancher's GitHub [security page](https://github.com/rancher/rancher/security/advisories).
 
 | ID | Description | Date | Resolution |
 |----|-------------|------|------------|
@@ -18,4 +18,4 @@ Rancher is committed to informing the community of security issues in our produc
 | [CVE-2019-12274](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12274) | Nodes using the built-in node drivers using a file path option allows the machine to read arbitrary files including sensitive ones from inside the Rancher server container. | 5 Jun 2019 | [Rancher v2.2.4](https://github.com/rancher/rancher/releases/tag/v2.2.4), [Rancher v2.1.10](https://github.com/rancher/rancher/releases/tag/v2.1.10) and [Rancher v2.0.15](https://github.com/rancher/rancher/releases/tag/v2.0.15) |
 | [CVE-2019-11202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11202) | The default admin, that is shipped with Rancher, will be re-created upon restart of Rancher despite being explicitly deleted. | 16 Apr 2019 | [Rancher v2.2.2](https://github.com/rancher/rancher/releases/tag/v2.2.2), [Rancher v2.1.9](https://github.com/rancher/rancher/releases/tag/v2.1.9) and [Rancher v2.0.14](https://github.com/rancher/rancher/releases/tag/v2.0.14) |
 | [CVE-2019-6287](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6287) | Project members continue to get access to namespaces from projects that they were removed from if they were added to more than one project. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) |
-| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute administrative privileged commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or greater have specific [instructions]({{}}/rancher/v2.6/en/installation/install-rancher-on-k8s/rollbacks). |
\ No newline at end of file
+| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute administrative privileged commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or greater has specific [instructions]({{}}/rancher/v2.6/en/installation/install-rancher-on-k8s/rollbacks). |
diff --git a/content/rancher/v2.6/en/security/hardening-guides/_index.md b/content/rancher/v2.6/en/security/hardening-guides/_index.md
new file mode 100644
index 00000000000..e6a16445fc1
--- /dev/null
+++ b/content/rancher/v2.6/en/security/hardening-guides/_index.md
@@ -0,0 +1,7 @@
+---
+title: Self-Assessment and Hardening Guides for Rancher v2.6
+shortTitle: Rancher v2.6 Guides
+weight: 1
+---
+
+Rancher v2.6 hardening guides are currently being updated. For the time being, please consult the Rancher v2.5 [self-assessment and hardening guides]({{}}/rancher/v2.5/en/security/rancher-2.5) for more information.
diff --git a/content/rancher/v2.6/en/security/rancher-2.5/1.5-benchmark-2.5/_index.md b/content/rancher/v2.6/en/security/rancher-2.5/1.5-benchmark-2.5/_index.md
deleted file mode 100644
index 463446b78a0..00000000000
--- a/content/rancher/v2.6/en/security/rancher-2.5/1.5-benchmark-2.5/_index.md
+++ /dev/null
@@ -1,2265 +0,0 @@
----
-title: CIS 1.5 Benchmark - Self-Assessment Guide - Rancher v2.5
-weight: 201
----
-
-### CIS v1.5 Kubernetes Benchmark - Rancher v2.5 with Kubernetes v1.15
-
-[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.5/Rancher_1.5_Benchmark_Assessment.pdf)
-
-#### Overview
-
-This document is a companion to the Rancher v2.5 security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of Rancher, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the benchmark.
-
-This guide corresponds to specific versions of the hardening guide, Rancher, CIS Benchmark, and Kubernetes:
-
-Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version
----------------------------|----------|---------|-------
-Hardening Guide with CIS 1.5 Benchmark | Rancher v2.5 | CIS v1.5| Kubernetes v1.15
-
-Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply and will have a result of `Not Applicable`. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher-created clusters.
-
-This document is to be used by Rancher operators, security teams, auditors and decision makers.
-
-For more detail about each audit, including rationales and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.5. You can download the benchmark after logging in to [CISecurity.org]( https://www.cisecurity.org/benchmark/kubernetes/).
-
-#### Testing controls methodology
-
-Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files.
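-
-As an illustrative example of this, the sketch below lists the Kubernetes service containers running on a node; the container name patterns are assumptions based on RKE's conventional naming and may differ in customized deployments:
-
-``` bash
-# List the RKE-managed Kubernetes service containers on this node.
-# The name patterns below are assumptions based on RKE's conventional naming.
-docker ps --format '{{.Names}}' | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kubelet|kube-proxy|etcd'
-```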
- -Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher Labs are provided for testing. -When performing the tests, you will need access to the Docker command line on the hosts of all three RKE roles. The commands also make use of the the [jq](https://stedolan.github.io/jq/) and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) (with valid config) tools to and are required in the testing and evaluation of test results. - -> NOTE: only scored tests are covered in this guide. - -### Controls - ---- -## 1 Master Node Security Configuration -### 1.1 Master Node Configuration Files - -#### 1.1.1 Ensure that the API server pod specification file permissions are set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the API server. All configuration is passed in as arguments at container run time. - -#### 1.1.2 Ensure that the API server pod specification file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the API server. All configuration is passed in as arguments at container run time. - -#### 1.1.3 Ensure that the controller manager pod specification file permissions are set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the controller manager. All configuration is passed in as arguments at container run time. - -#### 1.1.4 Ensure that the controller manager pod specification file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the controller manager. All configuration is passed in as arguments at container run time. - -#### 1.1.5 Ensure that the scheduler pod specification file permissions are set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. - -#### 1.1.6 Ensure that the scheduler pod specification file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. - -#### 1.1.7 Ensure that the etcd pod specification file permissions are set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. - -#### 1.1.8 Ensure that the etcd pod specification file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. - -#### 1.1.11 Ensure that the etcd data directory permissions are set to `700` or more restrictive (Scored) - -**Result:** PASS - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument `--data-dir`, -from the below command: - -``` bash -ps -ef | grep etcd -``` - -Run the below command (based on the etcd data directory found above). 
For example, - -``` bash -chmod 700 /var/lib/etcd -``` - -**Audit Script:** 1.1.11.sh - -``` -#!/bin/bash -e - -etcd_bin=${1} - -test_dir=$(ps -ef | grep ${etcd_bin} | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%') - -docker inspect etcd | jq -r '.[].HostConfig.Binds[]' | grep "${test_dir}" | cut -d ":" -f 1 | xargs stat -c %a -``` - -**Audit Execution:** - -``` -./1.1.11.sh etcd -``` - -**Expected result**: - -``` -'700' is equal to '700' -``` - -#### 1.1.12 Ensure that the etcd data directory ownership is set to `etcd:etcd` (Scored) - -**Result:** PASS - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument `--data-dir`, -from the below command: - -``` bash -ps -ef | grep etcd -``` - -Run the below command (based on the etcd data directory found above). -For example, -``` bash -chown etcd:etcd /var/lib/etcd -``` - -**Audit Script:** 1.1.12.sh - -``` -#!/bin/bash -e - -etcd_bin=${1} - -test_dir=$(ps -ef | grep ${etcd_bin} | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%') - -docker inspect etcd | jq -r '.[].HostConfig.Binds[]' | grep "${test_dir}" | cut -d ":" -f 1 | xargs stat -c %U:%G -``` - -**Audit Execution:** - -``` -./1.1.12.sh etcd -``` - -**Expected result**: - -``` -'etcd:etcd' is present -``` - -#### 1.1.13 Ensure that the `admin.conf` file permissions are set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE does not store the kubernetes default kubeconfig credentials file on the nodes. It’s presented to user where RKE is run. -We recommend that this `kube_config_cluster.yml` file be kept in secure store. - -#### 1.1.14 Ensure that the admin.conf file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE does not store the kubernetes default kubeconfig credentials file on the nodes. It’s presented to user where RKE is run. -We recommend that this `kube_config_cluster.yml` file be kept in secure store. - -#### 1.1.15 Ensure that the `scheduler.conf` file permissions are set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. - -#### 1.1.16 Ensure that the `scheduler.conf` file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. - -#### 1.1.17 Ensure that the `controller-manager.conf` file permissions are set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the controller manager. All configuration is passed in as arguments at container run time. - -#### 1.1.18 Ensure that the `controller-manager.conf` file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the controller manager. All configuration is passed in as arguments at container run time. - -#### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to `root:root` (Scored) - -**Result:** PASS - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. 
-For example, - -``` bash -chown -R root:root /etc/kubernetes/ssl -``` - -**Audit:** - -``` -stat -c %U:%G /etc/kubernetes/ssl -``` - -**Expected result**: - -``` -'root:root' is present -``` - -#### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to `644` or more restrictive (Scored) - -**Result:** PASS - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. -For example, - -``` bash -chmod -R 644 /etc/kubernetes/ssl -``` - -**Audit Script:** check_files_permissions.sh - -``` -#!/usr/bin/env bash - -# This script is used to ensure the file permissions are set to 644 or -# more restrictive for all files in a given directory or a wildcard -# selection of files -# -# inputs: -# $1 = /full/path/to/directory or /path/to/fileswithpattern -# ex: !(*key).pem -# -# $2 (optional) = permission (ex: 600) -# -# outputs: -# true/false - -# Turn on "extended glob" for use of '!' in wildcard -shopt -s extglob - -# Turn off history to avoid surprises when using '!' -set -H - -USER_INPUT=$1 - -if [[ "${USER_INPUT}" == "" ]]; then - echo "false" - exit -fi - - -if [[ -d ${USER_INPUT} ]]; then - PATTERN="${USER_INPUT}/*" -else - PATTERN="${USER_INPUT}" -fi - -PERMISSION="" -if [[ "$2" != "" ]]; then - PERMISSION=$2 -fi - -FILES_PERMISSIONS=$(stat -c %n\ %a ${PATTERN}) - -while read -r fileInfo; do - p=$(echo ${fileInfo} | cut -d' ' -f2) - - if [[ "${PERMISSION}" != "" ]]; then - if [[ "$p" != "${PERMISSION}" ]]; then - echo "false" - exit - fi - else - if [[ "$p" != "644" && "$p" != "640" && "$p" != "600" ]]; then - echo "false" - exit - fi - fi -done <<< "${FILES_PERMISSIONS}" - - -echo "true" -exit -``` - -**Audit Execution:** - -``` -./check_files_permissions.sh '/etc/kubernetes/ssl/*.pem' -``` - -**Expected result**: - -``` -'true' is present -``` - -#### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to `600` (Scored) - -**Result:** PASS - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. -For example, - -``` bash -chmod -R 600 /etc/kubernetes/ssl/certs/serverca -``` - -**Audit Script:** 1.1.21.sh - -``` -#!/bin/bash -e -check_dir=${1:-/etc/kubernetes/ssl} - -for file in $(find ${check_dir} -name "*key.pem"); do - file_permission=$(stat -c %a ${file}) - if [[ "${file_permission}" == "600" ]]; then - continue - else - echo "FAIL: ${file} ${file_permission}" - exit 1 - fi -done - -echo "pass" -``` - -**Audit Execution:** - -``` -./1.1.21.sh /etc/kubernetes/ssl -``` - -**Expected result**: - -``` -'pass' is present -``` - -### 1.2 API Server - -#### 1.2.2 Ensure that the `--basic-auth-file` argument is not set (Scored) - -**Result:** PASS - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and remove the `--basic-auth-file=` parameter. - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--basic-auth-file' is not present -``` - -#### 1.2.3 Ensure that the `--token-auth-file` parameter is not set (Scored) - -**Result:** PASS - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and remove the `--token-auth-file=` parameter. 
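-
-As an additional, illustrative spot check (not part of the official audit), you can inspect the arguments of the running container directly. This sketch assumes the RKE API server container is named `kube-apiserver` and that `jq` is installed:
-
-``` bash
-# Print the kube-apiserver container arguments and flag any use of --token-auth-file.
-docker inspect kube-apiserver | jq -r '.[].Args[]' | grep -- '--token-auth-file' \
-  && echo "fail: --token-auth-file is set" \
-  || echo "pass: --token-auth-file is not present"
-```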
- -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--token-auth-file' is not present -``` - -#### 1.2.4 Ensure that the `--kubelet-https` argument is set to true (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and remove the `--kubelet-https` parameter. - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--kubelet-https' is present OR '--kubelet-https' is not present -``` - -#### 1.2.5 Ensure that the `--kubelet-client-certificate` and `--kubelet-client-key` arguments are set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the -apiserver and kubelets. Then, edit API server pod specification file -`/etc/kubernetes/manifests/kube-apiserver.yaml` on the master node and set the -kubelet client certificate and key parameters as below. - -``` bash ---kubelet-client-certificate= ---kubelet-client-key= -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present -``` - -#### 1.2.6 Ensure that the `--kubelet-certificate-authority` argument is set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Follow the Kubernetes documentation and setup the TLS connection between -the apiserver and kubelets. Then, edit the API server pod specification file -`/etc/kubernetes/manifests/kube-apiserver.yaml` on the master node and set the -`--kubelet-certificate-authority` parameter to the path to the cert file for the certificate authority. -`--kubelet-certificate-authority=` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--kubelet-certificate-authority' is present -``` - -#### 1.2.7 Ensure that the `--authorization-mode` argument is not set to `AlwaysAllow` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--authorization-mode` parameter to values other than `AlwaysAllow`. -One such example could be as below. - -``` bash ---authorization-mode=RBAC -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'Node,RBAC' not have 'AlwaysAllow' -``` - -#### 1.2.8 Ensure that the `--authorization-mode` argument includes `Node` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--authorization-mode` parameter to a value that includes `Node`. 
- -``` bash ---authorization-mode=Node,RBAC -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'Node,RBAC' has 'Node' -``` - -#### 1.2.9 Ensure that the `--authorization-mode` argument includes `RBAC` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--authorization-mode` parameter to a value that includes RBAC, -for example: - -``` bash ---authorization-mode=Node,RBAC -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'Node,RBAC' has 'RBAC' -``` - -#### 1.2.11 Ensure that the admission control plugin `AlwaysAdmit` is not set (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and either remove the `--enable-admission-plugins` parameter, or set it to a -value that does not include `AlwaysAdmit`. - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present -``` - -#### 1.2.14 Ensure that the admission control plugin `ServiceAccount` is set (Scored) - -**Result:** PASS - -**Remediation:** -Follow the documentation and create ServiceAccount objects as per your environment. -Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and ensure that the `--disable-admission-plugins` parameter is set to a -value that does not include `ServiceAccount`. - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'ServiceAccount' OR '--enable-admission-plugins' is not present -``` - -#### 1.2.15 Ensure that the admission control plugin `NamespaceLifecycle` is set (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--disable-admission-plugins` parameter to -ensure it does not include `NamespaceLifecycle`. - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present -``` - -#### 1.2.16 Ensure that the admission control plugin `PodSecurityPolicy` is set (Scored) - -**Result:** PASS - -**Remediation:** -Follow the documentation and create Pod Security Policy objects as per your environment. -Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--enable-admission-plugins` parameter to a -value that includes `PodSecurityPolicy`: - -``` bash ---enable-admission-plugins=...,PodSecurityPolicy,... -``` - -Then restart the API Server. 
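-
-A minimal sketch of how the plugin list can be pulled out of the running process, assuming the flag is passed in the single-argument `--enable-admission-plugins=...` form:
-
-``` bash
-# Extract the admission plugin list from the kube-apiserver command line
-# and check that PodSecurityPolicy is included.
-ps -ef | grep kube-apiserver | grep -v grep \
-  | grep -o -- '--enable-admission-plugins=[^ ]*' \
-  | grep -q 'PodSecurityPolicy' && echo "pass" || echo "fail"
-```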
- -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'PodSecurityPolicy' -``` - -#### 1.2.17 Ensure that the admission control plugin `NodeRestriction` is set (Scored) - -**Result:** PASS - -**Remediation:** -Follow the Kubernetes documentation and configure `NodeRestriction` plug-in on kubelets. -Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--enable-admission-plugins` parameter to a -value that includes `NodeRestriction`. - -``` bash ---enable-admission-plugins=...,NodeRestriction,... -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'NodeRestriction' -``` - -#### 1.2.18 Ensure that the `--insecure-bind-address` argument is not set (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and remove the `--insecure-bind-address` parameter. - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--insecure-bind-address' is not present -``` - -#### 1.2.19 Ensure that the `--insecure-port` argument is set to `0` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the below parameter. - -``` bash ---insecure-port=0 -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'0' is equal to '0' -``` - -#### 1.2.20 Ensure that the `--secure-port` argument is not set to `0` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and either remove the `--secure-port` parameter or -set it to a different **(non-zero)** desired port. - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -6443 is greater than 0 OR '--secure-port' is not present -``` - -#### 1.2.21 Ensure that the `--profiling` argument is set to `false` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the below parameter. 
- -``` bash ---profiling=false -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'false' is equal to 'false' -``` - -#### 1.2.22 Ensure that the `--audit-log-path` argument is set (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--audit-log-path` parameter to a suitable path and -file where you would like audit logs to be written, for example: - -``` bash ---audit-log-path=/var/log/apiserver/audit.log -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--audit-log-path' is present -``` - -#### 1.2.23 Ensure that the `--audit-log-maxage` argument is set to `30` or as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--audit-log-maxage` parameter to `30` or as an appropriate number of days: - -``` bash ---audit-log-maxage=30 -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -30 is greater or equal to 30 -``` - -#### 1.2.24 Ensure that the `--audit-log-maxbackup` argument is set to `10` or as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--audit-log-maxbackup` parameter to `10` or to an appropriate -value. - -``` bash ---audit-log-maxbackup=10 -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -10 is greater or equal to 10 -``` - -#### 1.2.25 Ensure that the `--audit-log-maxsize` argument is set to `100` or as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--audit-log-maxsize` parameter to an appropriate size in **MB**. -For example, to set it as `100` **MB**: - -``` bash ---audit-log-maxsize=100 -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -100 is greater or equal to 100 -``` - -#### 1.2.26 Ensure that the `--request-timeout` argument is set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -and set the below parameter as appropriate and if needed. -For example, - -``` bash ---request-timeout=300s -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--request-timeout' is not present OR '--request-timeout' is present -``` - -#### 1.2.27 Ensure that the `--service-account-lookup` argument is set to `true` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the below parameter. - -``` bash ---service-account-lookup=true -``` - -Alternatively, you can delete the `--service-account-lookup` parameter from this file so -that the default takes effect. 
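-
-To see which of the two states applies on a given node, a small sketch such as the following can help; it assumes that, when set, the flag is passed in the `--service-account-lookup=<value>` form:
-
-``` bash
-# Print the --service-account-lookup flag if it is set.
-# No output means the flag is absent and the default (true) applies.
-ps -ef | grep kube-apiserver | grep -v grep | grep -o -- '--service-account-lookup=[^ ]*'
-```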
- -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--service-account-lookup' is not present OR 'true' is equal to 'true' -``` - -#### 1.2.28 Ensure that the `--service-account-key-file` argument is set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--service-account-key-file` parameter -to the public key file for service accounts: - -``` bash ---service-account-key-file= -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--service-account-key-file' is present -``` - -#### 1.2.29 Ensure that the `--etcd-certfile` and `--etcd-keyfile` arguments are set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. -Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the **etcd** certificate and **key** file parameters. - -``` bash ---etcd-certfile= ---etcd-keyfile= -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--etcd-certfile' is present AND '--etcd-keyfile' is present -``` - -#### 1.2.30 Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection on the apiserver. -Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the TLS certificate and private key file parameters. - -``` bash ---tls-cert-file= ---tls-private-key-file= -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--tls-cert-file' is present AND '--tls-private-key-file' is present -``` - -#### 1.2.31 Ensure that the `--client-ca-file` argument is set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection on the apiserver. -Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the client certificate authority file. - -``` bash ---client-ca-file= -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--client-ca-file' is present -``` - -#### 1.2.32 Ensure that the `--etcd-cafile` argument is set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. -Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the etcd certificate authority file parameter. - -``` bash ---etcd-cafile= -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--etcd-cafile' is present -``` - -#### 1.2.33 Ensure that the `--encryption-provider-config` argument is set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Follow the Kubernetes documentation and configure a EncryptionConfig file. 
-Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--encryption-provider-config` parameter to the path of that file: - -``` bash ---encryption-provider-config= -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--encryption-provider-config' is present -``` - -#### 1.2.34 Ensure that encryption providers are appropriately configured (Scored) - -**Result:** PASS - -**Remediation:** -Follow the Kubernetes documentation and configure a `EncryptionConfig` file. -In this file, choose **aescbc**, **kms** or **secretbox** as the encryption provider. - -**Audit Script:** 1.2.34.sh - -``` -#!/bin/bash -e - -check_file=${1} - -grep -q -E 'aescbc|kms|secretbox' ${check_file} -if [ $? -eq 0 ]; then - echo "--pass" - exit 0 -else - echo "fail: encryption provider found in ${check_file}" - exit 1 -fi -``` - -**Audit Execution:** - -``` -./1.2.34.sh /etc/kubernetes/ssl/encryption.yaml -``` - -**Expected result**: - -``` -'--pass' is present -``` - -### 1.3 Controller Manager - -#### 1.3.1 Ensure that the `--terminated-pod-gc-threshold` argument is set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml` -on the master node and set the `--terminated-pod-gc-threshold` to an appropriate threshold, -for example: - -``` bash ---terminated-pod-gc-threshold=10 -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected result**: - -``` -'--terminated-pod-gc-threshold' is present -``` - -#### 1.3.2 Ensure that the `--profiling` argument is set to false (Scored) - -**Result:** PASS - -**Remediation:** -Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml` -on the master node and set the below parameter. - -``` bash ---profiling=false -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected result**: - -``` -'false' is equal to 'false' -``` - -#### 1.3.3 Ensure that the `--use-service-account-credentials` argument is set to `true` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml` -on the master node to set the below parameter. - -``` bash ---use-service-account-credentials=true -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected result**: - -``` -'true' is not equal to 'false' -``` - -#### 1.3.4 Ensure that the `--service-account-private-key-file` argument is set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml` -on the master node and set the `--service-account-private-key-file` parameter -to the private key file for service accounts. 
- -``` bash ---service-account-private-key-file= -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected result**: - -``` -'--service-account-private-key-file' is present -``` - -#### 1.3.5 Ensure that the `--root-ca-file` argument is set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml` -on the master node and set the `--root-ca-file` parameter to the certificate bundle file`. - -``` bash ---root-ca-file= -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected result**: - -``` -'--root-ca-file' is present -``` - -#### 1.3.6 Ensure that the `RotateKubeletServerCertificate` argument is set to `true` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml` -on the master node and set the `--feature-gates` parameter to include `RotateKubeletServerCertificate=true`. - -``` bash ---feature-gates=RotateKubeletServerCertificate=true -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected result**: - -``` -'RotateKubeletServerCertificate=true' is equal to 'RotateKubeletServerCertificate=true' -``` - -#### 1.3.7 Ensure that the `--bind-address argument` is set to `127.0.0.1` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml` -on the master node and ensure the correct value for the `--bind-address` parameter. - -**Audit:** - -``` -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected result**: - -``` -'--bind-address' is present OR '--bind-address' is not present -``` - -### 1.4 Scheduler - -#### 1.4.1 Ensure that the `--profiling` argument is set to `false` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the Scheduler pod specification file `/etc/kubernetes/manifests/kube-scheduler.yaml` file -on the master node and set the below parameter. - -``` bash ---profiling=false -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected result**: - -``` -'false' is equal to 'false' -``` - -#### 1.4.2 Ensure that the `--bind-address` argument is set to `127.0.0.1` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the Scheduler pod specification file `/etc/kubernetes/manifests/kube-scheduler.yaml` -on the master node and ensure the correct value for the `--bind-address` parameter. - -**Audit:** - -``` -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected result**: - -``` -'--bind-address' is present OR '--bind-address' is not present -``` - -## 2 Etcd Node Configuration -### 2 Etcd Node Configuration Files - -#### 2.1 Ensure that the `--cert-file` and `--key-file` arguments are set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Follow the etcd service documentation and configure TLS encryption. -Then, edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` -on the master node and set the below parameters. 
- -``` bash ---cert-file= ---key-file= -``` - -**Audit:** - -``` -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected result**: - -``` -'--cert-file' is present AND '--key-file' is present -``` - -#### 2.2 Ensure that the `--client-cert-auth` argument is set to `true` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the master -node and set the below parameter. - -``` bash ---client-cert-auth="true" -``` - -**Audit:** - -``` -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected result**: - -``` -'true' is equal to 'true' -``` - -#### 2.3 Ensure that the `--auto-tls` argument is not set to `true` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the master -node and either remove the `--auto-tls` parameter or set it to `false`. - -``` bash - --auto-tls=false -``` - -**Audit:** - -``` -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected result**: - -``` -'--auto-tls' is not present OR '--auto-tls' is not present -``` - -#### 2.4 Ensure that the `--peer-cert-file` and `--peer-key-file` arguments are set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Follow the etcd service documentation and configure peer TLS encryption as appropriate -for your etcd cluster. Then, edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the -master node and set the below parameters. - -``` bash ---peer-client-file= ---peer-key-file= -``` - -**Audit:** - -``` -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected result**: - -``` -'--peer-cert-file' is present AND '--peer-key-file' is present -``` - -#### 2.5 Ensure that the `--peer-client-cert-auth` argument is set to `true` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the master -node and set the below parameter. - -``` bash ---peer-client-cert-auth=true -``` - -**Audit:** - -``` -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected result**: - -``` -'true' is equal to 'true' -``` - -#### 2.6 Ensure that the `--peer-auto-tls` argument is not set to `true` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the master -node and either remove the `--peer-auto-tls` parameter or set it to `false`. - -``` bash ---peer-auto-tls=false -``` - -**Audit:** - -``` -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected result**: - -``` -'--peer-auto-tls' is not present OR '--peer-auto-tls' is present -``` - -## 3 Control Plane Configuration -### 3.2 Logging - -#### 3.2.1 Ensure that a minimal audit policy is created (Scored) - -**Result:** PASS - -**Remediation:** -Create an audit policy file for your cluster. - -**Audit Script:** 3.2.1.sh - -``` -#!/bin/bash -e - -api_server_bin=${1} - -/bin/ps -ef | /bin/grep ${api_server_bin} | /bin/grep -v ${0} | /bin/grep -v grep -``` - -**Audit Execution:** - -``` -./3.2.1.sh kube-apiserver -``` - -**Expected result**: - -``` -'--audit-policy-file' is present -``` - -## 4 Worker Node Security Configuration -### 4.1 Worker Node Configuration Files - -#### 4.1.1 Ensure that the kubelet service file permissions are set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the kubelet service. 
All configuration is passed in as arguments at container run time. - -#### 4.1.2 Ensure that the kubelet service file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. - -#### 4.1.3 Ensure that the proxy kubeconfig file permissions are set to `644` or more restrictive (Scored) - -**Result:** PASS - -**Remediation:** -Run the below command (based on the file location on your system) on the each worker node. -For example, - -``` bash -chmod 644 /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml -``` - -**Audit:** - -``` -/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %a /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' -``` - -**Expected result**: - -``` -'644' is present OR '640' is present OR '600' is equal to '600' OR '444' is present OR '440' is present OR '400' is present OR '000' is present -``` - -#### 4.1.4 Ensure that the proxy kubeconfig file ownership is set to `root:root` (Scored) - -**Result:** PASS - -**Remediation:** -Run the below command (based on the file location on your system) on the each worker node. -For example, - -``` bash -chown root:root /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml -``` - -**Audit:** - -``` -/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' -``` - -**Expected result**: - -``` -'root:root' is present -``` - -#### 4.1.5 Ensure that the kubelet.conf file permissions are set to `644` or more restrictive (Scored) - -**Result:** PASS - -**Remediation:** -Run the below command (based on the file location on your system) on the each worker node. -For example, - -``` bash -chmod 644 /etc/kubernetes/ssl/kubecfg-kube-node.yaml -``` - -**Audit:** - -``` -/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %a /etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' -``` - -**Expected result**: - -``` -'644' is present OR '640' is present OR '600' is equal to '600' OR '444' is present OR '440' is present OR '400' is present OR '000' is present -``` - -#### 4.1.6 Ensure that the kubelet.conf file ownership is set to `root:root` (Scored) - -**Result:** PASS - -**Remediation:** -Run the below command (based on the file location on your system) on the each worker node. -For example, - -``` bash -chown root:root /etc/kubernetes/ssl/kubecfg-kube-node.yaml -``` - -**Audit:** - -``` -/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' -``` - -**Expected result**: - -``` -'root:root' is equal to 'root:root' -``` - -#### 4.1.7 Ensure that the certificate authorities file permissions are set to `644` or more restrictive (Scored) - -**Result:** PASS - -**Remediation:** -Run the following command to modify the file permissions of the - -``` bash ---client-ca-file chmod 644 -``` - -**Audit:** - -``` -stat -c %a /etc/kubernetes/ssl/kube-ca.pem -``` - -**Expected result**: - -``` -'644' is equal to '644' OR '640' is present OR '600' is present -``` - -#### 4.1.8 Ensure that the client certificate authorities file ownership is set to `root:root` (Scored) - -**Result:** PASS - -**Remediation:** -Run the following command to modify the ownership of the `--client-ca-file`. 
- -``` bash -chown root:root -``` - -**Audit:** - -``` -/bin/sh -c 'if test -e /etc/kubernetes/ssl/kube-ca.pem; then stat -c %U:%G /etc/kubernetes/ssl/kube-ca.pem; fi' -``` - -**Expected result**: - -``` -'root:root' is equal to 'root:root' -``` - -#### 4.1.9 Ensure that the kubelet configuration file has permissions set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. - -#### 4.1.10 Ensure that the kubelet configuration file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. - -### 4.2 Kubelet - -#### 4.2.1 Ensure that the `--anonymous-auth argument` is set to false (Scored) - -**Result:** PASS - -**Remediation:** -If using a Kubelet config file, edit the file to set authentication: `anonymous`: enabled to -`false`. -If using executable arguments, edit the kubelet service file -`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and -set the below parameter in `KUBELET_SYSTEM_PODS_ARGS` variable. - -``` bash ---anonymous-auth=false -``` - -Based on your system, restart the kubelet service. For example: - -``` bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'false' is equal to 'false' -``` - -#### 4.2.2 Ensure that the `--authorization-mode` argument is not set to `AlwaysAllow` (Scored) - -**Result:** PASS - -**Remediation:** -If using a Kubelet config file, edit the file to set authorization: `mode` to `Webhook`. If -using executable arguments, edit the kubelet service file -`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and -set the below parameter in `KUBELET_AUTHZ_ARGS` variable. - -``` bash ---authorization-mode=Webhook -``` - -Based on your system, restart the kubelet service. For example: - -``` bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'Webhook' not have 'AlwaysAllow' -``` - -#### 4.2.3 Ensure that the `--client-ca-file` argument is set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -If using a Kubelet config file, edit the file to set authentication: `x509`: `clientCAFile` to -the location of the client CA file. -If using command line arguments, edit the kubelet service file -`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and -set the below parameter in `KUBELET_AUTHZ_ARGS` variable. - -``` bash ---client-ca-file= -``` - -Based on your system, restart the kubelet service. For example: - -``` bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'--client-ca-file' is present -``` - -#### 4.2.4 Ensure that the `--read-only-port` argument is set to `0` (Scored) - -**Result:** PASS - -**Remediation:** -If using a Kubelet config file, edit the file to set `readOnlyPort` to `0`. 
-If using command line arguments, edit the kubelet service file -`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and -set the below parameter in `KUBELET_SYSTEM_PODS_ARGS` variable. - -``` bash ---read-only-port=0 -``` - -Based on your system, restart the kubelet service. For example: - -``` bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'0' is equal to '0' -``` - -#### 4.2.5 Ensure that the `--streaming-connection-idle-timeout` argument is not set to `0` (Scored) - -**Result:** PASS - -**Remediation:** -If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a -value other than `0`. -If using command line arguments, edit the kubelet service file -`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and -set the below parameter in `KUBELET_SYSTEM_PODS_ARGS` variable. - -``` bash ---streaming-connection-idle-timeout=5m -``` - -Based on your system, restart the kubelet service. For example: - -``` bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'30m' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present -``` - -#### 4.2.6 Ensure that the ```--protect-kernel-defaults``` argument is set to `true` (Scored) - -**Result:** PASS - -**Remediation:** -If using a Kubelet config file, edit the file to set `protectKernelDefaults`: `true`. -If using command line arguments, edit the kubelet service file -`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and -set the below parameter in `KUBELET_SYSTEM_PODS_ARGS` variable. - -``` bash ---protect-kernel-defaults=true -``` - -Based on your system, restart the kubelet service. For example: - -``` bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'true' is equal to 'true' -``` - -#### 4.2.7 Ensure that the `--make-iptables-util-chains` argument is set to `true` (Scored) - -**Result:** PASS - -**Remediation:** -If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains`: `true`. -If using command line arguments, edit the kubelet service file -`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and -remove the `--make-iptables-util-chains` argument from the -`KUBELET_SYSTEM_PODS_ARGS` variable. -Based on your system, restart the kubelet service. For example: - -```bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'true' is equal to 'true' OR '--make-iptables-util-chains' is not present -``` - -#### 4.2.10 Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. 
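-
-Although there is no configuration file to check, you can still confirm how the kubelet container was started. This sketch assumes the RKE kubelet container is named `kubelet` and that `jq` is available:
-
-``` bash
-# Inspect the kubelet container arguments for the TLS certificate flags.
-docker inspect kubelet | jq -r '.[].Args[]' | grep -E -- '--tls-(cert|private-key)-file'
-```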
- -#### 4.2.11 Ensure that the `--rotate-certificates` argument is not set to `false` (Scored) - -**Result:** PASS - -**Remediation:** -If using a Kubelet config file, edit the file to add the line `rotateCertificates`: `true` or -remove it altogether to use the default value. -If using command line arguments, edit the kubelet service file -`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and -remove `--rotate-certificates=false` argument from the `KUBELET_CERTIFICATE_ARGS` -variable. -Based on your system, restart the kubelet service. For example: - -``` bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'--rotate-certificates' is present OR '--rotate-certificates' is not present -``` - -#### 4.2.12 Ensure that the `RotateKubeletServerCertificate` argument is set to `true` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the kubelet service file `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` -on each worker node and set the below parameter in `KUBELET_CERTIFICATE_ARGS` variable. - -``` bash ---feature-gates=RotateKubeletServerCertificate=true -``` - -Based on your system, restart the kubelet service. For example: - -``` bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'true' is equal to 'true' -``` - -## 5 Kubernetes Policies -### 5.1 RBAC and Service Accounts - -#### 5.1.5 Ensure that default service accounts are not actively used. (Scored) - -**Result:** PASS - -**Remediation:** -Create explicit service accounts wherever a Kubernetes workload requires specific access -to the Kubernetes API server. -Modify the configuration of each default service account to include this value - -``` bash -automountServiceAccountToken: false -``` - -**Audit Script:** 5.1.5.sh - -``` -#!/bin/bash - -export KUBECONFIG=${KUBECONFIG:-/root/.kube/config} - -kubectl version > /dev/null -if [ $? -ne 0 ]; then - echo "fail: kubectl failed" - exit 1 -fi - -accounts="$(kubectl --kubeconfig=${KUBECONFIG} get serviceaccounts -A -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true)) | "fail \(.metadata.name) \(.metadata.namespace)"')" - -if [[ "${accounts}" != "" ]]; then - echo "fail: automountServiceAccountToken not false for accounts: ${accounts}" - exit 1 -fi - -default_binding="$(kubectl get rolebindings,clusterrolebindings -A -o json | jq -r '.items[] | select(.subjects[].kind=="ServiceAccount" and .subjects[].name=="default" and .metadata.name=="default").metadata.uid' | wc -l)" - -if [[ "${default_binding}" -gt 0 ]]; then - echo "fail: default service accounts have non default bindings" - exit 1 -fi - -echo "--pass" -exit 0 -``` - -**Audit Execution:** - -``` -./5.1.5.sh -``` - -**Expected result**: - -``` -'--pass' is present -``` - -### 5.2 Pod Security Policies - -#### 5.2.2 Minimize the admission of containers wishing to share the host process ID namespace (Scored) - -**Result:** PASS - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -`.spec.hostPID` field is omitted or set to `false`. 
- -**Audit:** - -``` -kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' -``` - -**Expected result**: - -``` -1 is greater than 0 -``` - -#### 5.2.3 Minimize the admission of containers wishing to share the host IPC namespace (Scored) - -**Result:** PASS - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -`.spec.hostIPC` field is omitted or set to `false`. - -**Audit:** - -``` -kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' -``` - -**Expected result**: - -``` -1 is greater than 0 -``` - -#### 5.2.4 Minimize the admission of containers wishing to share the host network namespace (Scored) - -**Result:** PASS - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -`.spec.hostNetwork` field is omitted or set to `false`. - -**Audit:** - -``` -kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' -``` - -**Expected result**: - -``` -1 is greater than 0 -``` - -#### 5.2.5 Minimize the admission of containers with `allowPrivilegeEscalation` (Scored) - -**Result:** PASS - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -`.spec.allowPrivilegeEscalation` field is omitted or set to `false`. - -**Audit:** - -``` -kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' -``` - -**Expected result**: - -``` -1 is greater than 0 -``` - -### 5.3 Network Policies and CNI - -#### 5.3.2 Ensure that all Namespaces have Network Policies defined (Scored) - -**Result:** PASS - -**Remediation:** -Follow the documentation and create `NetworkPolicy` objects as you need them. - -**Audit Script:** 5.3.2.sh - -``` -#!/bin/bash -e - -export KUBECONFIG=${KUBECONFIG:-"/root/.kube/config"} - -kubectl version > /dev/null -if [ $? -ne 0 ]; then - echo "fail: kubectl failed" - exit 1 -fi - -for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do - policy_count=$(kubectl get networkpolicy -n ${namespace} -o json | jq '.items | length') - if [ ${policy_count} -eq 0 ]; then - echo "fail: ${namespace}" - exit 1 - fi -done - -echo "pass" -``` - -**Audit Execution:** - -``` -./5.3.2.sh -``` - -**Expected result**: - -``` -'pass' is present -``` - -### 5.6 General Policies - -#### 5.6.4 The default namespace should not be used (Scored) - -**Result:** PASS - -**Remediation:** -Ensure that namespaces are created to allow for appropriate segregation of Kubernetes -resources and that all new resources are created in a specific namespace. - -**Audit Script:** 5.6.4.sh - -``` -#!/bin/bash -e - -export KUBECONFIG=${KUBECONFIG:-/root/.kube/config} - -kubectl version > /dev/null -if [[ $? 
-gt 0 ]]; then
-  echo "fail: kubectl failed"
-  exit 1
-fi
-
-default_resources=$(kubectl get all -o json | jq --compact-output '.items[] | select((.kind == "Service") and (.metadata.name == "kubernetes") and (.metadata.namespace == "default") | not)' | wc -l)
-
-echo "--count=${default_resources}"
-```
-
-**Audit Execution:**
-
-```
-./5.6.4.sh
-```
-
-**Expected result**:
-
-```
-'0' is equal to '0'
-```
diff --git a/content/rancher/v2.6/en/security/rancher-2.5/1.5-hardening-2.5/_index.md b/content/rancher/v2.6/en/security/rancher-2.5/1.5-hardening-2.5/_index.md
deleted file mode 100644
index 7f64bbbc6a9..00000000000
--- a/content/rancher/v2.6/en/security/rancher-2.5/1.5-hardening-2.5/_index.md
+++ /dev/null
@@ -1,720 +0,0 @@
----
-title: Hardening Guide with CIS 1.5 Benchmark
-weight: 200
----
-
-This document provides prescriptive guidance for hardening a production installation of an RKE cluster to be used with Rancher v2.5. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).
-
-> This hardening guide describes how to secure the nodes in your cluster, and it is recommended to follow this guide before installing Kubernetes.
-
-This hardening guide is intended to be used for RKE clusters and associated with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:
-
- Rancher Version | CIS Benchmark Version | Kubernetes Version
-----------------|-----------------------|------------------
- Rancher v2.5 | Benchmark v1.5 | Kubernetes 1.15
-
-[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.5/Rancher_Hardening_Guide_CIS_1.5.pdf)
-
-### Overview
-
-This document provides prescriptive guidance for hardening an RKE cluster to be used for installing Rancher v2.5 with Kubernetes v1.15 or provisioning an RKE cluster with Kubernetes 1.15 to be used within Rancher v2.5. It outlines the configurations required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).
-
-For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS 1.5 Benchmark - Self-Assessment Guide - Rancher v2.5]({{< baseurl >}}/rancher/v2.6/en/security/rancher-2.5/1.5-benchmark-2.5/).
-
-#### Known Issues
-
-- Rancher **exec shell** and **view logs** for pods are **not** functional in a CIS 1.5 hardened setup when only a public IP is provided when registering custom nodes. This functionality requires a private IP to be provided when registering the custom nodes.
-- When setting the `default_pod_security_policy_template_id:` to `restricted`, Rancher creates **RoleBindings** and **ClusterRoleBindings** on the default service accounts. The CIS 1.5 5.1.5 check requires that the default service accounts have no roles or cluster roles bound to them apart from the defaults. In addition, the default service accounts should be configured such that they do not provide a service account token and do not have any explicit rights assignments.
-
-### Configure Kernel Runtime Parameters
-
-The following `sysctl` configuration is recommended for all node types in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`:
-
-```
-vm.overcommit_memory=1
-vm.panic_on_oom=0
-kernel.panic=10
-kernel.panic_on_oops=1
-kernel.keys.root_maxbytes=25000000
-```
-
-Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.
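-
-To confirm the parameters are active after running `sysctl -p`, they can be read back with `sysctl`; a quick check, assuming the file above was applied as-is:
-
-```
-sysctl vm.overcommit_memory vm.panic_on_oom kernel.panic kernel.panic_on_oops kernel.keys.root_maxbytes
-```
-
-Each parameter should print the value configured in `/etc/sysctl.d/90-kubelet.conf` (for example, `vm.overcommit_memory = 1`).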
-
-### Configure `etcd` user and group
-A user account and group for the **etcd** service are required to be set up before installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation.
-
-#### Create `etcd` user and group
-To create the **etcd** user and group, run the following console commands.
-
-The commands below use `52034` for the **uid** and **gid** as an example. Any valid unused **uid** or **gid** could also be used in lieu of `52034`.
-
-```
-groupadd --gid 52034 etcd
-useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
-```
-
-Update the RKE **config.yml** with the **uid** and **gid** of the **etcd** user:
-
-``` yaml
-services:
-  etcd:
-    gid: 52034
-    uid: 52034
-```
-
-#### Set `automountServiceAccountToken` to `false` for `default` service accounts
-Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.
-
-For each namespace on a standard RKE install, including **default** and **kube-system**, the **default** service account must include this value:
-
-```
-automountServiceAccountToken: false
-```
-
-Save the following yaml to a file called `account_update.yaml`:
-
-``` yaml
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: default
-automountServiceAccountToken: false
-```
-
-Create a bash script file called `account_update.sh`. Be sure to `chmod +x account_update.sh` so the script has execute permissions.
-
-```
-#!/bin/bash -e
-
-for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
-  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
-done
-```
-
-Execute this script to patch the **default** service account in each namespace with `automountServiceAccountToken: false`.
-
-### Ensure that all Namespaces have Network Policies defined
-
-Running different applications on the same Kubernetes cluster creates a risk of one
-compromised application attacking a neighboring application. Network segmentation is
-important to ensure that containers can communicate only with those they are supposed
-to. A network policy is a specification of how selections of pods are allowed to
-communicate with each other and other network endpoints.
-
-Network Policies are namespace scoped. When a network policy is introduced to a given
-namespace, all traffic not allowed by the policy is denied. However, if there are no network
-policies in a namespace, all traffic will be allowed into and out of the pods in that
-namespace. To enforce network policies, a CNI (container network interface) plugin must be enabled.
-This guide uses [canal](https://github.com/projectcalico/canal) to provide the policy enforcement.
-Additional information about CNI providers can be found
-[here](https://rancher.com/blog/2019/2019-03-21-comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/).
-
-Once a CNI provider is enabled on a cluster, a default network policy can be applied. For reference purposes, a
-**permissive** example is provided below. 
If you want to allow all traffic to all pods in a namespace
-(even if policies are added that cause some pods to be treated as “isolated”),
-you can create a policy that explicitly allows all traffic in that namespace. Save the following `yaml` as
-`default-allow-all.yaml`. Additional [documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
-about network policies can be found on the Kubernetes site.
-
-> This `NetworkPolicy` is not recommended for production use.
-
-``` yaml
----
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: default-allow-all
-spec:
-  podSelector: {}
-  ingress:
-  - {}
-  egress:
-  - {}
-  policyTypes:
-  - Ingress
-  - Egress
-```
-
-Create a bash script file called `apply_networkPolicy_to_all_ns.sh`. Be sure to
-`chmod +x apply_networkPolicy_to_all_ns.sh` so the script has execute permissions.
-
-```
-#!/bin/bash -e
-
-for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
-  kubectl apply -f default-allow-all.yaml -n ${namespace}
-done
-```
-
-Execute this script to apply the **permissive** `default-allow-all.yaml` `NetworkPolicy` to all namespaces.
-
-### Reference Hardened RKE `cluster.yml` configuration
-
-The reference `cluster.yml` is used by the RKE CLI and provides the configuration needed to achieve a hardened install
-of Rancher Kubernetes Engine (RKE). The RKE install [documentation](https://rancher.com/docs/rke/latest/en/installation/)
-provides additional details about the configuration items. This reference `cluster.yml` does not include the required **nodes** directive, which will vary depending on your environment. Documentation for node configuration can be found here: https://rancher.com/docs/rke/latest/en/config-options/nodes
-
-``` yaml
-# If you intend to deploy Kubernetes in an air-gapped environment,
-# please consult the documentation on how to configure custom RKE images.
-kubernetes_version: "v1.15.9-rancher1-1" -enable_network_policy: true -default_pod_security_policy_template_id: "restricted" -# the nodes directive is required and will vary depending on your environment -# documentation for node configuration can be found here: -# https://rancher.com/docs/rke/latest/en/config-options/nodes -nodes: -services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - pod_security_policy: true - secrets_encryption_config: - enabled: true - audit_log: - enabled: true - admission_configuration: - event_rate_limit: - enabled: true - kube-controller: - extra_args: - feature-gates: "RotateKubeletServerCertificate=true" - scheduler: - image: "" - extra_args: {} - extra_binds: [] - extra_env: [] - kubelet: - generate_serving_certificate: true - extra_args: - feature-gates: "RotateKubeletServerCertificate=true" - protect-kernel-defaults: "true" - tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256" - extra_binds: [] - extra_env: [] - cluster_domain: "" - infra_container_image: "" - cluster_dns_server: "" - fail_swap_on: false - kubeproxy: - image: "" - extra_args: {} - extra_binds: [] - extra_env: [] -network: - plugin: "" - options: {} - mtu: 0 - node_selector: {} -authentication: - strategy: "" - sans: [] - webhook: null -addons: | - --- - apiVersion: v1 - kind: Namespace - metadata: - name: ingress-nginx - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: Role - metadata: - name: default-psp-role - namespace: ingress-nginx - rules: - - apiGroups: - - extensions - resourceNames: - - default-psp - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: RoleBinding - metadata: - name: default-psp-rolebinding - namespace: ingress-nginx - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: default-psp-role - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: v1 - kind: Namespace - metadata: - name: cattle-system - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: Role - metadata: - name: default-psp-role - namespace: cattle-system - rules: - - apiGroups: - - extensions - resourceNames: - - default-psp - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: RoleBinding - metadata: - name: default-psp-rolebinding - namespace: cattle-system - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: default-psp-role - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: policy/v1beta1 - kind: PodSecurityPolicy - metadata: - name: restricted - spec: - requiredDropCapabilities: - - NET_RAW - privileged: false - allowPrivilegeEscalation: false - defaultAllowPrivilegeEscalation: false - fsGroup: - rule: RunAsAny - runAsUser: - rule: MustRunAsNonRoot - seLinux: - rule: RunAsAny - supplementalGroups: - rule: RunAsAny - volumes: - - emptyDir - - secret - - persistentVolumeClaim - - downwardAPI - - configMap - - projected - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: 
-    name: psp:restricted
-  rules:
-  - apiGroups:
-    - extensions
-    resourceNames:
-    - restricted
-    resources:
-    - podsecuritypolicies
-    verbs:
-    - use
-  ---
-  apiVersion: rbac.authorization.k8s.io/v1
-  kind: ClusterRoleBinding
-  metadata:
-    name: psp:restricted
-  roleRef:
-    apiGroup: rbac.authorization.k8s.io
-    kind: ClusterRole
-    name: psp:restricted
-  subjects:
-  - apiGroup: rbac.authorization.k8s.io
-    kind: Group
-    name: system:serviceaccounts
-  - apiGroup: rbac.authorization.k8s.io
-    kind: Group
-    name: system:authenticated
-  ---
-  apiVersion: v1
-  kind: ServiceAccount
-  metadata:
-    name: tiller
-    namespace: kube-system
-  ---
-  apiVersion: rbac.authorization.k8s.io/v1
-  kind: ClusterRoleBinding
-  metadata:
-    name: tiller
-  roleRef:
-    apiGroup: rbac.authorization.k8s.io
-    kind: ClusterRole
-    name: cluster-admin
-  subjects:
-  - kind: ServiceAccount
-    name: tiller
-    namespace: kube-system
-
-addons_include: []
-system_images:
-  etcd: ""
-  alpine: ""
-  nginx_proxy: ""
-  cert_downloader: ""
-  kubernetes_services_sidecar: ""
-  kubedns: ""
-  dnsmasq: ""
-  kubedns_sidecar: ""
-  kubedns_autoscaler: ""
-  coredns: ""
-  coredns_autoscaler: ""
-  kubernetes: ""
-  flannel: ""
-  flannel_cni: ""
-  calico_node: ""
-  calico_cni: ""
-  calico_controllers: ""
-  calico_ctl: ""
-  calico_flexvol: ""
-  canal_node: ""
-  canal_cni: ""
-  canal_flannel: ""
-  canal_flexvol: ""
-  weave_node: ""
-  weave_cni: ""
-  pod_infra_container: ""
-  ingress: ""
-  ingress_backend: ""
-  metrics_server: ""
-  windows_pod_infra_container: ""
-ssh_key_path: ""
-ssh_cert_path: ""
-ssh_agent_auth: false
-authorization:
-  mode: ""
-  options: {}
-ignore_docker_version: false
-private_registries: []
-ingress:
-  provider: ""
-  options: {}
-  node_selector: {}
-  extra_args: {}
-  dns_policy: ""
-  extra_envs: []
-  extra_volumes: []
-  extra_volume_mounts: []
-cluster_name: ""
-prefix_path: ""
-addon_job_timeout: 0
-bastion_host:
-  address: ""
-  port: ""
-  user: ""
-  ssh_key: ""
-  ssh_key_path: ""
-  ssh_cert: ""
-  ssh_cert_path: ""
-monitoring:
-  provider: ""
-  options: {}
-  node_selector: {}
-restore:
-  restore: false
-  snapshot_name: ""
-dns: null
-```
-
-### Reference Hardened RKE Template configuration
-
-The reference RKE Template provides the configuration needed to achieve a hardened install of Kubernetes.
-RKE Templates are used to provision Kubernetes and define Rancher settings. Follow the Rancher
-[documentation](https://rancher.com/docs/rancher/v2.6/en/installation) for additional installation and RKE Template details.
- -``` yaml -# -# Cluster Config -# -default_pod_security_policy_template_id: restricted -docker_root_dir: /var/lib/docker -enable_cluster_alerting: false -enable_cluster_monitoring: false -enable_network_policy: true -# -# Rancher Config -# -rancher_kubernetes_engine_config: - addon_job_timeout: 30 - addons: |- - --- - apiVersion: v1 - kind: Namespace - metadata: - name: ingress-nginx - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: Role - metadata: - name: default-psp-role - namespace: ingress-nginx - rules: - - apiGroups: - - extensions - resourceNames: - - default-psp - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: RoleBinding - metadata: - name: default-psp-rolebinding - namespace: ingress-nginx - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: default-psp-role - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: v1 - kind: Namespace - metadata: - name: cattle-system - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: Role - metadata: - name: default-psp-role - namespace: cattle-system - rules: - - apiGroups: - - extensions - resourceNames: - - default-psp - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: RoleBinding - metadata: - name: default-psp-rolebinding - namespace: cattle-system - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: default-psp-role - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: policy/v1beta1 - kind: PodSecurityPolicy - metadata: - name: restricted - spec: - requiredDropCapabilities: - - NET_RAW - privileged: false - allowPrivilegeEscalation: false - defaultAllowPrivilegeEscalation: false - fsGroup: - rule: RunAsAny - runAsUser: - rule: MustRunAsNonRoot - seLinux: - rule: RunAsAny - supplementalGroups: - rule: RunAsAny - volumes: - - emptyDir - - secret - - persistentVolumeClaim - - downwardAPI - - configMap - - projected - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: psp:restricted - rules: - - apiGroups: - - extensions - resourceNames: - - restricted - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: psp:restricted - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: psp:restricted - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: tiller - namespace: kube-system - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: tiller - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-admin - subjects: - - kind: ServiceAccount - name: tiller - namespace: kube-system - ignore_docker_version: true - kubernetes_version: v1.15.9-rancher1-1 -# -# If you are using calico on AWS -# -# network: -# plugin: calico -# calico_network_provider: -# cloud_provider: aws -# -# # To specify flannel interface -# -# network: -# plugin: flannel -# flannel_network_provider: -# iface: eth1 -# -# # To 
specify flannel interface for canal plugin -# -# network: -# plugin: canal -# canal_network_provider: -# iface: eth1 -# - network: - mtu: 0 - plugin: canal -# -# services: -# kube-api: -# service_cluster_ip_range: 10.43.0.0/16 -# kube-controller: -# cluster_cidr: 10.42.0.0/16 -# service_cluster_ip_range: 10.43.0.0/16 -# kubelet: -# cluster_domain: cluster.local -# cluster_dns_server: 10.43.0.10 -# - services: - etcd: - backup_config: - enabled: false - interval_hours: 12 - retention: 6 - safe_timestamp: false - creation: 12h - extra_args: - election-timeout: '5000' - heartbeat-interval: '500' - gid: 52034 - retention: 72h - snapshot: false - uid: 52034 - kube_api: - always_pull_images: false - audit_log: - enabled: true - event_rate_limit: - enabled: true - pod_security_policy: true - secrets_encryption_config: - enabled: true - service_node_port_range: 30000-32767 - kube_controller: - extra_args: - address: 127.0.0.1 - feature-gates: RotateKubeletServerCertificate=true - profiling: 'false' - terminated-pod-gc-threshold: '1000' - kubelet: - extra_args: - anonymous-auth: 'false' - event-qps: '0' - feature-gates: RotateKubeletServerCertificate=true - make-iptables-util-chains: 'true' - protect-kernel-defaults: 'true' - streaming-connection-idle-timeout: 1800s - tls-cipher-suites: >- - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - fail_swap_on: false - generate_serving_certificate: true - scheduler: - extra_args: - address: 127.0.0.1 - profiling: 'false' - ssh_agent_auth: false -windows_prefered_cluster: false -``` - -### Hardened Reference Ubuntu 18.04 LTS **cloud-config**: - -The reference **cloud-config** is generally used in cloud infrastructure environments to allow for -configuration management of compute instances. The reference config configures Ubuntu operating system level settings -needed before installing kubernetes. - -``` yaml -#cloud-config -packages: - - curl - - jq -runcmd: - - sysctl -w vm.overcommit_memory=1 - - sysctl -w kernel.panic=10 - - sysctl -w kernel.panic_on_oops=1 - - curl https://releases.rancher.com/install-docker/18.09.sh | sh - - usermod -aG docker ubuntu - - return=1; while [ $return != 0 ]; do sleep 2; docker ps; return=$?; done - - addgroup --gid 52034 etcd - - useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd -write_files: - - path: /etc/sysctl.d/kubelet.conf - owner: root:root - permissions: "0644" - content: | - vm.overcommit_memory=1 - kernel.panic=10 - kernel.panic_on_oops=1 -``` diff --git a/content/rancher/v2.6/en/security/rancher-2.5/1.6-benchmark-2.5/_index.md b/content/rancher/v2.6/en/security/rancher-2.5/1.6-benchmark-2.5/_index.md deleted file mode 100644 index d7803779eb6..00000000000 --- a/content/rancher/v2.6/en/security/rancher-2.5/1.6-benchmark-2.5/_index.md +++ /dev/null @@ -1,3317 +0,0 @@ ---- -title: CIS 1.6 Benchmark - Self-Assessment Guide - Rancher v2.5.4 -weight: 101 ---- - -### CIS 1.6 Kubernetes Benchmark - Rancher v2.5.4 with Kubernetes v1.18 - -[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.5/Rancher_1.6_Benchmark_Assessment.pdf) - -#### Overview - -This document is a companion to the Rancher v2.5.4 security hardening guide. 
The hardening guide provides prescriptive guidance for hardening a production installation of Rancher, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the benchmark.
-
-This guide corresponds to specific versions of the hardening guide, Rancher, CIS Benchmark, and Kubernetes:
-
-Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version
----------------------------|----------|---------|-------
-Hardening Guide with CIS 1.6 Benchmark | Rancher v2.5.4 | CIS 1.6 | Kubernetes v1.18
-
-Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply and will have a result of `Not Applicable`. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher-created clusters.
-
-This document is to be used by Rancher operators, security teams, auditors and decision makers.
-
-For more detail about each audit, including rationales and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark 1.6. You can download the benchmark after logging in to [CISecurity.org](https://www.cisecurity.org/benchmark/kubernetes/).
-
-#### Testing controls methodology
-
-Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files.
-
-Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher Labs are provided for testing.
-When performing the tests, you will need access to the Docker command line on the hosts of all three RKE roles. The commands also make use of the [jq](https://stedolan.github.io/jq/) and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) (with valid config) tools, which are required for testing and evaluating the results.
-
-### Controls
-
-## 1.1 Etcd Node Configuration Files
-### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the below command:
-ps -ef | grep etcd
-Run the below command (based on the etcd data directory found above). For example,
-chmod 700 /var/lib/etcd
-
-
-**Audit:**
-
-```bash
-stat -c %a /node/var/lib/etcd
-```
-
-**Expected Result**:
-
-```console
-'700' is equal to '700'
-```
-
-**Returned Value**:
-
-```console
-700
-
-```
-### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
-
-**Result:** pass
-
-**Remediation:**
-On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
-from the below command:
-ps -ef | grep etcd
-Run the below command (based on the etcd data directory found above).
-For example, chown etcd:etcd /var/lib/etcd
-
-A system service account is required for etcd data directory ownership.
-Refer to Rancher's hardening guide for more details on how to configure this ownership.
- - -**Audit:** - -```bash -stat -c %U:%G /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'etcd:etcd' is present -``` - -**Returned Value**: - -```console -etcd:etcd - -``` -### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. -For example, -chown -R root:root /etc/kubernetes/pki/ - - -**Audit:** - -```bash -check_files_owner_in_dir.sh /node/etc/kubernetes/ssl -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Audit Script:** -```bash -#!/usr/bin/env bash - -# This script is used to ensure the owner is set to root:root for -# the given directory and all the files in it -# -# inputs: -# $1 = /full/path/to/directory -# -# outputs: -# true/false - -INPUT_DIR=$1 - -if [[ "${INPUT_DIR}" == "" ]]; then - echo "false" - exit -fi - -if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then - echo "false" - exit -fi - -statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) -while read -r statInfoLine; do - f=$(echo ${statInfoLine} | cut -d' ' -f1) - p=$(echo ${statInfoLine} | cut -d' ' -f2) - - if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then - if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then - echo "false" - exit - fi - else - if [[ "$p" != "root:root" ]]; then - echo "false" - exit - fi - fi -done <<< "${statInfoLines}" - - -echo "true" -exit - -``` -**Returned Value**: - -```console -true - -``` -### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Automated) - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. -For example, -chmod -R 644 /etc/kubernetes/pki/*.crt - - -**Audit:** - -```bash -check_files_permissions.sh /node/etc/kubernetes/ssl/!(*key).pem -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Audit Script:** -```bash -#!/usr/bin/env bash - -# This script is used to ensure the file permissions are set to 644 or -# more restrictive for all files in a given directory or a wildcard -# selection of files -# -# inputs: -# $1 = /full/path/to/directory or /path/to/fileswithpattern -# ex: !(*key).pem -# -# $2 (optional) = permission (ex: 600) -# -# outputs: -# true/false - -# Turn on "extended glob" for use of '!' in wildcard -shopt -s extglob - -# Turn off history to avoid surprises when using '!' -set -H - -USER_INPUT=$1 - -if [[ "${USER_INPUT}" == "" ]]; then - echo "false" - exit -fi - - -if [[ -d ${USER_INPUT} ]]; then - PATTERN="${USER_INPUT}/*" -else - PATTERN="${USER_INPUT}" -fi - -PERMISSION="" -if [[ "$2" != "" ]]; then - PERMISSION=$2 -fi - -FILES_PERMISSIONS=$(stat -c %n\ %a ${PATTERN}) - -while read -r fileInfo; do - p=$(echo ${fileInfo} | cut -d' ' -f2) - - if [[ "${PERMISSION}" != "" ]]; then - if [[ "$p" != "${PERMISSION}" ]]; then - echo "false" - exit - fi - else - if [[ "$p" != "644" && "$p" != "640" && "$p" != "600" ]]; then - echo "false" - exit - fi - fi -done <<< "${FILES_PERMISSIONS}" - - -echo "true" -exit - -``` -**Returned Value**: - -```console -true - -``` -### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated) - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. 
-For example, -chmod -R 600 /etc/kubernetes/ssl/*key.pem - - -**Audit:** - -```bash -check_files_permissions.sh /node/etc/kubernetes/ssl/*key.pem 600 -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Audit Script:** -```bash -#!/usr/bin/env bash - -# This script is used to ensure the file permissions are set to 644 or -# more restrictive for all files in a given directory or a wildcard -# selection of files -# -# inputs: -# $1 = /full/path/to/directory or /path/to/fileswithpattern -# ex: !(*key).pem -# -# $2 (optional) = permission (ex: 600) -# -# outputs: -# true/false - -# Turn on "extended glob" for use of '!' in wildcard -shopt -s extglob - -# Turn off history to avoid surprises when using '!' -set -H - -USER_INPUT=$1 - -if [[ "${USER_INPUT}" == "" ]]; then - echo "false" - exit -fi - - -if [[ -d ${USER_INPUT} ]]; then - PATTERN="${USER_INPUT}/*" -else - PATTERN="${USER_INPUT}" -fi - -PERMISSION="" -if [[ "$2" != "" ]]; then - PERMISSION=$2 -fi - -FILES_PERMISSIONS=$(stat -c %n\ %a ${PATTERN}) - -while read -r fileInfo; do - p=$(echo ${fileInfo} | cut -d' ' -f2) - - if [[ "${PERMISSION}" != "" ]]; then - if [[ "$p" != "${PERMISSION}" ]]; then - echo "false" - exit - fi - else - if [[ "$p" != "644" && "$p" != "640" && "$p" != "600" ]]; then - echo "false" - exit - fi - fi -done <<< "${FILES_PERMISSIONS}" - - -echo "true" -exit - -``` -**Returned Value**: - -```console -true - -``` -### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/kube-apiserver.yaml; then stat -c permissions=%a /etc/kubernetes/manifests/kube-apiserver.yaml; fi' -``` - - -### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/kube-apiserver.yaml; then stat -c %U:%G /etc/kubernetes/manifests/kube-apiserver.yaml; fi' -``` - - -### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/kube-controller-manager.yaml; then stat -c permissions=%a /etc/kubernetes/manifests/kube-controller-manager.yaml; fi' -``` - - -### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. 
- - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/kube-controller-manager.yaml; then stat -c %U:%G /etc/kubernetes/manifests/kube-controller-manager.yaml; fi' -``` - - -### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/kube-scheduler.yaml; then stat -c permissions=%a /etc/kubernetes/manifests/kube-scheduler.yaml; fi' -``` - - -### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/kube-scheduler.yaml; then stat -c %U:%G /etc/kubernetes/manifests/kube-scheduler.yaml; fi' -``` - - -### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/etcd.yaml; then stat -c permissions=%a /etc/kubernetes/manifests/etcd.yaml; fi' -``` - - -### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/etcd.yaml; then stat -c %U:%G /etc/kubernetes/manifests/etcd.yaml; fi' -``` - - -### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual) - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. -For example, -chmod 644 - - -**Audit:** - -```bash -stat -c permissions=%a -``` - - -### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual) - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. -For example, -chown root:root - - -**Audit:** - -```bash -stat -c %U:%G -``` - - -### 1.1.13 Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/admin.conf; then stat -c permissions=%a /etc/kubernetes/admin.conf; fi' -``` - - -### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. 
- - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/admin.conf; then stat -c %U:%G /etc/kubernetes/admin.conf; fi' -``` - - -### 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e scheduler; then stat -c permissions=%a scheduler; fi' -``` - - -### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e scheduler; then stat -c %U:%G scheduler; fi' -``` - - -### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e controllermanager; then stat -c permissions=%a controllermanager; fi' -``` - - -### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e controllermanager; then stat -c %U:%G controllermanager; fi' -``` - - -## 1.2 API Server -### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the below parameter. ---anonymous-auth=false - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'false' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.2 Ensure that the --basic-auth-file argument is not set (Automated) - -**Result:** pass - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and remove the --basic-auth-file= parameter. - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--basic-auth-file' is not present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.3 Ensure that the --token-auth-file parameter is not set (Automated) - -**Result:** pass - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and remove the --token-auth-file= parameter. - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--token-auth-file' is not present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and remove the --kubelet-https parameter. - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--kubelet-https' is not present OR '--kubelet-https' is not present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the -apiserver and kubelets. Then, edit API server pod specification file -/etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the -kubelet client certificate and key parameters as below. ---kubelet-client-certificate= ---kubelet-client-key= - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection between
the apiserver and kubelets. Then, edit the API server pod specification file
/etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the
--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--kubelet-certificate-authority' is present
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --authorization-mode parameter to values other than AlwaysAllow.
For example:
--authorization-mode=RBAC

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'Node,RBAC' not have 'AlwaysAllow'
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```
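
This check and the next two all parse the same `--authorization-mode` value out of the process listing above. As a convenience when auditing by hand (a sketch, not part of the benchmark tooling), a pipeline along these lines isolates a single flag's value; the flag name here is just an example:

```bash
# Print the value of one kube-apiserver flag (--authorization-mode here).
# tr splits the command line into one argument per line so grep can anchor
# on the flag name; expect output like "Node,RBAC" on this cluster.
ps -ef | grep kube-apiserver | grep -v grep | tr ' ' '\n' \
  | grep -- '^--authorization-mode=' | cut -d= -f2
```

Swapping in any other flag name (for example `--profiling` or `--insecure-port`) gives the same kind of spot-check for the later sections.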

### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'Node,RBAC' has 'Node'
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the --authorization-mode parameter to a value that includes RBAC, -for example: ---authorization-mode=Node,RBAC - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'Node,RBAC' has 'RBAC' -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Automated) - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set the desired limits in a configuration file. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -and set the below parameters. ---enable-admission-plugins=...,EventRateLimit,... ---admission-control-config-file= - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'EventRateLimit' -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and either remove the --enable-admission-plugins parameter, or set it to a -value that does not include AlwaysAdmit. - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console - 'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)

**Result:** warn

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --enable-admission-plugins parameter to include
AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)

**Result:** warn

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --enable-admission-plugins parameter to include
SecurityContextDeny, unless PodSecurityPolicy is already in place.
--enable-admission-plugins=...,SecurityContextDeny,...
A quick way to check which plugins are currently enabled is shown in the sketch below.
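
For both of these manual checks, it helps to confirm what the running API server already enables before editing the manifest. A small helper sketch (assuming the kube-apiserver process is visible to `ps`, as in the audits in this guide; the plugin name is an example argument):

```bash
#!/bin/sh
# Report whether a given admission plugin appears in --enable-admission-plugins.
# Usage: ./check-plugin.sh AlwaysPullImages   (defaults to SecurityContextDeny)
plugin="${1:-SecurityContextDeny}"
enabled=$(ps -ef | grep kube-apiserver | grep -v grep | tr ' ' '\n' \
  | grep '^--enable-admission-plugins=' | cut -d= -f2)
# Wrap the list in commas so the match is exact, not a substring of another name.
case ",${enabled}," in
  *",${plugin},"*) echo "${plugin}: enabled" ;;
  *)               echo "${plugin}: NOT enabled" ;;
esac
```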

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)

**Result:** pass

**Remediation:**
Follow the documentation and create ServiceAccount objects as per your environment.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and ensure that the --disable-admission-plugins parameter is set to a
value that does not include ServiceAccount.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--disable-admission-plugins' is not present OR '--disable-admission-plugins' is not present
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --disable-admission-plugins parameter to ensure that it
does not include NamespaceLifecycle.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--disable-admission-plugins' is not present OR '--disable-admission-plugins' is not present
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.16 Ensure that the admission control plugin PodSecurityPolicy is set (Automated)

**Result:** pass

**Remediation:**
Follow the documentation and create Pod Security Policy objects as per your environment.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --enable-admission-plugins parameter to a
value that includes PodSecurityPolicy:
--enable-admission-plugins=...,PodSecurityPolicy,...
Then restart the API Server. A minimal illustrative policy is sketched below.
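
For illustration only, a policy of this shape could be created before the plugin is enabled; the `example-restricted` name and the specific constraints are placeholders to adapt to your environment. At least one policy must exist before enabling PodSecurityPolicy, or no new pods will be admitted:

```bash
# Illustrative only: a minimal restrictive PodSecurityPolicy (policy/v1beta1,
# matching the API version this cluster enables via --runtime-config).
kubectl apply -f - <<'EOF'
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example-restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: MustRunAsNonRoot
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
EOF
```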

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'PodSecurityPolicy'
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.17 Ensure that the admission control plugin NodeRestriction is set (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and configure the NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --enable-admission-plugins parameter to a
value that includes NodeRestriction.
--enable-admission-plugins=...,NodeRestriction,...
A rough check of both prerequisites is sketched below.
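
NodeRestriction only takes effect when the plugin is enabled and the Node authorizer is active. A rough spot-check of both settings against the running process (a convenience sketch, not part of the benchmark):

```bash
# Both prerequisites show up in the kube-apiserver process listing:
# the NodeRestriction plugin and "Node" in --authorization-mode.
args=$(ps -ef | grep kube-apiserver | grep -v grep)
echo "$args" | grep -q 'enable-admission-plugins=[^ ]*NodeRestriction' &&
echo "$args" | grep -q 'authorization-mode=[^ ]*Node' &&
echo "NodeRestriction prerequisites look satisfied"
```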
- - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'NodeRestriction' -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.18 Ensure that the --insecure-bind-address argument is not set (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and remove the --insecure-bind-address parameter. - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--insecure-bind-address' is not present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.19 Ensure that the --insecure-port argument is set to 0 (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the below parameter.
--insecure-port=0

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'0' is equal to '0'
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```
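
Beyond the flag checks in this section and the next, you can probe the ports themselves. A quick sketch, assuming the default secure port 6443 (as on this cluster) and the legacy insecure port 8080, and that `ss` and `curl` are available on the node:

```bash
# Nothing should listen on the legacy insecure port; the secure port should
# answer over TLS. With --anonymous-auth=false a 401 body from /version is
# expected and still proves the TLS endpoint is serving.
ss -tlnp | grep ':8080 ' || echo 'no listener on 8080 (expected)'
curl -sk https://127.0.0.1:6443/version
```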

### 1.2.20 Ensure that the --secure-port argument is not set to 0 (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and either remove the --secure-port parameter or
set it to a different (non-zero) desired port.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
6443 is greater than 0 OR '--secure-port' is not present
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.21 Ensure that the --profiling argument is set to false (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the below parameter. ---profiling=false - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'false' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.22 Ensure that the --audit-log-path argument is set (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the --audit-log-path parameter to a suitable path and -file where you would like audit logs to be written, for example: ---audit-log-path=/var/log/apiserver/audit.log - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-log-path' is present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.23 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the --audit-log-maxage parameter to 30 or as an appropriate number of days: ---audit-log-maxage=30 - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -30 is greater or equal to 30 -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.24 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the --audit-log-maxbackup parameter to 10 or to an appropriate -value. ---audit-log-maxbackup=10 - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -10 is greater or equal to 10 -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.25 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the --audit-log-maxsize parameter to an appropriate size in MB. -For example, to set it as 100 MB: ---audit-log-maxsize=100 - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -100 is greater or equal to 100 -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.26 Ensure that the --request-timeout argument is set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -and set the below parameter as appropriate and if needed. -For example, ---request-timeout=300s - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--request-timeout' is not present OR '--request-timeout' is not present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.27 Ensure that the --service-account-lookup argument is set to true (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the below parameter. ---service-account-lookup=true -Alternatively, you can delete the --service-account-lookup parameter from this file so -that the default takes effect. - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--service-account-lookup' is not present OR 'true' is equal to 'true' -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.28 Ensure that the --service-account-key-file argument is set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the --service-account-key-file parameter -to the public key file for service accounts: ---service-account-key-file= - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--service-account-key-file' is present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.29 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the etcd certificate and key file parameters. ---etcd-certfile= ---etcd-keyfile= - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--etcd-certfile' is present AND '--etcd-keyfile' is present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.30 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection on the apiserver. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the TLS certificate and private key file parameters. ---tls-cert-file= ---tls-private-key-file= - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--tls-cert-file' is present AND '--tls-private-key-file' is present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.31 Ensure that the --client-ca-file argument is set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection on the apiserver. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the client certificate authority file. ---client-ca-file= - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--client-ca-file' is present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.32 Ensure that the --etcd-cafile argument is set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the etcd certificate authority file parameter. ---etcd-cafile= - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--etcd-cafile' is present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.33 Ensure that the --encryption-provider-config argument is set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and configure an EncryptionConfig file. -Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the --encryption-provider-config parameter to the path of that file: --encryption-provider-config= -A minimal sketch of such a file is shown below.
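For orientation, here is a minimal sketch of what such an EncryptionConfig file could look like when using the `aescbc` provider. The target path matches the `--encryption-provider-config` value visible in the process output above; the key name and the key-generation step are illustrative assumptions, not values taken from this cluster.

```bash
# Generate a fresh 32-byte key and write a minimal EncryptionConfiguration.
# Run on the master node; back up any existing file before overwriting it.
ENC_KEY=$(head -c 32 /dev/urandom | base64)
cat > /etc/kubernetes/ssl/encryption.yaml <<EOF
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENC_KEY}
      - identity: {}
EOF
```

The API server must be restarted before changes to this file take effect.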
- -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--encryption-provider-config' is present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.34 Ensure that encryption providers are appropriately configured (Automated) - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and configure an EncryptionConfig file. -In this file, choose aescbc, kms or secretbox as the encryption provider. - - -**Audit:** - -```bash -check_encryption_provider_config.sh aescbc kms secretbox -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Audit Script:** -```bash -#!/usr/bin/env bash - -# This script is used to check that the encryption provider config is set to aescbc -# -# outputs: -# true/false - -# TODO: Figure out the file location from the kube-apiserver commandline args -ENCRYPTION_CONFIG_FILE="/node/etc/kubernetes/ssl/encryption.yaml" - -if [[ ! -f "${ENCRYPTION_CONFIG_FILE}" ]]; then - echo "false" - exit -fi - -for provider in "$@" -do - if grep "$provider" "${ENCRYPTION_CONFIG_FILE}"; then - echo "true" - exit - fi -done - -echo "false" -exit - -``` -**Returned Value**: - -```console - - aescbc: -true - -```
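Beyond this grep-based audit, you can spot-check that secrets really are stored encrypted. The sketch below is an optional extra, not part of the benchmark: it assumes `etcdctl` (v3 API) and `hexdump` are available on the node and that a secret named `secret1` exists in the `default` namespace; the endpoint and certificate paths reuse the API server's `--etcd-servers`, `--etcd-certfile`, `--etcd-keyfile`, and `--etcd-cafile` values shown above.

```bash
# Read one secret straight from etcd; with aescbc active the stored payload
# begins with "k8s:enc:aescbc:v1:" instead of plaintext protobuf.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.1.225:2379 \
  --cacert=/etc/kubernetes/ssl/kube-ca.pem \
  --cert=/etc/kubernetes/ssl/kube-node.pem \
  --key=/etc/kubernetes/ssl/kube-node-key.pem \
  get /registry/secrets/default/secret1 | hexdump -C | head -4
```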
-f "${ENCRYPTION_CONFIG_FILE}" ]]; then - echo "false" - exit -fi - -for provider in "$@" -do - if grep "$provider" "${ENCRYPTION_CONFIG_FILE}"; then - echo "true" - exit - fi -done - -echo "false" -exit - -``` -**Returned Value**: - -```console - - aescbc: -true - -``` -### 1.2.35 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Automated) - -**Result:** warn - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the below parameter. ---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM -_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM -_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM -_SHA384 - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - - -## 1.3 Controller Manager -### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the master node and set the --terminated-pod-gc-threshold to an appropriate threshold, -for example: ---terminated-pod-gc-threshold=10 - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--terminated-pod-gc-threshold' is present -``` - -**Returned Value**: - -```console -root 4788 4773 4 16:16 ? 00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=0.0.0.0 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true - -``` -### 1.3.2 Ensure that the --profiling argument is set to false (Automated) - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the master node and set the below parameter. ---profiling=false - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'false' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4788 4773 4 16:16 ? 
- -## 1.3 Controller Manager -### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the master node and set the --terminated-pod-gc-threshold to an appropriate threshold, -for example: ---terminated-pod-gc-threshold=10 - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--terminated-pod-gc-threshold' is present -``` - -**Returned Value**: - -```console -root 4788 4773 4 16:16 ? 00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=0.0.0.0 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true - -``` -### 1.3.2 Ensure that the --profiling argument is set to false (Automated) - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the master node and set the below parameter. ---profiling=false - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'false' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4788 4773 4 16:16 ? 00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=0.0.0.0 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true - -``` -### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated) - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the master node and set the below parameter. ---use-service-account-credentials=true - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'true' is not equal to 'false' -``` - -**Returned Value**: - -```console -root 4788 4773 4 16:16 ? 00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=0.0.0.0 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true - -``` -### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the master node and set the --service-account-private-key-file parameter -to the private key file for service accounts. ---service-account-private-key-file= - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--service-account-private-key-file' is present -``` - -**Returned Value**: - -```console -root 4788 4773 4 16:16 ? 00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=0.0.0.0 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true - -```
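As an optional extra (not part of the benchmark), you can confirm that the configured signing key is a well-formed RSA private key; the path is the `--service-account-private-key-file` value from the process output above:

```bash
# "RSA key ok" and a zero exit status indicate a valid signing key.
openssl rsa -check -noout \
  -in /etc/kubernetes/ssl/kube-service-account-token-key.pem
```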
### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the master node and set the --root-ca-file parameter to the certificate bundle file. ---root-ca-file= - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--root-ca-file' is present -``` - -**Returned Value**: - -```console -root 4788 4773 4 16:16 ? 00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=0.0.0.0 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true - -``` -### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) - -**Result:** notApplicable - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the master node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true. ---feature-gates=RotateKubeletServerCertificate=true - -Clusters provisioned by RKE handle certificate rotation directly through RKE. - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - - -### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - -**Result:** pass - -**Remediation:** -Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml -on the master node and ensure the correct value for the --bind-address parameter. - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-controller-manager | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is not present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4788 4773 4 16:16 ?
00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=0.0.0.0 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true - -``` -## 1.4 Scheduler -### 1.4.1 Ensure that the --profiling argument is set to false (Automated) - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file -on the master node and set the below parameter. ---profiling=false - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'false' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4947 4930 1 16:16 ? 00:00:02 kube-scheduler --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --address=0.0.0.0 - -``` -### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated) - -**Result:** pass - -**Remediation:** -Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml -on the master node and ensure the correct value for the --bind-address parameter - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected Result**: - -```console -'--bind-address' is not present OR '--bind-address' is not present -``` - -**Returned Value**: - -```console -root 4947 4930 1 16:16 ? 00:00:02 kube-scheduler --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --address=0.0.0.0 - -``` -## 2 Etcd Node Configuration Files -### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure TLS encryption. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml -on the master node and set the below parameters. ---cert-file= ---key-file= - - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--cert-file' is present AND '--key-file' is present -``` - -**Returned Value**: - -```console -etcd 4318 4301 6 16:15 ? 
00:00:14 /usr/local/bin/etcd --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --advertise-client-urls=https://192.168.1.225:2379,https://192.168.1.225:4001 --election-timeout=5000 --data-dir=/var/lib/rancher/etcd/ --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --enable-v2=true --initial-cluster=etcd-cis-aio-0=https://192.168.1.225:2380 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --initial-cluster-token=etcd-cluster-1 --name=etcd-cis-aio-0 --listen-client-urls=https://0.0.0.0:2379 --peer-key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem --peer-client-cert-auth=true --initial-advertise-peer-urls=https://192.168.1.225:2380 --initial-cluster-state=new --key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem -root 4366 4349 0 16:15 ? 00:00:00 /opt/rke-tools/rke-etcd-backup etcd-backup save --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-node.pem --key /etc/kubernetes/ssl/kube-node-key.pem --name etcd-rolling-snapshots --endpoints=192.168.1.225:2379 --retention=72h --creation=12h -root 4643 4626 23 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml 
--requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User -root 14998 14985 0 16:19 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=5 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.6-hardened --json --log_dir /tmp/results/logs --outputfile /tmp/results/etcd.json - -``` -### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated) - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---client-cert-auth="true" - - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--client-cert-auth' is present OR 'true' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 4318 4301 6 16:15 ? 00:00:14 /usr/local/bin/etcd --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --advertise-client-urls=https://192.168.1.225:2379,https://192.168.1.225:4001 --election-timeout=5000 --data-dir=/var/lib/rancher/etcd/ --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --enable-v2=true --initial-cluster=etcd-cis-aio-0=https://192.168.1.225:2380 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --initial-cluster-token=etcd-cluster-1 --name=etcd-cis-aio-0 --listen-client-urls=https://0.0.0.0:2379 --peer-key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem --peer-client-cert-auth=true --initial-advertise-peer-urls=https://192.168.1.225:2380 --initial-cluster-state=new --key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem -root 4366 4349 0 16:15 ? 00:00:00 /opt/rke-tools/rke-etcd-backup etcd-backup save --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-node.pem --key /etc/kubernetes/ssl/kube-node-key.pem --name etcd-rolling-snapshots --endpoints=192.168.1.225:2379 --retention=72h --creation=12h -root 4643 4626 23 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User -root 14998 14985 0 16:19 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=5 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.6-hardened --json --log_dir /tmp/results/logs --outputfile /tmp/results/etcd.json - -``` -### 2.3 Ensure that the --auto-tls argument is not set to true (Automated) - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --auto-tls parameter or set it to false. - --auto-tls=false - - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--auto-tls' is not present OR '--auto-tls' is not present -``` - -**Returned Value**: - -```console -etcd 4318 4301 6 16:15 ? 
00:00:14 /usr/local/bin/etcd --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --advertise-client-urls=https://192.168.1.225:2379,https://192.168.1.225:4001 --election-timeout=5000 --data-dir=/var/lib/rancher/etcd/ --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --enable-v2=true --initial-cluster=etcd-cis-aio-0=https://192.168.1.225:2380 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --initial-cluster-token=etcd-cluster-1 --name=etcd-cis-aio-0 --listen-client-urls=https://0.0.0.0:2379 --peer-key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem --peer-client-cert-auth=true --initial-advertise-peer-urls=https://192.168.1.225:2380 --initial-cluster-state=new --key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem -root 4366 4349 0 16:15 ? 00:00:00 /opt/rke-tools/rke-etcd-backup etcd-backup save --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-node.pem --key /etc/kubernetes/ssl/kube-node-key.pem --name etcd-rolling-snapshots --endpoints=192.168.1.225:2379 --retention=72h --creation=12h -root 4643 4626 23 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml 
--requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User -root 14998 14985 0 16:19 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=5 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.6-hardened --json --log_dir /tmp/results/logs --outputfile /tmp/results/etcd.json - -``` -### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Follow the etcd service documentation and configure peer TLS encryption as appropriate -for your etcd cluster. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the -master node and set the below parameters. ---peer-cert-file= ---peer-key-file= -A peer-TLS connectivity sketch is shown below.
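To see peer TLS (together with the peer client-certificate requirement covered in check 2.5) in action, the sketch below performs a handshake against the etcd peer port using the peer certificate, key, and CA paths reported in this node's process output; it assumes `openssl` is available on the node:

```bash
# Handshake against the etcd peer listener (2380). With peer TLS and
# --peer-client-cert-auth=true, verification succeeds only when a
# CA-signed client certificate is presented.
openssl s_client -connect 192.168.1.225:2380 \
  -cert /etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem \
  -key /etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem \
  -CAfile /etc/kubernetes/ssl/kube-ca.pem </dev/null
```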
- - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-cert-file' is present AND '--peer-key-file' is present -``` - -**Returned Value**: - -```console -etcd 4318 4301 6 16:15 ? 00:00:14 /usr/local/bin/etcd --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --advertise-client-urls=https://192.168.1.225:2379,https://192.168.1.225:4001 --election-timeout=5000 --data-dir=/var/lib/rancher/etcd/ --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --enable-v2=true --initial-cluster=etcd-cis-aio-0=https://192.168.1.225:2380 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --initial-cluster-token=etcd-cluster-1 --name=etcd-cis-aio-0 --listen-client-urls=https://0.0.0.0:2379 --peer-key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem --peer-client-cert-auth=true --initial-advertise-peer-urls=https://192.168.1.225:2380 --initial-cluster-state=new --key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem -root 4366 4349 0 16:15 ? 00:00:00 /opt/rke-tools/rke-etcd-backup etcd-backup save --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-node.pem --key /etc/kubernetes/ssl/kube-node-key.pem --name etcd-rolling-snapshots --endpoints=192.168.1.225:2379 --retention=72h --creation=12h -root 4643 4626 23 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User -root 14998 14985 0 16:19 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=5 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.6-hardened --json --log_dir /tmp/results/logs --outputfile /tmp/results/etcd.json - -``` -### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated) - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and set the below parameter. ---peer-client-cert-auth=true - - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-client-cert-auth' is present OR 'true' is equal to 'true' -``` - -**Returned Value**: - -```console -etcd 4318 4301 6 16:15 ?
00:00:14 /usr/local/bin/etcd --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --advertise-client-urls=https://192.168.1.225:2379,https://192.168.1.225:4001 --election-timeout=5000 --data-dir=/var/lib/rancher/etcd/ --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --enable-v2=true --initial-cluster=etcd-cis-aio-0=https://192.168.1.225:2380 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --initial-cluster-token=etcd-cluster-1 --name=etcd-cis-aio-0 --listen-client-urls=https://0.0.0.0:2379 --peer-key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem --peer-client-cert-auth=true --initial-advertise-peer-urls=https://192.168.1.225:2380 --initial-cluster-state=new --key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem -root 4366 4349 0 16:15 ? 00:00:00 /opt/rke-tools/rke-etcd-backup etcd-backup save --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-node.pem --key /etc/kubernetes/ssl/kube-node-key.pem --name etcd-rolling-snapshots --endpoints=192.168.1.225:2379 --retention=72h --creation=12h -root 4643 4626 23 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml 
--requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User -root 14998 14985 0 16:19 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=5 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.6-hardened --json --log_dir /tmp/results/logs --outputfile /tmp/results/etcd.json - -``` -### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated) - -**Result:** pass - -**Remediation:** -Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master -node and either remove the --peer-auto-tls parameter or set it to false. ---peer-auto-tls=false - - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--peer-auto-tls' is not present OR '--peer-auto-tls' is present -``` - -**Returned Value**: - -```console -etcd 4318 4301 6 16:15 ? 00:00:14 /usr/local/bin/etcd --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --advertise-client-urls=https://192.168.1.225:2379,https://192.168.1.225:4001 --election-timeout=5000 --data-dir=/var/lib/rancher/etcd/ --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --enable-v2=true --initial-cluster=etcd-cis-aio-0=https://192.168.1.225:2380 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --initial-cluster-token=etcd-cluster-1 --name=etcd-cis-aio-0 --listen-client-urls=https://0.0.0.0:2379 --peer-key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem --peer-client-cert-auth=true --initial-advertise-peer-urls=https://192.168.1.225:2380 --initial-cluster-state=new --key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem -root 4366 4349 0 16:15 ? 00:00:00 /opt/rke-tools/rke-etcd-backup etcd-backup save --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-node.pem --key /etc/kubernetes/ssl/kube-node-key.pem --name etcd-rolling-snapshots --endpoints=192.168.1.225:2379 --retention=72h --creation=12h -root 4643 4626 23 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User -root 14998 14985 0 16:19 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=5 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.6-hardened --json --log_dir /tmp/results/logs --outputfile /tmp/results/etcd.json - -``` -### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated) - -**Result:** pass - -**Remediation:** -[Manual test] -Follow the etcd documentation and create a dedicated certificate authority setup for the -etcd service. -Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the -master node and set the below parameter. ---trusted-ca-file= - - -**Audit:** - -```bash -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected Result**: - -```console -'--trusted-ca-file' is present -``` - -**Returned Value**: - -```console -etcd 4318 4301 6 16:15 ? 
00:00:14 /usr/local/bin/etcd --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --advertise-client-urls=https://192.168.1.225:2379,https://192.168.1.225:4001 --election-timeout=5000 --data-dir=/var/lib/rancher/etcd/ --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --enable-v2=true --initial-cluster=etcd-cis-aio-0=https://192.168.1.225:2380 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --initial-cluster-token=etcd-cluster-1 --name=etcd-cis-aio-0 --listen-client-urls=https://0.0.0.0:2379 --peer-key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem --peer-client-cert-auth=true --initial-advertise-peer-urls=https://192.168.1.225:2380 --initial-cluster-state=new --key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem -root 4366 4349 0 16:15 ? 00:00:00 /opt/rke-tools/rke-etcd-backup etcd-backup save --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-node.pem --key /etc/kubernetes/ssl/kube-node-key.pem --name etcd-rolling-snapshots --endpoints=192.168.1.225:2379 --retention=72h --creation=12h -root 4643 4626 23 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml 
--requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User -root 14998 14985 0 16:19 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=5 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.6-hardened --json --log_dir /tmp/results/logs --outputfile /tmp/results/etcd.json - -``` -## 3.1 Authentication and Authorization -### 3.1.1 Client certificate authentication should not be used for users (Manual) - -**Result:** warn - -**Remediation:** -Alternative mechanisms provided by Kubernetes such as the use of OIDC should be -implemented in place of client certificates. - - -**Audit:** - -```bash - -``` - - -## 3.2 Logging -### 3.2.1 Ensure that a minimal audit policy is created (Automated) - -**Result:** pass - -**Remediation:** -Create an audit policy file for your cluster. - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--audit-policy-file' is present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false 
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Consider modification of the audit policy in use on the cluster to include these items, at a
-minimum.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-## 4.1 Worker Node Configuration Files
-### 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
-
-**Result:** notApplicable
-
-**Remediation:**
-A cluster provisioned by RKE doesn't require or maintain a configuration file for the kubelet service.
-All configuration is passed in as arguments at container run time.
-
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; then stat -c permissions=%a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; fi'
-```
-
-
-### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
-
-**Result:** notApplicable
-
-**Remediation:**
-A cluster provisioned by RKE doesn't require or maintain a configuration file for the kubelet service.
-All configuration is passed in as arguments at container run time.
-
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; then stat -c %U:%G /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; fi'
-```
-
-
-### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 $proxykubeconfig
-
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'644' is present OR '640' is present OR '600' is equal to '600' OR '444' is present OR '440' is present OR '400' is present OR '000' is present
-```
-
-**Returned Value**:
-
-```console
-600
-
-```
-### 4.1.4 Ensure that the proxy kubeconfig file ownership is set to root:root (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chown root:root /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
-
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is not present OR '/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml' is not present
-```
-
-### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /etc/kubernetes/ssl/kubecfg-kube-node.yaml
-
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'permissions' is not present
-```
-
-### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, -chown root:root /etc/kubernetes/ssl/kubecfg-kube-node.yaml - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' -``` - -**Expected Result**: - -```console -'root:root' is equal to 'root:root' -``` - -**Returned Value**: - -```console -root:root - -``` -### 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Automated) - -**Result:** pass - -**Remediation:** -Run the following command to modify the file permissions of the ---client-ca-file chmod 644 - - -**Audit:** - -```bash -check_cafile_permissions.sh -``` - -**Expected Result**: - -```console -'permissions' is not present -``` - -**Audit Script:** -```bash -#!/usr/bin/env bash - -CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}') -if test -z $CAFILE; then CAFILE=$kubeletcafile; fi -if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi - -``` -### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated) - -**Result:** pass - -**Remediation:** -Run the following command to modify the ownership of the --client-ca-file. -chown root:root - - -**Audit:** - -```bash -check_cafile_ownership.sh -``` - -**Expected Result**: - -```console -'root:root' is not present -``` - -**Audit Script:** -```bash -#!/usr/bin/env bash - -CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}') -if test -z $CAFILE; then CAFILE=$kubeletcafile; fi -if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi - -``` -### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated) - -**Result:** notApplicable - -**Remediation:** -Run the following command (using the config file location identified in the Audit step) -chmod 644 /var/lib/kubelet/config.yaml - -Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then stat -c permissions=%a /var/lib/kubelet/config.yaml; fi' -``` - - -### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated) - -**Result:** notApplicable - -**Remediation:** -Run the following command (using the config file location identified in the Audit step) -chown root:root /var/lib/kubelet/config.yaml - -Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then stat -c %U:%G /var/lib/kubelet/config.yaml; fi' -``` - - -## 4.2 Kubelet -### 4.2.1 Ensure that the anonymous-auth argument is set to false (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to -false. -If using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---anonymous-auth=false -Based on your system, restart the kubelet service. 
For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present -``` - -### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set authorization: mode to Webhook. If -using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_AUTHZ_ARGS variable. ---authorization-mode=Webhook -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present -``` - -### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to -the location of the client CA file. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_AUTHZ_ARGS variable. ---client-ca-file= -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present -``` - -### 4.2.4 Ensure that the --read-only-port argument is set to 0 (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set readOnlyPort to 0. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---read-only-port=0 -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present OR '' is not present -``` - -### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set streamingConnectionIdleTimeout to a -value other than 0. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---streaming-connection-idle-timeout=5m -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'30m' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD -root 5103 5086 7 16:16 ? 
00:00:12 kubelet --resolv-conf=/etc/resolv.conf --read-only-port=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --feature-gates=RotateKubeletServerCertificate=true --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --make-iptables-util-chains=true --streaming-connection-idle-timeout=30m --cluster-dns=10.43.0.10 --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-192-168-1-225-key.pem --address=0.0.0.0 --cni-bin-dir=/opt/cni/bin --anonymous-auth=false --protect-kernel-defaults=true --cloud-provider= --hostname-override=cis-aio-0 --fail-swap-on=false --cgroups-per-qos=True --authentication-token-webhook=true --event-qps=0 --v=2 --pod-infra-container-image=rancher/pause:3.1 --authorization-mode=Webhook --network-plugin=cni --cluster-domain=cluster.local --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --volume-plugin-dir=/var/lib/kubelet/volumeplugins --cni-conf-dir=/etc/cni/net.d --root-dir=/var/lib/kubelet --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-192-168-1-225.pem --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf - -``` -### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set protectKernelDefaults: true. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---protect-kernel-defaults=true -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present -``` - -### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove the --make-iptables-util-chains argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present OR '' is not present -``` - -### 4.2.8 Ensure that the --hostname-override argument is not set (Manual) - -**Result:** notApplicable - -**Remediation:** -Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -on each worker node and remove the --hostname-override argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. 
For example: -systemctl daemon-reload -systemctl restart kubelet.service - -Clusters provisioned by RKE set the --hostname-override to avoid any hostname configuration errors - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - - -### 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present -``` - -### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set tlsCertFile to the location -of the certificate file to use to identify this Kubelet, and tlsPrivateKeyFile -to the location of the corresponding private key file. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameters in KUBELET_CERTIFICATE_ARGS variable. ---tls-cert-file= ---tls-private-key-file= -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present AND '' is not present -``` - -### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to add the line rotateCertificates: true or -remove it altogether to use the default value. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS -variable. -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'--rotate-certificates' is not present OR '--rotate-certificates' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD -root 5103 5086 6 16:16 ? 
00:00:12 kubelet --resolv-conf=/etc/resolv.conf --read-only-port=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --feature-gates=RotateKubeletServerCertificate=true --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --make-iptables-util-chains=true --streaming-connection-idle-timeout=30m --cluster-dns=10.43.0.10 --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-192-168-1-225-key.pem --address=0.0.0.0 --cni-bin-dir=/opt/cni/bin --anonymous-auth=false --protect-kernel-defaults=true --cloud-provider= --hostname-override=cis-aio-0 --fail-swap-on=false --cgroups-per-qos=True --authentication-token-webhook=true --event-qps=0 --v=2 --pod-infra-container-image=rancher/pause:3.1 --authorization-mode=Webhook --network-plugin=cni --cluster-domain=cluster.local --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --volume-plugin-dir=/var/lib/kubelet/volumeplugins --cni-conf-dir=/etc/cni/net.d --root-dir=/var/lib/kubelet --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-192-168-1-225.pem --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf - -``` -### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Automated) - -**Result:** notApplicable - -**Remediation:** -Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable. ---feature-gates=RotateKubeletServerCertificate=true -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -Clusters provisioned by RKE handles certificate rotation directly through RKE. - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - - -### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set TLSCipherSuites: to -TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -or to a subset of these values. -If using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the --tls-cipher-suites parameter as follows, or to a subset of these values. ---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present -``` - -## 5.1 RBAC and Service Accounts -### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual) - -**Result:** warn - -**Remediation:** -Identify all clusterrolebindings to the cluster-admin role. 
Check if they are used and -if they need this role or if they could use a role with fewer privileges. -Where possible, first bind users to a lower privileged role and then remove the -clusterrolebinding to the cluster-admin role : -kubectl delete clusterrolebinding [name] - - -**Audit:** - -```bash - -``` - - -### 5.1.2 Minimize access to secrets (Manual) - -**Result:** warn - -**Remediation:** -Where possible, remove get, list and watch access to secret objects in the cluster. - - -**Audit:** - -```bash - -``` - - -### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual) - -**Result:** warn - -**Remediation:** -Where possible replace any use of wildcards in clusterroles and roles with specific -objects or actions. - - -**Audit:** - -```bash - -``` - - -### 5.1.4 Minimize access to create pods (Manual) - -**Result:** warn - -**Remediation:** -Where possible, remove create access to pod objects in the cluster. - - -**Audit:** - -```bash - -``` - - -### 5.1.5 Ensure that default service accounts are not actively used. (Automated) - -**Result:** pass - -**Remediation:** -Create explicit service accounts wherever a Kubernetes workload requires specific access -to the Kubernetes API server. -Modify the configuration of each default service account to include this value -automountServiceAccountToken: false - - -**Audit:** - -```bash -check_for_default_sa.sh -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Audit Script:** -```bash -#!/bin/bash - -set -eE - -handle_error() { - echo "false" -} - -trap 'handle_error' ERR - -count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) -if [[ ${count_sa} -gt 0 ]]; then - echo "false" - exit -fi - -for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") -do - for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[].kind=="ServiceAccount" and .subjects[].name=="default") or (.subjects[].kind=="Group" and .subjects[].name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') - do - read kind name <<<$(IFS=","; echo $result) - resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[] != "podsecuritypolicies")' | wc -l) - if [[ ${resource_count} -gt 0 ]]; then - echo "false" - exit - fi - done -done - - -echo "true" -``` -**Returned Value**: - -```console -true - -``` -### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) - -**Result:** warn - -**Remediation:** -Modify the definition of pods and service accounts which do not need to mount service -account tokens to disable it. - - -**Audit:** - -```bash - -``` - - -## 5.2 Pod Security Policies -### 5.2.1 Minimize the admission of privileged containers (Manual) - -**Result:** warn - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that -the .spec.privileged field is omitted or set to false. - - -**Audit:** - -```bash - -``` - - -### 5.2.2 Minimize the admission of containers wishing to share the host process ID namespace (Automated) - -**Result:** pass - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -.spec.hostPID field is omitted or set to false. 
-
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-1 is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=1
-
-```
-### 5.2.3 Minimize the admission of containers wishing to share the host IPC namespace (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Create a PSP as described in the Kubernetes documentation, ensuring that the
-.spec.hostIPC field is omitted or set to false.
-
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-1 is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=1
-
-```
-### 5.2.4 Minimize the admission of containers wishing to share the host network namespace (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Create a PSP as described in the Kubernetes documentation, ensuring that the
-.spec.hostNetwork field is omitted or set to false.
-
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-1 is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=1
-
-```
-### 5.2.5 Minimize the admission of containers with allowPrivilegeEscalation (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Create a PSP as described in the Kubernetes documentation, ensuring that the
-.spec.allowPrivilegeEscalation field is omitted or set to false.
-
-
-**Audit:**
-
-```bash
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-
-**Expected Result**:
-
-```console
-1 is greater than 0
-```
-
-**Returned Value**:
-
-```console
---count=1
-
-```
-### 5.2.6 Minimize the admission of root containers (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Create a PSP as described in the Kubernetes documentation, ensuring that the
-.spec.runAsUser.rule is set to either MustRunAsNonRoot or MustRunAs with the range of
-UIDs not including 0.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.2.7 Minimize the admission of containers with the NET_RAW capability (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Create a PSP as described in the Kubernetes documentation, ensuring that the
-.spec.requiredDropCapabilities is set to include either NET_RAW or ALL.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.2.8 Minimize the admission of containers with added capabilities (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Ensure that allowedCapabilities is not present in PSPs for the cluster unless
-it is set to an empty array.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.2.9 Minimize the admission of containers with capabilities assigned (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Review the use of capabilities in applications running on your cluster. Where a namespace
-contains applications which do not require any Linux capabilities to operate, consider adding
-a PSP which forbids the admission of containers which do not drop all capabilities.
-
-
-**Audit:**
-
-```bash
-
-```
-
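-This is a manual check, so no audit command ships with it. As a sketch in the style of the automated 5.2.2–5.2.5 audits above, the following could count the PSPs that force containers to drop all capabilities (it assumes `kubectl` and `jq` are available, as in the other audits):
-
-```bash
-# Count PodSecurityPolicies whose requiredDropCapabilities list includes ALL.
-kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.requiredDropCapabilities // []) | index("ALL"))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
-```
-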
-
-## 5.3 Network Policies and CNI
-### 5.3.1 Ensure that the CNI in use supports Network Policies (Manual)
-
-**Result:** warn
-
-**Remediation:**
-If the CNI plugin in use does not support network policies, consideration should be given to
-making use of a different plugin, or finding an alternate mechanism for restricting traffic
-in the Kubernetes cluster.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.3.2 Ensure that all Namespaces have Network Policies defined (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create NetworkPolicy objects as you need them.
-
-
-**Audit:**
-
-```bash
-check_for_network_policies.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Audit Script:**
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
-  echo "false"
-}
-
-trap 'handle_error' ERR
-
-for namespace in $(kubectl get namespaces --all-namespaces -o json | jq -r '.items[].metadata.name'); do
-  policy_count=$(kubectl get networkpolicy -n ${namespace} -o json | jq '.items | length')
-  if [[ ${policy_count} -eq 0 ]]; then
-    echo "false"
-    exit
-  fi
-done
-
-echo "true"
-
-```
-**Returned Value**:
-
-```console
-true
-
-```
-## 5.4 Secrets Management
-### 5.4.1 Prefer using secrets as files over secrets as environment variables (Manual)
-
-**Result:** warn
-
-**Remediation:**
-If possible, rewrite application code to read secrets from mounted secret files, rather than
-from environment variables.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.4.2 Consider external secret storage (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Refer to the secrets management options offered by your cloud provider or a third-party
-secrets management solution.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-## 5.5 Extensible Admission Control
-### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and set up image provenance.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-## 5.7 General Policies
-### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create namespaces for objects in your deployment as you need
-them.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.7.2 Ensure that the seccomp profile is set to docker/default in your pod definitions (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Seccomp is an alpha feature currently. By default, all alpha features are disabled. So, you
-would need to enable alpha features in the apiserver by passing "--feature-
-gates=AllAlpha=true" argument.
-Edit the /etc/kubernetes/apiserver file on the master node and set the KUBE_API_ARGS
-parameter to "--feature-gates=AllAlpha=true"
-KUBE_API_ARGS="--feature-gates=AllAlpha=true"
-Based on your system, restart the kube-apiserver service. For example:
-systemctl restart kube-apiserver.service
-Use annotations to enable the docker/default seccomp profile in your pod definitions. An
-example is as below:
-apiVersion: v1
-kind: Pod
-metadata:
-  name: trustworthy-pod
-  annotations:
-    seccomp.security.alpha.kubernetes.io/pod: docker/default
-spec:
-  containers:
-    - name: trustworthy-container
-      image: sotrustworthy:latest
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.7.3 Apply Security Context to Your Pods and Containers (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and apply security contexts to your pods.
-For a suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
-Containers.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.7.4 The default namespace should not be used (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
-resources and that all new resources are created in a specific namespace.
-
-
-**Audit:**
-
-```bash
-check_for_default_ns.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Audit Script:**
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
-  echo "false"
-}
-
-trap 'handle_error' ERR
-
-count=$(kubectl get all -n default -o json | jq .items[] | jq -r 'select((.metadata.name!="kubernetes"))' | jq .metadata.name | wc -l)
-if [[ ${count} -gt 0 ]]; then
-  echo "false"
-  exit
-fi
-
-echo "true"
-
-
-```
-**Returned Value**:
-
-```console
-true
-
-```
diff --git a/content/rancher/v2.6/en/security/rancher-2.5/1.6-hardening-2.5/_index.md b/content/rancher/v2.6/en/security/rancher-2.5/1.6-hardening-2.5/_index.md
deleted file mode 100644
index 78f2763e57c..00000000000
--- a/content/rancher/v2.6/en/security/rancher-2.5/1.6-hardening-2.5/_index.md
+++ /dev/null
@@ -1,570 +0,0 @@
----
-title: Hardening Guide with CIS 1.6 Benchmark
-weight: 100
----
-
-This document provides prescriptive guidance for hardening a production installation of an RKE cluster to be used with Rancher v2.5.4. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Information Security (CIS).
-
-> This hardening guide describes how to secure the nodes in your cluster, and it is recommended to follow this guide before installing Kubernetes.
-
-This hardening guide is intended to be used for RKE clusters and associated with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:
-
- Rancher Version | CIS Benchmark Version | Kubernetes Version
-----------------|-----------------------|------------------
- Rancher v2.5.4 | Benchmark 1.6 | Kubernetes v1.18
-
-[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.5/Rancher_Hardening_Guide_CIS_1.6.pdf)
-
-### Overview
-
-This document provides prescriptive guidance for hardening an RKE cluster to be used for installing Rancher v2.5.4 with Kubernetes v1.18 or provisioning an RKE cluster with Kubernetes v1.18 to be used within Rancher v2.5.4. It outlines the configurations required to address Kubernetes benchmark controls from the Center for Information Security (CIS).
-
-For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS 1.6 Benchmark - Self-Assessment Guide - Rancher v2.5.4]({{< baseurl >}}/rancher/v2.6/en/security/rancher-2.5/1.6-benchmark-2.5/).
-
-#### Known Issues
-
-- Rancher **exec shell** and **view logs** for pods are **not** functional in a CIS 1.6 hardened setup when only a public IP is provided when registering custom nodes. This functionality requires a private IP to be provided when registering the custom nodes.
-- When setting the `default_pod_security_policy_template_id:` to `restricted`, Rancher creates **RoleBindings** and **ClusterRoleBindings** on the default service accounts. The CIS 1.6 5.1.5 check requires that the default service accounts have no roles or cluster roles bound to them apart from the defaults. In addition, the default service accounts should be configured such that they do not provide a service account token and do not have any explicit rights assignments (see the sketch below for one way to inspect these bindings).
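-
-One way to see which of these bindings exist is a `jq` filter in the style of the audit scripts in the self-assessment above; this is a sketch, and it assumes `kubectl` and `jq` are available:
-
-```bash
-# List RoleBindings and ClusterRoleBindings whose subjects include a
-# service account named "default".
-kubectl get clusterrolebinding,rolebinding --all-namespaces -o json \
-  | jq -r '.items[] | select(any(.subjects[]?; .kind == "ServiceAccount" and .name == "default")) | "\(.kind)/\(.metadata.name)"'
-```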
-
-When migrating Rancher from v2.4 to v2.5, note that addons were removed in the 2.5 hardening guide (HG 2.5), and therefore namespaces such as `ingress-nginx` and `cattle-system` may not be created on the downstream clusters during migration. Pods may fail to run because of the missing namespace.
-
-### Configure Kernel Runtime Parameters
-
-The following `sysctl` configuration is recommended for all node types in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`:
-
-```ini
-vm.overcommit_memory=1
-vm.panic_on_oom=0
-kernel.panic=10
-kernel.panic_on_oops=1
-kernel.keys.root_maxbytes=25000000
-```
-
-Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.
-
-### Configure `etcd` user and group
-A user account and group for the **etcd** service must be set up before installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.
-
-#### Create `etcd` user and group
-To create the **etcd** user and group, run the following console commands.
-
-The commands below use `52034` for the **uid** and **gid** for example purposes. Any valid unused **uid** or **gid** could also be used in lieu of `52034`.
-
-```bash
-groupadd --gid 52034 etcd
-useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
-```
-
-Update the RKE **config.yml** with the **uid** and **gid** of the **etcd** user:
-
-```yaml
-services:
-  etcd:
-    gid: 52034
-    uid: 52034
-```
-
-#### Set `automountServiceAccountToken` to `false` for `default` service accounts
-Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.
-
-For each namespace, including **default** and **kube-system**, on a standard RKE install, the **default** service account must include this value:
-
-```yaml
-automountServiceAccountToken: false
-```
-
-Save the following yaml to a file called `account_update.yaml`:
-
-```yaml
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: default
-automountServiceAccountToken: false
-```
-
-Create a bash script file called `account_update.sh`. Be sure to `chmod +x account_update.sh` so the script has execute permissions.
-
-```bash
-#!/bin/bash -e
-
-for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
-  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
-done
-```
-
-Execute `./account_update.sh` to patch the **default** service account in every namespace with the contents of `account_update.yaml`.
-
-### Ensure that all Namespaces have Network Policies defined
-
-Running different applications on the same Kubernetes cluster creates a risk of one
-compromised application attacking a neighboring application. Network segmentation is
-important to ensure that containers can communicate only with those they are supposed
-to. A network policy is a specification of how selections of pods are allowed to
-communicate with each other and other network endpoints.
-
-Network Policies are namespace scoped. When a network policy is introduced to a given
-namespace, all traffic not allowed by the policy is denied. However, if there are no network
-policies in a namespace, all traffic will be allowed into and out of the pods in that
-namespace. To enforce network policies, a CNI (container network interface) plugin must be enabled.
-This guide uses [canal](https://github.com/projectcalico/canal) to provide the policy enforcement.
-Additional information about CNI providers can be found
-[here](https://rancher.com/blog/2019/2019-03-21-comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/)
-
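-Before applying a default policy, it can be useful to see which namespaces currently have no `NetworkPolicy` objects at all. The loop below is a sketch along the lines of the `check_for_network_policies.sh` audit script in the self-assessment; it assumes `kubectl` and `jq` are available:
-
-```bash
-# Print each namespace with the number of NetworkPolicy objects it defines;
-# namespaces reporting 0 currently allow all traffic by default.
-for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
-  count=$(kubectl get networkpolicy -n "${ns}" -o json | jq '.items | length')
-  echo "${ns}: ${count}"
-done
-```
-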
-Once a CNI provider is enabled on a cluster, a default network policy can be applied. For reference purposes, a
-**permissive** example is provided below. If you want to allow all traffic to all pods in a namespace
-(even if policies are added that cause some pods to be treated as “isolated”),
-you can create a policy that explicitly allows all traffic in that namespace. Save the following `yaml` as
-`default-allow-all.yaml`. Additional [documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
-about network policies can be found on the Kubernetes site.
-
-> This `NetworkPolicy` is not recommended for production use
-
-```yaml
----
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: default-allow-all
-spec:
-  podSelector: {}
-  ingress:
-  - {}
-  egress:
-  - {}
-  policyTypes:
-  - Ingress
-  - Egress
-```
-
-Create a bash script file called `apply_networkPolicy_to_all_ns.sh`. Be sure to
-`chmod +x apply_networkPolicy_to_all_ns.sh` so the script has execute permissions.
-
-```bash
-#!/bin/bash -e
-
-for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
-  kubectl apply -f default-allow-all.yaml -n ${namespace}
-done
-```
-
-Execute this script to apply the **permissive** `default-allow-all.yaml` `NetworkPolicy` to all namespaces.
-
-### Reference Hardened RKE `cluster.yml` configuration
-
-The reference `cluster.yml` is used by the RKE CLI and provides the configuration needed to achieve a hardened install
-of Rancher Kubernetes Engine (RKE). Install [documentation](https://rancher.com/docs/rke/latest/en/installation/) is
-provided with additional details about the configuration items. This reference `cluster.yml` does not include the required **nodes** directive, which will vary depending on your environment. Documentation for node configuration can be found here: https://rancher.com/docs/rke/latest/en/config-options/nodes
-
-
-```yaml
-# If you intend to deploy Kubernetes in an air-gapped environment,
-# please consult the documentation on how to configure custom RKE images.
-# https://rancher.com/docs/rke/latest/en/installation/ - -# the nodes directive is required and will vary depending on your environment -# documentation for node configuration can be found here: -# https://rancher.com/docs/rke/latest/en/config-options/nodes -nodes: [] -services: - etcd: - image: "" - extra_args: {} - extra_binds: [] - extra_env: [] - win_extra_args: {} - win_extra_binds: [] - win_extra_env: [] - external_urls: [] - ca_cert: "" - cert: "" - key: "" - path: "" - uid: 52034 - gid: 52034 - snapshot: false - retention: "" - creation: "" - backup_config: null - kube-api: - image: "" - extra_args: {} - extra_binds: [] - extra_env: [] - win_extra_args: {} - win_extra_binds: [] - win_extra_env: [] - service_cluster_ip_range: "" - service_node_port_range: "" - pod_security_policy: true - always_pull_images: false - secrets_encryption_config: - enabled: true - custom_config: null - audit_log: - enabled: true - configuration: null - admission_configuration: null - event_rate_limit: - enabled: true - configuration: null - kube-controller: - image: "" - extra_args: - feature-gates: RotateKubeletServerCertificate=true - extra_binds: [] - extra_env: [] - win_extra_args: {} - win_extra_binds: [] - win_extra_env: [] - cluster_cidr: "" - service_cluster_ip_range: "" - scheduler: - image: "" - extra_args: {} - extra_binds: [] - extra_env: [] - win_extra_args: {} - win_extra_binds: [] - win_extra_env: [] - kubelet: - image: "" - extra_args: - feature-gates: RotateKubeletServerCertificate=true - protect-kernel-defaults: "true" - tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - extra_binds: [] - extra_env: [] - win_extra_args: {} - win_extra_binds: [] - win_extra_env: [] - cluster_domain: cluster.local - infra_container_image: "" - cluster_dns_server: "" - fail_swap_on: false - generate_serving_certificate: true - kubeproxy: - image: "" - extra_args: {} - extra_binds: [] - extra_env: [] - win_extra_args: {} - win_extra_binds: [] - win_extra_env: [] -network: - plugin: "" - options: {} - mtu: 0 - node_selector: {} - update_strategy: null -authentication: - strategy: "" - sans: [] - webhook: null -addons: | - apiVersion: policy/v1beta1 - kind: PodSecurityPolicy - metadata: - name: restricted - spec: - requiredDropCapabilities: - - NET_RAW - privileged: false - allowPrivilegeEscalation: false - defaultAllowPrivilegeEscalation: false - fsGroup: - rule: RunAsAny - runAsUser: - rule: MustRunAsNonRoot - seLinux: - rule: RunAsAny - supplementalGroups: - rule: RunAsAny - volumes: - - emptyDir - - secret - - persistentVolumeClaim - - downwardAPI - - configMap - - projected - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: psp:restricted - rules: - - apiGroups: - - extensions - resourceNames: - - restricted - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: psp:restricted - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: psp:restricted - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: networking.k8s.io/v1 - kind: NetworkPolicy - 
metadata:
-    name: default-allow-all
-  spec:
-    podSelector: {}
-    ingress:
-    - {}
-    egress:
-    - {}
-    policyTypes:
-    - Ingress
-    - Egress
-  ---
-  apiVersion: v1
-  kind: ServiceAccount
-  metadata:
-    name: default
-  automountServiceAccountToken: false
-addons_include: []
-system_images:
-  etcd: ""
-  alpine: ""
-  nginx_proxy: ""
-  cert_downloader: ""
-  kubernetes_services_sidecar: ""
-  kubedns: ""
-  dnsmasq: ""
-  kubedns_sidecar: ""
-  kubedns_autoscaler: ""
-  coredns: ""
-  coredns_autoscaler: ""
-  nodelocal: ""
-  kubernetes: ""
-  flannel: ""
-  flannel_cni: ""
-  calico_node: ""
-  calico_cni: ""
-  calico_controllers: ""
-  calico_ctl: ""
-  calico_flexvol: ""
-  canal_node: ""
-  canal_cni: ""
-  canal_controllers: ""
-  canal_flannel: ""
-  canal_flexvol: ""
-  weave_node: ""
-  weave_cni: ""
-  pod_infra_container: ""
-  ingress: ""
-  ingress_backend: ""
-  metrics_server: ""
-  windows_pod_infra_container: ""
-ssh_key_path: ""
-ssh_cert_path: ""
-ssh_agent_auth: false
-authorization:
-  mode: ""
-  options: {}
-ignore_docker_version: false
-kubernetes_version: v1.18.12-rancher1-1
-private_registries: []
-ingress:
-  provider: ""
-  options: {}
-  node_selector: {}
-  extra_args: {}
-  dns_policy: ""
-  extra_envs: []
-  extra_volumes: []
-  extra_volume_mounts: []
-  update_strategy: null
-  http_port: 0
-  https_port: 0
-  network_mode: ""
-cluster_name:
-cloud_provider:
-  name: ""
-prefix_path: ""
-win_prefix_path: ""
-addon_job_timeout: 0
-bastion_host:
-  address: ""
-  port: ""
-  user: ""
-  ssh_key: ""
-  ssh_key_path: ""
-  ssh_cert: ""
-  ssh_cert_path: ""
-monitoring:
-  provider: ""
-  options: {}
-  node_selector: {}
-  update_strategy: null
-  replicas: null
-restore:
-  restore: false
-  snapshot_name: ""
-dns: null
-upgrade_strategy:
-  max_unavailable_worker: ""
-  max_unavailable_controlplane: ""
-  drain: null
-  node_drain_input: null
-```
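-
-With the `nodes` section populated for your environment, the hardened cluster can then be provisioned with the RKE CLI. A typical invocation is the sketch below (it assumes the `rke` binary is installed and the file above is saved as `cluster.yml`):
-
-```bash
-# Provision (or reconcile) the cluster described by the hardened cluster.yml.
-rke up --config ./cluster.yml
-```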
- -```yaml -# -# Cluster Config -# -default_pod_security_policy_template_id: restricted -docker_root_dir: /var/lib/docker -enable_cluster_alerting: false -enable_cluster_monitoring: false -enable_network_policy: true -# -# Rancher Config -# -rancher_kubernetes_engine_config: - addon_job_timeout: 45 - ignore_docker_version: true - kubernetes_version: v1.18.12-rancher1-1 -# -# If you are using calico on AWS -# -# network: -# plugin: calico -# calico_network_provider: -# cloud_provider: aws -# -# # To specify flannel interface -# -# network: -# plugin: flannel -# flannel_network_provider: -# iface: eth1 -# -# # To specify flannel interface for canal plugin -# -# network: -# plugin: canal -# canal_network_provider: -# iface: eth1 -# - network: - mtu: 0 - plugin: canal - rotate_encryption_key: false -# -# services: -# kube-api: -# service_cluster_ip_range: 10.43.0.0/16 -# kube-controller: -# cluster_cidr: 10.42.0.0/16 -# service_cluster_ip_range: 10.43.0.0/16 -# kubelet: -# cluster_domain: cluster.local -# cluster_dns_server: 10.43.0.10 -# - services: - etcd: - backup_config: - enabled: false - interval_hours: 12 - retention: 6 - safe_timestamp: false - creation: 12h - extra_args: - election-timeout: '5000' - heartbeat-interval: '500' - gid: 52034 - retention: 72h - snapshot: false - uid: 52034 - kube_api: - always_pull_images: false - audit_log: - enabled: true - event_rate_limit: - enabled: true - pod_security_policy: true - secrets_encryption_config: - enabled: true - service_node_port_range: 30000-32767 - kube_controller: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - kubelet: - extra_args: - feature-gates: RotateKubeletServerCertificate=true - protect-kernel-defaults: 'true' - tls-cipher-suites: >- - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - fail_swap_on: false - generate_serving_certificate: true - ssh_agent_auth: false - upgrade_strategy: - max_unavailable_controlplane: '1' - max_unavailable_worker: 10% -windows_prefered_cluster: false -``` - -### Hardened Reference Ubuntu 20.04 LTS **cloud-config**: - -The reference **cloud-config** is generally used in cloud infrastructure environments to allow for -configuration management of compute instances. The reference config applies Ubuntu operating system-level settings -needed before installing Kubernetes.
- -```yaml -#cloud-config -apt: - sources: - docker.list: - source: deb [arch=amd64] http://download.docker.com/linux/ubuntu $RELEASE stable - keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88 -system_info: - default_user: - groups: - - docker -write_files: -- path: "/etc/apt/preferences.d/docker" - owner: root:root - permissions: '0600' - content: | - Package: docker-ce - Pin: version 5:19* - Pin-Priority: 800 -- path: "/etc/sysctl.d/90-kubelet.conf" - owner: root:root - permissions: '0644' - content: | - vm.overcommit_memory=1 - vm.panic_on_oom=0 - kernel.panic=10 - kernel.panic_on_oops=1 - kernel.keys.root_maxbytes=25000000 -package_update: true -packages: -- docker-ce -- docker-ce-cli -- containerd.io -runcmd: -- sysctl -p /etc/sysctl.d/90-kubelet.conf -- groupadd --gid 52034 etcd -- useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd -``` diff --git a/content/rancher/v2.6/en/security/rancher-2.5/_index.md b/content/rancher/v2.6/en/security/rancher-2.5/_index.md deleted file mode 100644 index 7282a7d813a..00000000000 --- a/content/rancher/v2.6/en/security/rancher-2.5/_index.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: Self-Assessment and Hardening Guides for Rancher v2.5 -shortTitle: Rancher v2.5 Guides -weight: 1 --- - -Rancher v2.5 introduced the capability to deploy Rancher on any Kubernetes cluster. For that reason, we now provide separate security hardening guides for Rancher deployments on each of Rancher's Kubernetes distributions. - -- [Rancher Kubernetes Distributions](#rancher-kubernetes-distributions) -- [Hardening Guides and Benchmark Versions](#hardening-guides-and-benchmark-versions) - - [RKE Guides](#rke-guides) - - [RKE2 Guides](#rke2-guides) - - [K3s Guides](#k3s-guides) -- [Rancher with SELinux](#rancher-with-selinux) - -# Rancher Kubernetes Distributions - -Rancher has the following Kubernetes distributions: - -- [**RKE,**]({{}}/rke/latest/en/) Rancher Kubernetes Engine, is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. -- [**K3s,**]({{}}/k3s/latest/en/) is a fully conformant, lightweight Kubernetes distribution. It is easy to install, with half the memory of upstream Kubernetes, all in a binary of less than 100 MB. -- [**RKE2**](https://docs.rke2.io/) is a fully conformant Kubernetes distribution that focuses on security and compliance within the U.S. Federal Government sector. - -To harden a Kubernetes cluster outside of Rancher's distributions, refer to your Kubernetes provider docs. - -# Hardening Guides and Benchmark Versions - -These guides have been tested along with the Rancher v2.5 release. Each self-assessment guide is accompanied by a hardening guide and tested on a specific Kubernetes version and CIS benchmark version. If a CIS benchmark has not been validated for your Kubernetes version, you can choose to use the existing guides until a newer version is added.
- -### RKE Guides - -Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guides ----|---|---|--- -Kubernetes v1.15+ | CIS v1.5 | [Link](./1.5-benchmark-2.5) | [Link](./1.5-hardening-2.5) -Kubernetes v1.18+ | CIS v1.6 | [Link](./1.6-benchmark-2.5) | [Link](./1.6-hardening-2.5) - -### RKE2 Guides - -Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guides ----|---|---|--- -Kubernetes v1.18 | CIS v1.5 | [Link](https://docs.rke2.io/security/cis_self_assessment15/) | [Link](https://docs.rke2.io/security/hardening_guide/) -Kubernetes v1.20 | CIS v1.6 | [Link](https://docs.rke2.io/security/cis_self_assessment16/) | [Link](https://docs.rke2.io/security/hardening_guide/) - -### K3s Guides - -Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guide ----|---|---|--- -Kubernetes v1.17, v1.18, & v1.19 | CIS v1.5 | [Link]({{}}/k3s/latest/en/security/self_assessment/) | [Link]({{}}/k3s/latest/en/security/hardening_guide/) - - -# Rancher with SELinux - -[Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. After being historically used by government agencies, SELinux is now industry standard and is enabled by default on CentOS 7 and 8. - -To use Rancher with SELinux, we recommend installing the `rancher-selinux` RPM according to the instructions on [this page.]({{}}/rancher/v2.6/en/security/selinux/#installing-the-rancher-selinux-rpm) diff --git a/content/rancher/v2.6/en/security/security-scan/_index.md b/content/rancher/v2.6/en/security/security-scan/_index.md deleted file mode 100644 index c5a3cdb21d4..00000000000 --- a/content/rancher/v2.6/en/security/security-scan/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: Security Scans -weight: 299 ---- - -The documentation about CIS security scans has moved [here.]({{}}/rancher/v2.6/en/cis-scans) From f673e6586aec8a74ec26d2c5c897a7c166765702 Mon Sep 17 00:00:00 2001 From: Guilherme Macedo Date: Thu, 30 Dec 2021 09:57:46 +0100 Subject: [PATCH 09/33] Restructure v2.6 security docs Signed-off-by: Guilherme Macedo --- content/rancher/v2.6/en/cis-scans/_index.md | 289 ------------------ .../v2.6/en/cis-scans/configuration/_index.md | 98 ------ .../en/cis-scans/custom-benchmark/_index.md | 84 ----- .../rancher/v2.6/en/cis-scans/rbac/_index.md | 50 --- .../v2.6/en/cis-scans/skipped-tests/_index.md | 54 ---- 5 files changed, 575 deletions(-) delete mode 100644 content/rancher/v2.6/en/cis-scans/_index.md delete mode 100644 content/rancher/v2.6/en/cis-scans/configuration/_index.md delete mode 100644 content/rancher/v2.6/en/cis-scans/custom-benchmark/_index.md delete mode 100644 content/rancher/v2.6/en/cis-scans/rbac/_index.md delete mode 100644 content/rancher/v2.6/en/cis-scans/skipped-tests/_index.md diff --git a/content/rancher/v2.6/en/cis-scans/_index.md b/content/rancher/v2.6/en/cis-scans/_index.md deleted file mode 100644 index 17aa5a5a3b1..00000000000 --- a/content/rancher/v2.6/en/cis-scans/_index.md +++ /dev/null @@ -1,289 +0,0 @@ ---- -title: CIS Scans -weight: 17 ---- - -Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. The CIS scans can run on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE. - -The `rancher-cis-benchmark` app leverages kube-bench, an open-source tool from Aqua Security, to check clusters for CIS Kubernetes Benchmark compliance. 
Also, to generate a cluster-wide report, the application utilizes Sonobuoy for report aggregation. - -- [About the CIS Benchmark](#about-the-cis-benchmark) -- [About the Generated Report](#about-the-generated-report) -- [Test Profiles](#test-profiles) -- [About Skipped and Not Applicable Tests](#about-skipped-and-not-applicable-tests) -- [Roles-based Access Control](./rbac) -- [Configuration](./configuration) -- [How-to Guides](#how-to-guides) - - [Installing CIS Benchmark](#installing-cis-benchmark) - - [Uninstalling CIS Benchmark](#uninstalling-cis-benchmark) - - [Running a Scan](#running-a-scan) - - [Running a Scan Periodically on a Schedule](#running-a-scan-periodically-on-a-schedule) - - [Skipping Tests](#skipping-tests) - - [Viewing Reports](#viewing-reports) - - [Enabling Alerting for rancher-cis-benchmark](#enabling-alerting-for-rancher-cis-benchmark) - - [Configuring Alerts for a Periodic Scan on a Schedule](#configuring-alerts-for-a-periodic-scan-on-a-schedule) - - [Creating a Custom Benchmark Version for Running a Cluster Scan](#creating-a-custom-benchmark-version-for-running-a-cluster-scan) - - -# About the CIS Benchmark - -The Center for Internet Security is a 501(c\)(3) non-profit organization, formed in October 2000, with a mission to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace". The organization is headquartered in East Greenbush, New York, with members including large corporations, government agencies, and academic institutions. - -CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team. - -The official Benchmark documents are available through the CIS website. The sign-up form to access the documents is -here. - -# About the Generated Report - -Each scan generates a report that can be viewed in the Rancher UI and downloaded in CSV format. - -By default, the CIS Benchmark v1.6 is used. - -The Benchmark version is included in the generated report. - -The Benchmark provides recommendations of two types: Automated and Manual. Recommendations marked as Manual in the Benchmark are not included in the generated report. - -Some tests are designated as "Not Applicable." These tests will not be run on any CIS scan because of the way that Rancher provisions RKE clusters. For information on how test results can be audited, and why some tests are designated to be not applicable, refer to Rancher's self-assessment guide for the corresponding Kubernetes version. - -The report contains the following information: - -| Column in Report | Description | -|------------------|-------------| -| `id` | The ID number of the CIS Benchmark. | -| `description` | The description of the CIS Benchmark test. | -| `remediation` | What needs to be fixed in order to pass the test. | -| `state` | Indicates if the test passed, failed, was skipped, or was not applicable. | -| `node_type` | The node role, which affects which tests are run on the node. Master tests are run on controlplane nodes, etcd tests are run on etcd nodes, and node tests are run on the worker nodes. | -| `audit` | This is the audit check that `kube-bench` runs for this test. | -| `audit_config` | Any configuration applicable to the audit script.
| -| `test_info` | Test-related info as reported by `kube-bench`, if any. | -| `commands` | Test-related commands as reported by `kube-bench`, if any. | -| `config_commands` | Test-related configuration data as reported by `kube-bench`, if any. | -| `actual_value` | The test's actual value, present if reported by `kube-bench`. | -| `expected_result` | The test's expected result, present if reported by `kube-bench`. | - -Refer to the table in the cluster hardening guide for information on which versions of Kubernetes, the Benchmark, Rancher, and our cluster hardening guide correspond to each other. Also refer to the hardening guide for configuration files of CIS-compliant clusters and information on remediating failed tests. - -# Test Profiles - -The following profiles are available: - -- Generic CIS 1.5 -- Generic CIS 1.6 -- RKE permissive 1.5 -- RKE hardened 1.5 -- RKE permissive 1.6 -- RKE hardened 1.6 -- EKS -- GKE -- RKE2 permissive 1.5 -- RKE2 hardened 1.5 - -You also have the ability to customize a profile by saving a set of tests to skip. - -All profiles will have a set of not applicable tests that will be skipped during the CIS scan. These tests are not applicable based on how an RKE cluster manages Kubernetes. - -There are two types of RKE cluster scan profiles: - -- **Permissive:** This profile has a set of tests that will be skipped, as these tests fail on a default RKE Kubernetes cluster. Besides the list of skipped tests, the profile will also not run the not applicable tests. -- **Hardened:** This profile will not skip any tests, except for the non-applicable tests. - -The EKS and GKE cluster scan profiles are based on CIS Benchmark versions that are specific to those types of clusters. - -In order to pass the "Hardened" profile, you will need to follow the steps on the hardening guide and use the `cluster.yml` defined in the hardening guide to provision a hardened cluster. - -The default profile and the supported CIS benchmark version depend on the type of cluster that will be scanned: - -The `rancher-cis-benchmark` application supports the CIS 1.6 Benchmark version. - -- For RKE Kubernetes clusters, the RKE Permissive 1.6 profile is the default. -- EKS and GKE have their own CIS Benchmarks published by `kube-bench`. The corresponding test profiles are used by default for those clusters. -- For RKE2 Kubernetes clusters, the RKE2 Permissive 1.5 profile is the default. -- For cluster types other than RKE, RKE2, EKS and GKE, the Generic CIS 1.5 profile will be used by default. - -# About Skipped and Not Applicable Tests - -For a list of skipped and not applicable tests, refer to this page. - -For now, only user-defined skipped tests are marked as skipped in the generated report. - -Any skipped tests that are defined as being skipped by one of the default profiles are marked as not applicable. - -# Roles-based Access Control - -For information about permissions, refer to this page. - -# Configuration - -For more information about configuring the custom resources for the scans, profiles, and benchmark versions, refer to this page.
- -# How-to Guides - -- [Installing CIS Benchmark](#installing-cis-benchmark) -- [Uninstalling CIS Benchmark](#uninstalling-cis-benchmark) -- [Running a Scan](#running-a-scan) -- [Running a Scan Periodically on a Schedule](#running-a-scan-periodically-on-a-schedule) -- [Skipping Tests](#skipping-tests) -- [Viewing Reports](#viewing-reports) -- [Enabling Alerting for rancher-cis-benchmark](#enabling-alerting-for-rancher-cis-benchmark) -- [Configuring Alerts for a Periodic Scan on a Schedule](#configuring-alerts-for-a-periodic-scan-on-a-schedule) -- [Creating a Custom Benchmark Version for Running a Cluster Scan](#creating-a-custom-benchmark-version-for-running-a-cluster-scan) - -### Installing CIS Benchmark - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to install CIS Benchmark and click **Explore**. -1. In the left navigation bar, click **Apps & Marketplace > Charts**. -1. Click **CIS Benchmark**. -1. Click **Install**. - -**Result:** The CIS scan application is deployed on the Kubernetes cluster. - -### Uninstalling CIS Benchmark - -1. From the **Cluster Dashboard,** go to the left navigation bar and click **Apps & Marketplace > Installed Apps**. -1. Go to the `cis-operator-system` namespace and check the boxes next to `rancher-cis-benchmark-crd` and `rancher-cis-benchmark`. -1. Click **Delete** and confirm **Delete**. - -**Result:** The `rancher-cis-benchmark` application is uninstalled. - -### Running a Scan - -When a ClusterScan custom resource is created, it launches a new CIS scan on the cluster for the chosen ClusterScanProfile. - -Note: Currently, only one CIS scan can run at a time on a cluster. If you create multiple ClusterScan custom resources, they will be run one after the other by the operator, and until one scan finishes, the rest of the ClusterScan custom resources will be in the "Pending" state. - -To run a scan, - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Scan**. -1. Click **Create**. -1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on. -1. Click **Create**. - -**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears. - -### Running a Scan Periodically on a Schedule - -To run a ClusterScan on a schedule, - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Scan**. -1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on. -1. Choose the option **Run scan on a schedule**. -1. Enter a valid [cron schedule expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) in the field **Schedule**. -1. Choose a **Retention** count, which indicates the number of reports maintained for this recurring scan. By default this count is 3. When this retention limit is reached, older reports will get purged. -1. Click **Create**. - -**Result:** The scan runs and reschedules to run according to the cron schedule provided. The **Next Scan** value indicates the next time this scan will run again. - -A report is generated with the scan results every time the scan runs. To see the latest results, click the name of the scan that appears. - -You can also see the previous reports by choosing the report from the **Reports** dropdown on the scan detail page.
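-
-Under the hood, the schedule and retention settings configured in the UI are stored on the ClusterScan resource itself. A minimal scheduled scan might look like the following sketch. The field names are taken from the `cis-operator` ClusterScan CRD and the profile name is illustrative, so verify both against the version installed in your cluster:
-
-```yaml
-apiVersion: cis.cattle.io/v1
-kind: ClusterScan
-metadata:
-  name: daily-scan
-spec:
-  scanProfileName: rke-profile-permissive-1.6
-  scheduledScanConfig:
-    # standard five-field cron expression; this one runs every day at midnight
-    cronSchedule: "0 0 * * *"
-    # keep the three most recent reports, matching the default retention
-    retentionCount: 3
-```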
-### Skipping Tests - -CIS scans can be run using test profiles with user-defined skips. - -To skip tests, you will create a custom CIS scan profile. A profile contains the configuration for the CIS scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark. - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Profile**. -1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To make a new profile based on an existing profile, go to the existing profile and click **⋮ Clone**. If you are filling out the form, add the tests to skip using the test IDs, using the relevant CIS Benchmark as a reference. If you are creating the new test profile as YAML, you will add the IDs of the tests to skip in the `skipTests` directive. You will also give the profile a name: - - ```yaml - apiVersion: cis.cattle.io/v1 - kind: ClusterScanProfile - metadata: - annotations: - meta.helm.sh/release-name: clusterscan-operator - meta.helm.sh/release-namespace: cis-operator-system - labels: - app.kubernetes.io/managed-by: Helm - name: "" - spec: - benchmarkVersion: cis-1.5 - skipTests: - - "1.1.20" - - "1.1.21" - ``` -1. Click **Create**. - -**Result:** A new CIS scan profile is created. - -When you [run a scan](#running-a-scan) that uses this profile, the defined tests will be skipped during the scan. The skipped tests will be marked in the generated report as `Skip`. - -### Viewing Reports - -To view the generated CIS scan reports, - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Scan**. -1. The **Scans** page will show the generated reports. To see a detailed report, go to a scan report and click the name. - -You can download the report from the **Scans** list or from the scan detail page. - -### Enabling Alerting for rancher-cis-benchmark - -Alerts can be configured to be sent out for a scan that runs on a schedule. - -> **Prerequisite:** -> -> Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration) -> -> While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration/receiver/#example-route-config-for-cis-scan-alerts) - -While installing or upgrading the `rancher-cis-benchmark` Helm chart, set the following flag to `true` in the `values.yaml`: - -```yaml -alerts: - enabled: true -```
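-
-If you manage the chart with the Helm CLI rather than through **Apps & Marketplace**, the same value can be passed at upgrade time. This is only a sketch: the release name, chart reference, and namespace below are assumptions about how the chart was installed and should be adjusted to your environment:
-
-```
-helm upgrade rancher-cis-benchmark rancher-charts/rancher-cis-benchmark \
-  --namespace cis-operator-system --reuse-values --set alerts.enabled=true
-```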
-### Configuring Alerts for a Periodic Scan on a Schedule - -It is possible to run a ClusterScan on a schedule. - -A scheduled scan can also specify whether you should receive alerts when the scan completes. - -Alerts are supported only for a scan that runs on a schedule. - -The CIS Benchmark application supports two types of alerts: - -- Alert on scan completion: This alert is sent out when the scan run finishes. The alert includes details such as the ClusterScan's name and the ClusterScanProfile name. -- Alert on scan failure: This alert is sent out if there are any test failures in the scan run or if the scan is in a `Fail` state. - -> **Prerequisite:** -> -> Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration) -> -> While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration/receiver/#example-route-config-for-cis-scan-alerts) - -To configure alerts for a scan that runs on a schedule, - -1. [Enable alerts](#enabling-alerting-for-rancher-cis-benchmark) on the `rancher-cis-benchmark` application. -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Scan**. -1. Click **Create**. -1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on. -1. Choose the option **Run scan on a schedule**. -1. Enter a valid [cron schedule expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) in the field **Schedule**. -1. Check the boxes next to the Alert types under **Alerting**. -1. Optional: Choose a **Retention Count**, which indicates the number of reports maintained for this recurring scan. By default this count is 3. When this retention limit is reached, older reports will get purged. -1. Click **Create**. - -**Result:** The scan runs and reschedules to run according to the cron schedule provided. Alerts are sent out when the scan finishes if routes and receivers are configured under the `rancher-monitoring` application. - -A report is generated with the scan results every time the scan runs. To see the latest results, click the name of the scan that appears.
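-
-Taken together, the UI steps above produce a ClusterScan resource along the following lines. This is a sketch: the `scanAlertRule` field names come from the `cis-operator` ClusterScan CRD and the profile name is illustrative, so check both against your installed version:
-
-```yaml
-apiVersion: cis.cattle.io/v1
-kind: ClusterScan
-metadata:
-  name: weekly-scan-with-alerts
-spec:
-  scanProfileName: rke-profile-permissive-1.6
-  scheduledScanConfig:
-    # run every Monday at midnight
-    cronSchedule: "0 0 * * 1"
-    retentionCount: 3
-    scanAlertRule:
-      # send an alert when the scan run finishes
-      alertOnComplete: true
-      # send an alert if any test fails or the scan ends in a Fail state
-      alertOnFailure: true
-```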
-### Creating a Custom Benchmark Version for Running a Cluster Scan - -There could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them. - -It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application. - -For details, see [this page.](./custom-benchmark) diff --git a/content/rancher/v2.6/en/cis-scans/configuration/_index.md b/content/rancher/v2.6/en/cis-scans/configuration/_index.md deleted file mode 100644 index 26df1932e25..00000000000 --- a/content/rancher/v2.6/en/cis-scans/configuration/_index.md +++ /dev/null @@ -1,98 +0,0 @@ ---- -title: Configuration -weight: 3 --- - -This configuration reference is intended to help you manage the custom resources created by the `rancher-cis-benchmark` application. These resources are used for performing CIS scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization. - -To configure these custom resources, - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to configure CIS scans and click **Explore**. -1. In the left navigation bar, click **CIS Benchmark**. - -### Scans - -A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed. - -When configuring a scan, you need to define the name of the scan profile that will be used with the `scanProfileName` directive. - -An example ClusterScan custom resource is below: - -```yaml -apiVersion: cis.cattle.io/v1 -kind: ClusterScan -metadata: - name: rke-cis -spec: - scanProfileName: rke-profile-hardened -``` - -### Profiles - -A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark. - -> By default, a few ClusterScanProfiles are installed as part of the `rancher-cis-benchmark` chart. If a user edits these default benchmarks or profiles, the next chart update will reset them. It is therefore advisable not to edit the default ClusterScanProfiles. - -Users can clone the ClusterScanProfiles to create custom profiles. - -Skipped tests are listed under the `skipTests` directive. - -When you create a new profile, you will also need to give it a name. - -An example `ClusterScanProfile` is below: - -```yaml -apiVersion: cis.cattle.io/v1 -kind: ClusterScanProfile -metadata: - annotations: - meta.helm.sh/release-name: clusterscan-operator - meta.helm.sh/release-namespace: cis-operator-system - labels: - app.kubernetes.io/managed-by: Helm - name: "" -spec: - benchmarkVersion: cis-1.5 - skipTests: - - "1.1.20" - - "1.1.21" -``` - -### Benchmark Versions - -A benchmark version is the name of the benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark. - -A `ClusterScanBenchmark` defines the CIS `BenchmarkVersion` name and test configurations. The `BenchmarkVersion` name is a parameter provided to the `kube-bench` tool. - -By default, a few `BenchmarkVersion` names and test configurations are packaged as part of the CIS scan application. When this feature is enabled, these default BenchmarkVersions will be automatically installed and available for users to create a ClusterScanProfile. - -> If the default BenchmarkVersions are edited, the next chart update will reset them back. Therefore we don't recommend editing the default ClusterScanBenchmarks. - -A ClusterScanBenchmark consists of the fields: - -- `ClusterProvider`: This is the cluster provider name for which this benchmark is applicable. For example: RKE, EKS, GKE, etc. Leave it empty if this benchmark can be run on any cluster type.
-- `MinKubernetesVersion`: Specifies the cluster's minimum Kubernetes version necessary to run this benchmark. Leave it empty if there is no dependency on a particular Kubernetes version. -- `MaxKubernetesVersion`: Specifies the cluster's maximum Kubernetes version necessary to run this benchmark. Leave it empty if there is no dependency on a particular Kubernetes version. - -An example `ClusterScanBenchmark` is below: - -```yaml -apiVersion: cis.cattle.io/v1 -kind: ClusterScanBenchmark -metadata: - annotations: - meta.helm.sh/release-name: clusterscan-operator - meta.helm.sh/release-namespace: cis-operator-system - creationTimestamp: "2020-08-28T18:18:07Z" - generation: 1 - labels: - app.kubernetes.io/managed-by: Helm - name: cis-1.5 - resourceVersion: "203878" - selfLink: /apis/cis.cattle.io/v1/clusterscanbenchmarks/cis-1.5 - uid: 309e543e-9102-4091-be91-08d7af7fb7a7 -spec: - clusterProvider: "" - minKubernetesVersion: 1.15.0 -``` \ No newline at end of file diff --git a/content/rancher/v2.6/en/cis-scans/custom-benchmark/_index.md b/content/rancher/v2.6/en/cis-scans/custom-benchmark/_index.md deleted file mode 100644 index 36f70ccaa55..00000000000 --- a/content/rancher/v2.6/en/cis-scans/custom-benchmark/_index.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -title: Creating a Custom Benchmark Version for Running a Cluster Scan -weight: 4 --- - -Each Benchmark Version defines a set of test configuration files that define the CIS tests to be run by the kube-bench tool. -The `rancher-cis-benchmark` application installs a few default Benchmark Versions which are listed under the CIS Benchmark application menu. - -But there could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them. - -It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application. - -When a cluster scan is run, you need to select a Profile which points to a specific Benchmark Version. - -Follow all the steps below to add a custom Benchmark Version and run a scan using it. - -1. [Prepare the Custom Benchmark Version ConfigMap](#1-prepare-the-custom-benchmark-version-configmap) -2. [Add a Custom Benchmark Version to a Cluster](#2-add-a-custom-benchmark-version-to-a-cluster) -3. [Create a New Profile for the Custom Benchmark Version](#3-create-a-new-profile-for-the-custom-benchmark-version) -4. [Run a Scan Using the Custom Benchmark Version](#4-run-a-scan-using-the-custom-benchmark-version) - -### 1. Prepare the Custom Benchmark Version ConfigMap - -To create a custom benchmark version, first you need to create a ConfigMap containing the benchmark version's config files and upload it to your Kubernetes cluster where you want to run the scan. - -To prepare a custom benchmark version ConfigMap, suppose we want to add a custom Benchmark Version named `foo`. - -1. Create a directory named `foo` and inside this directory, place all the config YAML files that the kube-bench tool looks for. For example, here are the config YAML files for a Generic CIS 1.5 Benchmark Version: https://github.com/aquasecurity/kube-bench/tree/master/cfg/cis-1.5 -1. Place the complete `config.yaml` file, which includes all the components that should be tested.
-1. Add the Benchmark version name to the `target_mapping` section of the `config.yaml`: - - ```yaml - target_mapping: - "foo": - - "master" - - "node" - - "controlplane" - - "etcd" - - "policies" - ``` -1. Upload this directory to your Kubernetes cluster by creating a ConfigMap: - - ``` - kubectl create configmap -n <namespace> foo --from-file=<path-to-the-foo-directory> - ``` - -### 2. Add a Custom Benchmark Version to a Cluster - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. -1. In the left navigation bar, click **CIS Benchmark > Benchmark Version**. -1. Click **Create**. -1. Enter the **Name** and a description for your custom benchmark version. -1. Choose the cluster provider that your benchmark version applies to. -1. Choose the ConfigMap you have uploaded from the dropdown. -1. Add the minimum and maximum Kubernetes version limits applicable, if any. -1. Click **Create**. - -### 3. Create a New Profile for the Custom Benchmark Version - -To run a scan using your custom benchmark version, you need to add a new Profile pointing to this benchmark version; a sketch of the resulting resource follows these steps. - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. -1. In the left navigation bar, click **CIS Benchmark > Profile**. -1. Click **Create**. -1. Provide a **Name** and description. In this example, we name it `foo-profile`. -1. Choose the Benchmark Version from the dropdown. -1. Click **Create**.
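-
-The resulting profile is roughly equivalent to the following ClusterScanProfile resource, a sketch based on the profile schema shown in the Configuration reference that reuses the example names `foo` and `foo-profile`:
-
-```yaml
-apiVersion: cis.cattle.io/v1
-kind: ClusterScanProfile
-metadata:
-  name: foo-profile
-spec:
-  # points at the custom Benchmark Version created in step 2
-  benchmarkVersion: foo
-```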
-### 4. Run a Scan Using the Custom Benchmark Version - -Once the Profile pointing to your custom benchmark version `foo` has been created, you can create a new Scan to run the custom test configs in the Benchmark Version. - -To run a scan, - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. -1. In the left navigation bar, click **CIS Benchmark > Scan**. -1. Click **Create**. -1. Choose the new cluster scan profile. -1. Click **Create**. - -**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears. diff --git a/content/rancher/v2.6/en/cis-scans/rbac/_index.md b/content/rancher/v2.6/en/cis-scans/rbac/_index.md deleted file mode 100644 index ad2b47bff2c..00000000000 --- a/content/rancher/v2.6/en/cis-scans/rbac/_index.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: Roles-based Access Control -shortTitle: RBAC -weight: 3 --- - -This section describes the permissions required to use the rancher-cis-benchmark App. - -The rancher-cis-benchmark is a cluster-admin only feature by default. - -However, the `rancher-cis-benchmark` chart installs these two default `ClusterRoles`: - -- cis-admin -- cis-view - -In Rancher, only cluster owners and global administrators have `cis-admin` access by default. - -Note: If you were using the `cis-edit` role added in the Rancher v2.5 setup, it was removed in -Rancher v2.5.2 because it was essentially the same as `cis-admin`. If you created any ClusterRoleBindings -for `cis-edit`, please update them to use the `cis-admin` ClusterRole instead. - -# Cluster-Admin Access - -Rancher CIS Scans is a cluster-admin only feature by default. -This means that only Rancher global admins and the cluster's cluster-owner can: - -- Install/Uninstall the rancher-cis-benchmark App -- See the navigation links for CIS Benchmark CRDs - ClusterScanBenchmarks, ClusterScanProfiles, ClusterScans -- List the default ClusterScanBenchmarks and ClusterScanProfiles -- Create/Edit/Delete new ClusterScanProfiles -- Create/Edit/Delete a new ClusterScan to run the CIS scan on the cluster -- View and Download the ClusterScanReport created after the ClusterScan is complete - - -# Summary of Default Permissions for Kubernetes Default Roles - -The rancher-cis-benchmark chart creates two `ClusterRoles` and adds the CIS Benchmark CRD access to the following default K8s `ClusterRoles`: - -| ClusterRole created by chart | Default K8s ClusterRole | Permissions given with Role -| ------------------------------| ---------------------------| ---------------------------| -| `cis-admin` | `admin`| Ability to CRUD clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CR -| `cis-view` | `view` | Ability to List (R) clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CR - - -By default, only the cluster-owner role has the ability to manage and use the `rancher-cis-benchmark` feature. - -The other Rancher roles (cluster-member, project-owner, project-member) do not have any default permissions to manage and use rancher-cis-benchmark resources. - -But if a cluster-owner wants to delegate access to other users, they can do so by manually creating ClusterRoleBindings between these users and the above CIS ClusterRoles, as in the example below. -There is no automatic role aggregation supported for the `rancher-cis-benchmark` ClusterRoles.
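-
-For example, to give a user read-only access to the CIS Benchmark resources, a cluster-owner could create a binding like the following (the user name `example-user` is illustrative):
-
-```yaml
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: cis-view-example-user
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  # one of the ClusterRoles installed by the rancher-cis-benchmark chart
-  name: cis-view
-subjects:
-- apiGroup: rbac.authorization.k8s.io
-  kind: User
-  name: example-user
-```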
diff --git a/content/rancher/v2.6/en/cis-scans/skipped-tests/_index.md b/content/rancher/v2.6/en/cis-scans/skipped-tests/_index.md deleted file mode 100644 index f2b125c0262..00000000000 --- a/content/rancher/v2.6/en/cis-scans/skipped-tests/_index.md +++ /dev/null @@ -1,54 +0,0 @@ ---- -title: Skipped and Not Applicable Tests -weight: 3 --- - -This section lists the tests that are skipped in the permissive test profile for RKE. - -> All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile. - -# CIS Benchmark v1.5 - -### CIS Benchmark v1.5 Skipped Tests - -| Number | Description | Reason for Skipping | -| ---------- | ------------- | --------- | -| 1.1.12 | Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) | A system service account is required for etcd data directory ownership. Refer to Rancher's hardening guide for more details on how to configure this ownership. | -| 1.2.6 | Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. | -| 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. | -| 1.2.34 | Ensure that encryption providers are appropriately configured (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. | -| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Automated) | System-level configurations are required before provisioning the cluster in order for this argument to be set to true. | -| 4.2.10 | Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. | -| 5.1.5 | Ensure that default service accounts are not actively used. (Automated) | Kubernetes provides default service accounts to be used. | -| 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 5.2.3 | Minimize the admission of containers wishing to share the host IPC namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 5.2.4 | Minimize the admission of containers wishing to share the host network namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 5.2.5 | Minimize the admission of containers with allowPrivilegeEscalation (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 5.3.2 | Ensure that all Namespaces have Network Policies defined (Automated) | Enabling Network Policies can prevent certain applications from communicating with each other. | -| 5.6.4 | The default namespace should not be used (Automated) | Kubernetes provides a default namespace. | - -### CIS Benchmark v1.5 Not Applicable Tests - -| Number | Description | Reason for being not applicable | -| ---------- | ------------- | --------- | -| 1.1.1 | Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. | -| 1.1.2 | Ensure that the API server pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. | -| 1.1.3 | Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | -| 1.1.4 | Ensure that the controller manager pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | -| 1.1.5 | Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. | -
| 1.1.6 | Ensure that the scheduler pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. | -| 1.1.7 | Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. | -| 1.1.8 | Ensure that the etcd pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. | -| 1.1.13 | Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. | -| 1.1.14 | Ensure that the admin.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. | -| 1.1.15 | Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. | -| 1.1.16 | Ensure that the scheduler.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. | -| 1.1.17 | Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | -| 1.1.18 | Ensure that the controller-manager.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | -| 1.3.6 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handle certificate rotation directly through RKE. | -| 4.1.1 | Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. | -| 4.1.2 | Ensure that the kubelet service file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. | -| 4.1.9 | Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. | -| 4.1.10 | Ensure that the kubelet configuration file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. | -
| 4.2.12 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handle certificate rotation directly through RKE. | \ No newline at end of file From f9e5509c9e74008a561efc5b3e1720aac4dda3fb Mon Sep 17 00:00:00 2001 From: Guilherme Macedo Date: Thu, 30 Dec 2021 14:15:02 +0100 Subject: [PATCH 10/33] Restructure v2.6 security docs Signed-off-by: Guilherme Macedo --- content/rancher/v2.6/en/cis-scans/_index.md | 6 ++++ content/rancher/v2.6/en/security/_index.md | 22 +++++++------- .../v2.6/en/security/best-practices/_index.md | 30 ++++++++++++++++--- 3 files changed, 43 insertions(+), 15 deletions(-) create mode 100644 content/rancher/v2.6/en/cis-scans/_index.md diff --git a/content/rancher/v2.6/en/cis-scans/_index.md b/content/rancher/v2.6/en/cis-scans/_index.md new file mode 100644 index 00000000000..c11624ba466 --- /dev/null +++ b/content/rancher/v2.6/en/cis-scans/_index.md @@ -0,0 +1,6 @@ +--- +title: CIS Scans +weight: 17 +--- + +The CIS scans documentation is now part of the [Security > CIS Scans]({{}}/rancher/v2.6/en/security/cis-scans) section. diff --git a/content/rancher/v2.6/en/security/_index.md b/content/rancher/v2.6/en/security/_index.md index 5780e27a265..f936f7d9461 100644 --- a/content/rancher/v2.6/en/security/_index.md +++ b/content/rancher/v2.6/en/security/_index.md @@ -20,25 +20,25 @@ weight: 20 -Security is at the heart of all Rancher features. From integrating with all the popular authentication tools and services, to an enterprise grade [RBAC capability,]({{}}/rancher/v2.6/en/admin-settings/rbac) Rancher makes your Kubernetes clusters even more secure. +Security is at the heart of all Rancher features. From integrating with all the popular authentication tools and services, to an enterprise grade [RBAC capability]({{}}/rancher/v2.6/en/admin-settings/rbac), Rancher makes your Kubernetes clusters even more secure. -On this page, we provide security-related documentation along with resources to help you secure your Rancher installation and your downstream Kubernetes clusters: +On this page, we provide security related documentation along with resources to help you secure your Rancher installation and your downstream Kubernetes clusters: - [Running a CIS security scan on a Kubernetes cluster](#running-a-cis-security-scan-on-a-kubernetes-cluster) - [SELinux RPM](#selinux-rpm) - [Guide to hardening Rancher installations](#rancher-hardening-guide) - [The CIS Benchmark and self-assessment](#the-cis-benchmark-and-self-assessment) - [Third-party penetration test reports](#third-party-penetration-test-reports) -- [Rancher CVEs and resolutions](#rancher-cves-and-resolutions) +- [Rancher Security Advisories and CVEs](#rancher-security-advisories-and-cves) - [Kubernetes Security Best Practices](#kubernetes-security-best-practices) ### Running a CIS Security Scan on a Kubernetes Cluster -Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS (Center for Internet Security) Kubernetes Benchmark. +Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the [CIS](https://www.cisecurity.org/cis-benchmarks/) (Center for Internet Security) Kubernetes Benchmark.
The CIS Kubernetes Benchmark is a reference document that can be used to establish a secure configuration baseline for Kubernetes. -The Center for Internet Security (CIS) is a 501(c\)(3) non-profit organization, formed in October 2000, with a mission to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace." +The Center for Internet Security (CIS) is a 501(c\)(3) non-profit organization, formed in October 2000, with a mission to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace". CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team. @@ -46,13 +46,13 @@ The Benchmark provides recommendations of two types: Automated and Manual. We ru When Rancher runs a CIS security scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests. -For details, refer to the section on [security scans.]({{}}/rancher/v2.6/en/cis-scans) +For details, refer to the section on [security scans.]({{}}/rancher/v2.6/en/security/cis-scans/) ### SELinux RPM [Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. After being historically used by government agencies, SELinux is now industry standard and is enabled by default on CentOS 7 and 8. -We provide two RPMs (Red Hat packages) that enable Rancher products to function properly on SELinux-enforcing hosts: `rancher-selinux` and `rke2-selinux`. For details, see [this page.]({{}}/rancher/v2.6/en/security/selinux) +We provide two RPMs (Red Hat packages) that enable Rancher products to function properly on SELinux-enforcing hosts: `rancher-selinux` and `rke2-selinux`. For details, see [this page]({{}}/rancher/v2.6/en/security/selinux). ### Rancher Hardening Guide @@ -78,13 +78,13 @@ Rancher periodically hires third parties to perform security audits and penetrat Results: -- [Cure53 Pen Test - 7/2019](https://releases.rancher.com/documents/security/pen-tests/2019/RAN-01-cure53-report.final.pdf) -- [Untamed Theory Pen Test- 3/2019](https://releases.rancher.com/documents/security/pen-tests/2019/UntamedTheory-Rancher_SecurityAssessment-20190712_v5.pdf) +- [Cure53 Pen Test - July 2019](https://releases.rancher.com/documents/security/pen-tests/2019/RAN-01-cure53-report.final.pdf) +- [Untamed Theory Pen Test- March 2019](https://releases.rancher.com/documents/security/pen-tests/2019/UntamedTheory-Rancher_SecurityAssessment-20190712_v5.pdf) -### Rancher CVEs and Resolutions +### Rancher Security Advisories and CVEs Rancher is committed to informing the community of security issues in our products. For the list of CVEs (Common Vulnerabilities and Exposures) for issues we have resolved, refer to [this page.](./cve) ### Kubernetes Security Best Practices -For recommendations on securing your Kubernetes cluster, refer to the [Best Practices](./best-practices) guide. +For recommendations on securing your Kubernetes cluster, refer to the [Kubernetes Security Best Practices](./best-practices) guide. 
diff --git a/content/rancher/v2.6/en/security/best-practices/_index.md b/content/rancher/v2.6/en/security/best-practices/_index.md index bb31bfca47c..706d8ed71ec 100644 --- a/content/rancher/v2.6/en/security/best-practices/_index.md +++ b/content/rancher/v2.6/en/security/best-practices/_index.md @@ -1,13 +1,32 @@ --- -title: Security Best Practices +title: Kubernetes Security Best Practices weight: 5 --- ### Restricting cloud metadata API access -Cloud providers such as AWS, Azure, or GCP often expose metadata services locally to instances. By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets. +Cloud providers such as AWS, Azure, DigitalOcean or GCP often expose metadata services locally to instances. By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS, DigitalOcean Kubernetes or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets. -Below is an example of a global network policy (`GlobalNetworkPolicy`) using Calico to restrict access to cloud metadata API. +Below is an example of a Kubernetes network policy that restricts access to the cloud metadata API while still allowing egress to everything else (`0.0.0.0/0`). + +``` +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: allow-all-egress-except-metadata-api +spec: + podSelector: {} + policyTypes: + - Egress + egress: + - to: + - ipBlock: + cidr: 0.0.0.0/0 + except: + - 169.254.169.254/32 +``` + +A similar global network policy (`GlobalNetworkPolicy`) can be created using Calico. ``` apiVersion: projectcalico.org/v3 kind: GlobalNetworkPolicy metadata: name: allow-all-egress-except-metadata-api spec: selector: all() egress: - action: Deny protocol: TCP destination: nets: - 169.254.169.254/32 - action: Allow destination: nets: - 0.0.0.0/0 ``` -A similar policy can be created with Canal and Weave. For the list of approved CNI (Container Network Interface) providers in Rancher, please consult this [list]({{}}/rancher/v2.6/en/faq/networking/cni-providers). + +Identical policies can be created with Canal and Weave. For the list of approved CNI (Container Network Interface) providers in Rancher, please consult this [list]({{}}/rancher/v2.6/en/faq/networking/cni-providers). + +The provided policies are just examples and must be tailored to your security needs and environment, since they may conflict with other network policies in place in your cluster. + +Removing unnecessary packages and using reduced base images is also a good defense-in-depth strategy to reduce the attack surface against possible abuses of the cloud metadata API.
You should also follow the principle of least privilege, using instance profiles and roles that carry only the permissions the services need to function.

From 929ecf3fdbb6ddb7d38917aa4f3f1b9a42e6616b Mon Sep 17 00:00:00 2001
From: Guilherme Macedo
Date: Thu, 30 Dec 2021 17:17:02 +0100
Subject: [PATCH 11/33] Add aliases for removed pages

Signed-off-by: Guilherme Macedo
---
 content/rancher/v2.6/en/cis-scans/_index.md | 5 +++++
 content/rancher/v2.6/en/security/cis-scans/_index.md | 2 ++
 content/rancher/v2.6/en/security/hardening-guides/_index.md | 6 ++++++
 3 files changed, 13 insertions(+)

diff --git a/content/rancher/v2.6/en/cis-scans/_index.md b/content/rancher/v2.6/en/cis-scans/_index.md
index c11624ba466..a2e267612ce 100644
--- a/content/rancher/v2.6/en/cis-scans/_index.md
+++ b/content/rancher/v2.6/en/cis-scans/_index.md
@@ -1,6 +1,11 @@
---
title: CIS Scans
weight: 17
+aliases:
+  - /rancher/v2.6/en/cis-scans/configuration/
+  - /rancher/v2.6/en/cis-scans/custom-benchmark/
+  - /rancher/v2.6/en/cis-scans/rbac/
+  - /rancher/v2.6/en/cis-scans/skipped-tests/
---

The CIS scans documentation is now part of the [Security > CIS Scans]({{}}/rancher/v2.6/en/security/cis-scans) section.
diff --git a/content/rancher/v2.6/en/security/cis-scans/_index.md b/content/rancher/v2.6/en/security/cis-scans/_index.md
index 17aa5a5a3b1..88da06fb74e 100644
--- a/content/rancher/v2.6/en/security/cis-scans/_index.md
+++ b/content/rancher/v2.6/en/security/cis-scans/_index.md
@@ -1,6 +1,8 @@
---
title: CIS Scans
weight: 17
+aliases:
+  - /rancher/v2.6/en/security/security-scan/
---

Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. The CIS scans can run on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE.
diff --git a/content/rancher/v2.6/en/security/hardening-guides/_index.md b/content/rancher/v2.6/en/security/hardening-guides/_index.md
index e6a16445fc1..509aad121c5 100644
--- a/content/rancher/v2.6/en/security/hardening-guides/_index.md
+++ b/content/rancher/v2.6/en/security/hardening-guides/_index.md
@@ -2,6 +2,12 @@
title: Self-Assessment and Hardening Guides for Rancher v2.6
shortTitle: Rancher v2.6 Guides
weight: 1
+aliases:
+  - /rancher/v2.6/en/security/rancher-2.5/
+  - /rancher/v2.6/en/security/rancher-2.5/1.5-hardening-2.5/
+  - /rancher/v2.6/en/security/rancher-2.5/1.5-benchmark-2.5/
+  - /rancher/v2.6/en/security/rancher-2.5/1.6-hardening-2.5/
+  - /rancher/v2.6/en/security/rancher-2.5/1.6-benchmark-2.5/
---

Rancher v2.6 hardening guides are currently being updated. For the time being, please consult Rancher v2.5 [self-assessment and hardening guides]({{}}/rancher/v2.5/en/security/rancher-2.5) for more information.

From 558c5919ce5e0e431900e90228706ffcd3522dbb Mon Sep 17 00:00:00 2001
From: Guilherme Macedo
Date: Thu, 30 Dec 2021 23:20:56 +0100
Subject: [PATCH 12/33] Update _index.md

---
 .../v2.6/en/security/best-practices/_index.md | 46 -------------------
 1 file changed, 46 deletions(-)

diff --git a/content/rancher/v2.6/en/security/best-practices/_index.md b/content/rancher/v2.6/en/security/best-practices/_index.md
index 706d8ed71ec..4dc70b3d510 100644
--- a/content/rancher/v2.6/en/security/best-practices/_index.md
+++ b/content/rancher/v2.6/en/security/best-practices/_index.md
@@ -7,52 +7,6 @@ weight: 5
 
 ### Restricting cloud metadata API access
 
Cloud providers such as AWS, Azure, DigitalOcean or GCP often expose metadata services locally to instances.
By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS, DigitalOcean Kubernetes or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets. -Below is an example of a network policy for Kubernetes to restrict access to cloud metadata API, but allowing access to everything else `0.0.0.0/0`. - -``` -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: allow-all-egress-except-metadata-api -spec: - podSelector: {} - policyTypes: - - Egress - egress: - - to: - - ipBlock: - cidr: 0.0.0.0/0 - except: - - 169.254.169.254/32 -``` - -A similar global network policy (`GlobalNetworkPolicy`) can be created using Calico. - -``` -apiVersion: projectcalico.org/v3 -kind: GlobalNetworkPolicy -metadata: - name: allow-all-egress-except-metadata-api -spec: - selector: all() - egress: - - action: Deny - protocol: TCP - destination: - nets: - - 169.254.169.254/32 - - action: Allow - destination: - nets: - - 0.0.0.0/0 -``` - -Identical policies can be created with Canal and Weave. For the list of approved CNI (Container Network Interface) providers in Rancher, please consult this [list]({{}}/rancher/v2.6/en/faq/networking/cni-providers). - -The provided policies are just examples and must be tailored to your security needs and environment, since they may conflict with other network policies in place in your cluster. - -Removing unnecessary packages and using reduced base images is also a good defense in depth strategy to reduce the attack surface against possible abuses of the cloud metadata API. As well as following the principle of least privilege and using instances profiles and roles with the least amount of required permissions for the services to fully function. - It is advised to consult your cloud provider's security best practices for further recommendations and specific details on how to restrict access to cloud instance metadata API. Further references: MITRE ATT&CK knowledge base on - [Unsecured Credentials: Cloud Instance Metadata API](https://attack.mitre.org/techniques/T1552/005/). 
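Beyond in-cluster network policies, most providers also offer instance-level controls for the metadata service itself. For example, on AWS you can require IMDSv2 session tokens, which blocks the plain unauthenticated requests used in many SSRF-style attacks. This is a hedged sketch with an illustrative instance ID; check your provider's current CLI documentation before applying it:

```
# Require token-based (IMDSv2) access to the instance metadata service
# on a given EC2 instance; the instance ID below is a placeholder
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-tokens required \
  --http-endpoint enabled
```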
From 23381cc6683af4af7782c7cc9fc2d442d7f495dd Mon Sep 17 00:00:00 2001 From: Guilherme Macedo Date: Sat, 1 Jan 2022 14:42:23 +0100 Subject: [PATCH 13/33] Restore CIS docs to its original place Signed-off-by: Guilherme Macedo --- content/rancher/v2.6/en/cis-scans/_index.md | 290 ++++++++++++++++- .../v2.6/en/cis-scans/configuration/_index.md | 98 ++++++ .../en/cis-scans/custom-benchmark/_index.md | 84 +++++ .../rancher/v2.6/en/cis-scans/rbac/_index.md | 50 +++ .../v2.6/en/cis-scans/skipped-tests/_index.md | 54 ++++ .../v2.6/en/security/cis-scans/_index.md | 291 ------------------ .../v2.6/en/security/security-scan/_index.md | 6 + 7 files changed, 576 insertions(+), 297 deletions(-) create mode 100644 content/rancher/v2.6/en/cis-scans/configuration/_index.md create mode 100644 content/rancher/v2.6/en/cis-scans/custom-benchmark/_index.md create mode 100644 content/rancher/v2.6/en/cis-scans/rbac/_index.md create mode 100644 content/rancher/v2.6/en/cis-scans/skipped-tests/_index.md delete mode 100644 content/rancher/v2.6/en/security/cis-scans/_index.md create mode 100644 content/rancher/v2.6/en/security/security-scan/_index.md diff --git a/content/rancher/v2.6/en/cis-scans/_index.md b/content/rancher/v2.6/en/cis-scans/_index.md index a2e267612ce..17aa5a5a3b1 100644 --- a/content/rancher/v2.6/en/cis-scans/_index.md +++ b/content/rancher/v2.6/en/cis-scans/_index.md @@ -1,11 +1,289 @@ --- title: CIS Scans weight: 17 -aliases: - - /rancher/v2.6/en/cis-scans/configuration/ - - /rancher/v2.6/en/cis-scans/custom-benchmark/ - - /rancher/v2.6/en/cis-scans/rbac/ - - /rancher/v2.6/en/cis-scans/skipped-tests/ --- -The CIS scans documentation is now part of the [Security > CIS Scans]({{}}/rancher/v2.6/en/security/cis-scans) section. +Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. The CIS scans can run on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE. + +The `rancher-cis-benchmark` app leverages kube-bench, an open-source tool from Aqua Security, to check clusters for CIS Kubernetes Benchmark compliance. Also, to generate a cluster-wide report, the application utilizes Sonobuoy for report aggregation. 
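Once the chart is installed (see the how-to guides below), one way to confirm that the scanning machinery is in place is to look for the custom resource definitions it registers. A minimal sketch, assuming `kubectl` access to the cluster; the `cis.cattle.io` group matches the custom resources shown later in this section:

```
# The rancher-cis-benchmark chart registers its resources under cis.cattle.io
kubectl get crd | grep cis.cattle.io
# Expected entries include clusterscans, clusterscanprofiles,
# clusterscanbenchmarks and clusterscanreports
```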
+
+- [About the CIS Benchmark](#about-the-cis-benchmark)
+- [About the Generated Report](#about-the-generated-report)
+- [Test Profiles](#test-profiles)
+- [About Skipped and Not Applicable Tests](#about-skipped-and-not-applicable-tests)
+- [Roles-based Access Control](./rbac)
+- [Configuration](./configuration)
+- [How-to Guides](#how-to-guides)
+  - [Installing CIS Benchmark](#installing-cis-benchmark)
+  - [Uninstalling CIS Benchmark](#uninstalling-cis-benchmark)
+  - [Running a Scan](#running-a-scan)
+  - [Running a Scan Periodically on a Schedule](#running-a-scan-periodically-on-a-schedule)
+  - [Skipping Tests](#skipping-tests)
+  - [Viewing Reports](#viewing-reports)
+  - [Enabling Alerting for rancher-cis-benchmark](#enabling-alerting-for-rancher-cis-benchmark)
+  - [Configuring Alerts for a Periodic Scan on a Schedule](#configuring-alerts-for-a-periodic-scan-on-a-schedule)
+  - [Creating a Custom Benchmark Version for Running a Cluster Scan](#creating-a-custom-benchmark-version-for-running-a-cluster-scan)
+
+
+# About the CIS Benchmark
+
+The Center for Internet Security is a 501(c\)(3) non-profit organization, formed in October 2000, with a mission to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace". The organization is headquartered in East Greenbush, New York, with members including large corporations, government agencies, and academic institutions.
+
+CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team.
+
+The official Benchmark documents are available through the CIS website. The sign-up form to access the documents is
+here.
+
+# About the Generated Report
+
+Each scan generates a report that can be viewed in the Rancher UI and can be downloaded in CSV format.
+
+By default, the CIS Benchmark v1.6 is used.
+
+The Benchmark version is included in the generated report.
+
+The Benchmark provides recommendations of two types: Automated and Manual. Recommendations marked as Manual in the Benchmark are not included in the generated report.
+
+Some tests are designated as "Not Applicable." These tests will not be run on any CIS scan because of the way that Rancher provisions RKE clusters. For information on how test results can be audited, and why some tests are designated to be not applicable, refer to Rancher's self-assessment guide for the corresponding Kubernetes version.
+
+The report contains the following information:
+
+| Column in Report | Description |
+|------------------|-------------|
+| `id` | The ID number of the CIS Benchmark. |
+| `description` | The description of the CIS Benchmark test. |
+| `remediation` | What needs to be fixed in order to pass the test. |
+| `state` | Indicates if the test passed, failed, was skipped, or was not applicable. |
+| `node_type` | The node role, which affects which tests are run on the node. Master tests are run on controlplane nodes, etcd tests are run on etcd nodes, and node tests are run on the worker nodes. |
+| `audit` | This is the audit check that `kube-bench` runs for this test. |
+| `audit_config` | Any configuration applicable to the audit script. |
+| `test_info` | Test-related info as reported by `kube-bench`, if any. |
+| `commands` | Test-related commands as reported by `kube-bench`, if any. |
+| `config_commands` | Test-related configuration data as reported by `kube-bench`, if any. |
+| `actual_value` | The test's actual value, present if reported by `kube-bench`. |
+| `expected_result` | The test's expected result, present if reported by `kube-bench`. |
+
+Refer to the table in the cluster hardening guide for information on which versions of Kubernetes, the Benchmark, Rancher, and our cluster hardening guide correspond to each other. Also refer to the hardening guide for configuration files of CIS-compliant clusters and information on remediating failed tests.
+
+# Test Profiles
+
+The following profiles are available:
+
+- Generic CIS 1.5
+- Generic CIS 1.6
+- RKE permissive 1.5
+- RKE hardened 1.5
+- RKE permissive 1.6
+- RKE hardened 1.6
+- EKS
+- GKE
+- RKE2 permissive 1.5
+
+You also have the ability to customize a profile by saving a set of tests to skip.
+
+All profiles will have a set of not applicable tests that will be skipped during the CIS scan. These tests are not applicable based on how a RKE cluster manages Kubernetes.
+
+There are two types of RKE cluster scan profiles:
+
+- **Permissive:** This profile has a set of tests that will be skipped, as these tests will fail on a default RKE Kubernetes cluster. Besides the list of skipped tests, the profile will also not run the not applicable tests.
+- **Hardened:** This profile will not skip any tests, except for the non-applicable tests.
+
+The EKS and GKE cluster scan profiles are based on CIS Benchmark versions that are specific to those types of clusters.
+
+In order to pass the "Hardened" profile, you will need to follow the steps in the hardening guide and use the `cluster.yml` defined in the hardening guide to provision a hardened cluster.
+
+The default profile and the supported CIS benchmark version depend on the type of cluster that will be scanned:
+
+The `rancher-cis-benchmark` supports the CIS 1.6 Benchmark version.
+
+- For RKE Kubernetes clusters, the RKE Permissive 1.6 profile is the default.
+- EKS and GKE have their own CIS Benchmarks published by `kube-bench`. The corresponding test profiles are used by default for those clusters.
+- For RKE2 Kubernetes clusters, the RKE2 Permissive 1.5 profile is the default.
+- For cluster types other than RKE, RKE2, EKS and GKE, the Generic CIS 1.5 profile will be used by default.
+
+# About Skipped and Not Applicable Tests
+
+For a list of skipped and not applicable tests, refer to this page.
+
+For now, only user-defined skipped tests are marked as skipped in the generated report.
+
+Any skipped tests that are defined as being skipped by one of the default profiles are marked as not applicable.
+
+# Roles-based Access Control
+
+For information about permissions, refer to this page.
+
+# Configuration
+
+For more information about configuring the custom resources for the scans, profiles, and benchmark versions, refer to this page.
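The how-to guides below drive everything through the Rancher UI, but the same objects can also be inspected from the command line. A hedged example, assuming the chart is installed and your kubeconfig targets the scanned cluster:

```
# List the benchmark versions and scan profiles shipped with the chart
kubectl get clusterscanbenchmarks
kubectl get clusterscanprofiles

# List scans that have been created, and the reports they produced
kubectl get clusterscans
kubectl get clusterscanreports
```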
+
+# How-to Guides
+
+- [Installing CIS Benchmark](#installing-cis-benchmark)
+- [Uninstalling CIS Benchmark](#uninstalling-cis-benchmark)
+- [Running a Scan](#running-a-scan)
+- [Running a Scan Periodically on a Schedule](#running-a-scan-periodically-on-a-schedule)
+- [Skipping Tests](#skipping-tests)
+- [Viewing Reports](#viewing-reports)
+- [Enabling Alerting for rancher-cis-benchmark](#enabling-alerting-for-rancher-cis-benchmark)
+- [Configuring Alerts for a Periodic Scan on a Schedule](#configuring-alerts-for-a-periodic-scan-on-a-schedule)
+- [Creating a Custom Benchmark Version for Running a Cluster Scan](#creating-a-custom-benchmark-version-for-running-a-cluster-scan)
+
+### Installing CIS Benchmark
+
+1. In the upper left corner, click **☰ > Cluster Management**.
+1. On the **Clusters** page, go to the cluster where you want to install CIS Benchmark and click **Explore**.
+1. In the left navigation bar, click **Apps & Marketplace > Charts**.
+1. Click **CIS Benchmark**.
+1. Click **Install**.
+
+**Result:** The CIS scan application is deployed on the Kubernetes cluster.
+
+### Uninstalling CIS Benchmark
+
+1. From the **Cluster Dashboard**, go to the left navigation bar and click **Apps & Marketplace > Installed Apps**.
+1. Go to the `cis-operator-system` namespace and check the boxes next to `rancher-cis-benchmark-crd` and `rancher-cis-benchmark`.
+1. Click **Delete** and confirm **Delete**.
+
+**Result:** The `rancher-cis-benchmark` application is uninstalled.
+
+### Running a Scan
+
+When a ClusterScan custom resource is created, it launches a new CIS scan on the cluster for the chosen ClusterScanProfile.
+
+Note: There is currently a limitation of running only one CIS scan at a time for a cluster. If you create multiple ClusterScan custom resources, they will be run one after the other by the operator, and until one scan finishes, the rest of the ClusterScan custom resources will be in the "Pending" state.
+
+To run a scan,
+
+1. In the upper left corner, click **☰ > Cluster Management**.
+1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
+1. Click **CIS Benchmark > Scan**.
+1. Click **Create**.
+1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
+1. Click **Create**.
+
+**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears.
+
+### Running a Scan Periodically on a Schedule
+
+To run a ClusterScan on a schedule,
+
+1. In the upper left corner, click **☰ > Cluster Management**.
+1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
+1. Click **CIS Benchmark > Scan**.
+1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
+1. Choose the option **Run scan on a schedule**.
+1. Enter a valid cron schedule expression in the field **Schedule**.
+1. Choose a **Retention** count, which indicates the number of reports maintained for this recurring scan. By default this count is 3. When this retention limit is reached, older reports will get purged.
+1.
Click **Create**. + +**Result:** The scan runs and reschedules to run according to the cron schedule provided. The **Next Scan** value indicates the next time this scan will run again. + +A report is generated with the scan results every time the scan runs. To see the latest results, click the name of the scan that appears. + +You can also see the previous reports by choosing the report from the **Reports** dropdown on the scan detail page. + +### Skipping Tests + +CIS scans can be run using test profiles with user-defined skips. + +To skip tests, you will create a custom CIS scan profile. A profile contains the configuration for the CIS scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark. + +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. +1. Click **CIS Benchmark > Profile**. +1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To make a new profile based on an existing profile, go to the existing profile and click **⋮ Clone**. If you are filling out the form, add the tests to skip using the test IDs, using the relevant CIS Benchmark as a reference. If you are creating the new test profile as YAML, you will add the IDs of the tests to skip in the `skipTests` directive. You will also give the profile a name: + + ```yaml + apiVersion: cis.cattle.io/v1 + kind: ClusterScanProfile + metadata: + annotations: + meta.helm.sh/release-name: clusterscan-operator + meta.helm.sh/release-namespace: cis-operator-system + labels: + app.kubernetes.io/managed-by: Helm + name: "" + spec: + benchmarkVersion: cis-1.5 + skipTests: + - "1.1.20" + - "1.1.21" + ``` +1. Click **Create**. + +**Result:** A new CIS scan profile is created. + +When you [run a scan](#running-a-scan) that uses this profile, the defined tests will be skipped during the scan. The skipped tests will be marked in the generated report as `Skip`. + +### Viewing Reports + +To view the generated CIS scan reports, + +1. In the upper left corner, click **☰ > Cluster Management**. +1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. +1. Click **CIS Benchmark > Scan**. +1. The **Scans** page will show the generated reports. To see a detailed report, go to a scan report and click the name. + +One can download the report from the Scans list or from the scan detail page. + +### Enabling Alerting for rancher-cis-benchmark + +Alerts can be configured to be sent out for a scan that runs on a schedule. + +> **Prerequisite:** +> +> Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration) +> +> While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration/receiver/#example-route-config-for-cis-scan-alerts) + +While installing or upgrading the `rancher-cis-benchmark` Helm chart, set the following flag to `true` in the `values.yaml`: + +```yaml +alerts: + enabled: true +``` + +### Configuring Alerts for a Periodic Scan on a Schedule + +It is possible to run a ClusterScan on a schedule. 
+
+A scheduled scan can also specify if you should receive alerts when the scan completes.
+
+Alerts are supported only for a scan that runs on a schedule.
+
+The CIS Benchmark application supports two types of alerts:
+
+- Alert on scan completion: This alert is sent out when the scan run finishes. The alert includes details such as the ClusterScan's name and the ClusterScanProfile name.
+- Alert on scan failure: This alert is sent out if there are some test failures in the scan run or if the scan is in a `Fail` state.
+
+> **Prerequisite:**
+>
+> Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration)
+>
+> While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration/receiver/#example-route-config-for-cis-scan-alerts)
+
+To configure alerts for a scan that runs on a schedule,
+
+1. [Enable alerts on the `rancher-cis-benchmark` application.](#enabling-alerting-for-rancher-cis-benchmark)
+1. In the upper left corner, click **☰ > Cluster Management**.
+1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
+1. Click **CIS Benchmark > Scan**.
+1. Click **Create**.
+1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
+1. Choose the option **Run scan on a schedule**.
+1. Enter a valid [cron schedule expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) in the field **Schedule**.
+1. Check the boxes next to the Alert types under **Alerting**.
+1. Optional: Choose a **Retention Count**, which indicates the number of reports maintained for this recurring scan. By default this count is 3. When this retention limit is reached, older reports will get purged.
+1. Click **Create**.
+
+**Result:** The scan runs and reschedules to run according to the cron schedule provided. Alerts are sent out when the scan finishes if routes and a receiver are configured under the `rancher-monitoring` application.
+
+A report is generated with the scan results every time the scan runs. To see the latest results, click the name of the scan that appears.
+
+### Creating a Custom Benchmark Version for Running a Cluster Scan
+
+There could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them.
+
+It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application.
+
+For details, see [this page.](./custom-benchmark)
diff --git a/content/rancher/v2.6/en/cis-scans/configuration/_index.md b/content/rancher/v2.6/en/cis-scans/configuration/_index.md
new file mode 100644
index 00000000000..26df1932e25
--- /dev/null
+++ b/content/rancher/v2.6/en/cis-scans/configuration/_index.md
@@ -0,0 +1,98 @@
+---
+title: Configuration
+weight: 3
+---
+
+This configuration reference is intended to help you manage the custom resources created by the `rancher-cis-benchmark` application. These resources are used for performing CIS scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization.
+
+To configure the custom resources,
+
+1. In the upper left corner, click **☰ > Cluster Management**.
+1. On the **Clusters** page, go to the cluster where you want to configure CIS scans and click **Explore**.
+1. In the left navigation bar, click **CIS Benchmark**.
+
+### Scans
+
+A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed.
+
+When configuring a scan, you need to define the name of the scan profile that will be used with the `scanProfileName` directive.
+
+An example ClusterScan custom resource is below:
+
+```yaml
+apiVersion: cis.cattle.io/v1
+kind: ClusterScan
+metadata:
+  name: rke-cis
+spec:
+  scanProfileName: rke-profile-hardened
+```
+
+### Profiles
+
+A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark.
+
+> By default, a few ClusterScanProfiles are installed as part of the `rancher-cis-benchmark` chart. If a user edits these default benchmarks or profiles, the next chart update will reset them back. So it is advisable for users to not edit the default ClusterScanProfiles.
+
+Users can clone the ClusterScanProfiles to create custom profiles.
+
+Skipped tests are listed under the `skipTests` directive.
+
+When you create a new profile, you will also need to give it a name.
+
+An example `ClusterScanProfile` is below:
+
+```yaml
+apiVersion: cis.cattle.io/v1
+kind: ClusterScanProfile
+metadata:
+  annotations:
+    meta.helm.sh/release-name: clusterscan-operator
+    meta.helm.sh/release-namespace: cis-operator-system
+  labels:
+    app.kubernetes.io/managed-by: Helm
+  name: ""
+spec:
+  benchmarkVersion: cis-1.5
+  skipTests:
+    - "1.1.20"
+    - "1.1.21"
+```
+
+### Benchmark Versions
+
+A benchmark version is the name of the benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark.
+
+A `ClusterScanBenchmark` defines the CIS `BenchmarkVersion` name and test configurations. The `BenchmarkVersion` name is a parameter provided to the `kube-bench` tool.
+
+By default, a few `BenchmarkVersion` names and test configurations are packaged as part of the CIS scan application. When this feature is enabled, these default BenchmarkVersions will be automatically installed and available for users to create a ClusterScanProfile.
+
+> If the default BenchmarkVersions are edited, the next chart update will reset them back. Therefore we don't recommend editing the default ClusterScanBenchmarks.
+
+A ClusterScanBenchmark consists of the fields:
+
+- `ClusterProvider`: This is the cluster provider name for which this benchmark is applicable. For example: RKE, EKS, GKE, etc. Leave it empty if this benchmark can be run on any cluster type.
+- `MinKubernetesVersion`: Specifies the cluster's minimum Kubernetes version necessary to run this benchmark. Leave it empty if there is no dependency on a particular Kubernetes version.
+- `MaxKubernetesVersion`: Specifies the cluster's maximum Kubernetes version necessary to run this benchmark. Leave it empty if there is no dependency on a particular Kubernetes version.
+
+An example `ClusterScanBenchmark` is below:
+
+```yaml
+apiVersion: cis.cattle.io/v1
+kind: ClusterScanBenchmark
+metadata:
+  annotations:
+    meta.helm.sh/release-name: clusterscan-operator
+    meta.helm.sh/release-namespace: cis-operator-system
+  creationTimestamp: "2020-08-28T18:18:07Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/managed-by: Helm
+  name: cis-1.5
+  resourceVersion: "203878"
+  selfLink: /apis/cis.cattle.io/v1/clusterscanbenchmarks/cis-1.5
+  uid: 309e543e-9102-4091-be91-08d7af7fb7a7
+spec:
+  clusterProvider: ""
+  minKubernetesVersion: 1.15.0
+```
\ No newline at end of file
diff --git a/content/rancher/v2.6/en/cis-scans/custom-benchmark/_index.md b/content/rancher/v2.6/en/cis-scans/custom-benchmark/_index.md
new file mode 100644
index 00000000000..36f70ccaa55
--- /dev/null
+++ b/content/rancher/v2.6/en/cis-scans/custom-benchmark/_index.md
@@ -0,0 +1,84 @@
+---
+title: Creating a Custom Benchmark Version for Running a Cluster Scan
+weight: 4
+---
+
+Each Benchmark Version defines a set of test configuration files that define the CIS tests to be run by the kube-bench tool.
+The `rancher-cis-benchmark` application installs a few default Benchmark Versions, which are listed in the CIS Benchmark application menu.
+
+But there could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them.
+
+It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application.
+
+When a cluster scan is run, you need to select a Profile which points to a specific Benchmark Version.
+
+Follow all the steps below to add a custom Benchmark Version and run a scan using it.
+
+1. [Prepare the Custom Benchmark Version ConfigMap](#1-prepare-the-custom-benchmark-version-configmap)
+2. [Add a Custom Benchmark Version to a Cluster](#2-add-a-custom-benchmark-version-to-a-cluster)
+3. [Create a New Profile for the Custom Benchmark Version](#3-create-a-new-profile-for-the-custom-benchmark-version)
+4. [Run a Scan Using the Custom Benchmark Version](#4-run-a-scan-using-the-custom-benchmark-version)
+
+### 1. Prepare the Custom Benchmark Version ConfigMap
+
+To create a custom benchmark version, first you need to create a ConfigMap containing the benchmark version's config files and upload it to the Kubernetes cluster where you want to run the scan.
+
+To prepare a custom benchmark version ConfigMap, suppose we want to add a custom Benchmark Version named `foo`.
+
+1. Create a directory named `foo` and inside this directory, place all the config YAML files that the kube-bench tool looks for. For example, here are the config YAML files for a Generic CIS 1.5 Benchmark Version: https://github.com/aquasecurity/kube-bench/tree/master/cfg/cis-1.5
+1. Place the complete `config.yaml` file, which includes all the components that should be tested.
+1. Add the Benchmark version name to the `target_mapping` section of the `config.yaml`:
+
+    ```yaml
+    target_mapping:
+      "foo":
+        - "master"
+        - "node"
+        - "controlplane"
+        - "etcd"
+        - "policies"
+    ```
+1. Upload this directory to your Kubernetes cluster by creating a ConfigMap:
+
+    ```
+    kubectl create configmap -n <namespace> foo --from-file=<path to the foo directory>
+    ```
+
+### 2. Add a Custom Benchmark Version to a Cluster
+
+1. In the upper left corner, click **☰ > Cluster Management**.
+1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**.
+1. In the left navigation bar, click **CIS Benchmark > Benchmark Version**.
+1. Click **Create**.
+1. Enter the **Name** and a description for your custom benchmark version.
+1. Choose the cluster provider that your benchmark version applies to.
+1. Choose the ConfigMap you have uploaded from the dropdown.
+1. Add the minimum and maximum Kubernetes version limits applicable, if any.
+1. Click **Create**.
+
+### 3. Create a New Profile for the Custom Benchmark Version
+
+To run a scan using your custom benchmark version, you need to add a new Profile pointing to this benchmark version.
+
+1. In the upper left corner, click **☰ > Cluster Management**.
+1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**.
+1. In the left navigation bar, click **CIS Benchmark > Profile**.
+1. Click **Create**.
+1. Provide a **Name** and description. In this example, we name it `foo-profile`.
+1. Choose the Benchmark Version from the dropdown.
+1. Click **Create**.
+
+### 4. Run a Scan Using the Custom Benchmark Version
+
+Once the Profile pointing to your custom benchmark version `foo` has been created, you can create a new Scan to run the custom test configs in the Benchmark Version.
+
+To run a scan,
+
+1. In the upper left corner, click **☰ > Cluster Management**.
+1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**.
+1. In the left navigation bar, click **CIS Benchmark > Scan**.
+1. Click **Create**.
+1. Choose the new cluster scan profile.
+1. Click **Create**.
+
+**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears.
diff --git a/content/rancher/v2.6/en/cis-scans/rbac/_index.md b/content/rancher/v2.6/en/cis-scans/rbac/_index.md
new file mode 100644
index 00000000000..ad2b47bff2c
--- /dev/null
+++ b/content/rancher/v2.6/en/cis-scans/rbac/_index.md
@@ -0,0 +1,50 @@
+---
+title: Roles-based Access Control
+shortTitle: RBAC
+weight: 3
+---
+
+This section describes the permissions required to use the rancher-cis-benchmark App.
+
+The rancher-cis-benchmark is a cluster-admin only feature by default.
+
+However, the `rancher-cis-benchmark` chart installs these two default `ClusterRoles`:
+
+- cis-admin
+- cis-view
+
+In Rancher, only cluster owners and global administrators have `cis-admin` access by default.
+
+Note: If you were using the `cis-edit` role added in the Rancher v2.5 setup, it has been removed since
+Rancher v2.5.2 because it is essentially the same as `cis-admin`. If you happened to create any ClusterRoleBindings
+for `cis-edit`, please update them to use the `cis-admin` ClusterRole instead.
+
+# Cluster-Admin Access
+
+Rancher CIS Scans is a cluster-admin only feature by default.
+This means only the Rancher global admins and the cluster’s cluster-owner can:
+
+- Install/Uninstall the rancher-cis-benchmark App
+- See the navigation links for CIS Benchmark CRDs - ClusterScanBenchmarks, ClusterScanProfiles, ClusterScans
+- List the default ClusterScanBenchmarks and ClusterScanProfiles
+- Create/Edit/Delete new ClusterScanProfiles
+- Create/Edit/Delete a new ClusterScan to run the CIS scan on the cluster
+- View and Download the ClusterScanReport created after the ClusterScan is complete
+
+
+# Summary of Default Permissions for Kubernetes Default Roles
+
+The rancher-cis-benchmark chart creates the following `ClusterRoles` and adds the CIS Benchmark CRD access to the corresponding default K8s `ClusterRoles`:
+
+| ClusterRole created by chart | Default K8s ClusterRole | Permissions given with Role
+| ------------------------------| ---------------------------| ---------------------------|
+| `cis-admin` | `admin`| Ability to CRUD clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CRs
+| `cis-view` | `view` | Ability to List(R) clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CRs
+
+
+By default, only the cluster-owner role has the ability to manage and use the `rancher-cis-benchmark` feature.
+
+The other Rancher roles (cluster-member, project-owner, project-member) do not have any default permissions to manage and use rancher-cis-benchmark resources.
+
+If a cluster-owner wants to delegate access to other users, they can do so by manually creating ClusterRoleBindings between these users and the above CIS ClusterRoles.
+There is no automatic role aggregation supported for the `rancher-cis-benchmark` ClusterRoles.
diff --git a/content/rancher/v2.6/en/cis-scans/skipped-tests/_index.md b/content/rancher/v2.6/en/cis-scans/skipped-tests/_index.md
new file mode 100644
index 00000000000..f2b125c0262
--- /dev/null
+++ b/content/rancher/v2.6/en/cis-scans/skipped-tests/_index.md
@@ -0,0 +1,54 @@
+---
+title: Skipped and Not Applicable Tests
+weight: 3
+---
+
+This section lists the tests that are skipped in the permissive test profile for RKE.
+
+> All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile.
+
+# CIS Benchmark v1.5
+
+### CIS Benchmark v1.5 Skipped Tests
+
+| Number | Description | Reason for Skipping |
+| ---------- | ------------- | --------- |
+| 1.1.12 | Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) | A system service account is required for etcd data directory ownership. Refer to Rancher's hardening guide for more details on how to configure this ownership. |
+| 1.2.6 | Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
+| 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
+| 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. |
+| 1.2.34 | Ensure that encryption providers are appropriately configured (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. |
+| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Automated) | System level configurations are required before provisioning the cluster in order for this argument to be set to true. |
+| 4.2.10 | Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
+| 5.1.5 | Ensure that default service accounts are not actively used. (Automated) | Kubernetes provides default service accounts to be used. |
+| 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
+| 5.2.3 | Minimize the admission of containers wishing to share the host IPC namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
+| 5.2.4 | Minimize the admission of containers wishing to share the host network namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
+| 5.2.5 | Minimize the admission of containers with allowPrivilegeEscalation (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
+| 5.3.2 | Ensure that all Namespaces have Network Policies defined (Automated) | Enabling Network Policies can prevent certain applications from communicating with each other. |
+| 5.6.4 | The default namespace should not be used (Automated) | Kubernetes provides a default namespace. |
+
+### CIS Benchmark v1.5 Not Applicable Tests
+
+| Number | Description | Reason for being not applicable |
+| ---------- | ------------- | --------- |
+| 1.1.1 | Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. |
+| 1.1.2 | Ensure that the API server pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. |
+| 1.1.3 | Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
+| 1.1.4 | Ensure that the controller manager pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
+| 1.1.5 | Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
+| 1.1.6 | Ensure that the scheduler pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
+| 1.1.7 | Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
+| 1.1.8 | Ensure that the etcd pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
+| 1.1.13 | Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. |
+| 1.1.14 | Ensure that the admin.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. |
+| 1.1.15 | Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
+| 1.1.16 | Ensure that the scheduler.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
+| 1.1.17 | Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
+| 1.1.18 | Ensure that the controller-manager.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
+| 1.3.6 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handle certificate rotation directly through RKE. |
+| 4.1.1 | Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
+| 4.1.2 | Ensure that the kubelet service file ownership is set to root:root (Automated) | Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
+| 4.1.9 | Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
+| 4.1.10 | Ensure that the kubelet configuration file ownership is set to root:root (Automated) | Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
| +| 4.2.12 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handles certificate rotation directly through RKE. | \ No newline at end of file diff --git a/content/rancher/v2.6/en/security/cis-scans/_index.md b/content/rancher/v2.6/en/security/cis-scans/_index.md deleted file mode 100644 index 88da06fb74e..00000000000 --- a/content/rancher/v2.6/en/security/cis-scans/_index.md +++ /dev/null @@ -1,291 +0,0 @@ ---- -title: CIS Scans -weight: 17 -aliases: - - /rancher/v2.6/en/security/security-scan/ ---- - -Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. The CIS scans can run on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE. - -The `rancher-cis-benchmark` app leverages kube-bench, an open-source tool from Aqua Security, to check clusters for CIS Kubernetes Benchmark compliance. Also, to generate a cluster-wide report, the application utilizes Sonobuoy for report aggregation. - -- [About the CIS Benchmark](#about-the-cis-benchmark) -- [About the Generated Report](#about-the-generated-report) -- [Test Profiles](#test-profiles) -- [About Skipped and Not Applicable Tests](#about-skipped-and-not-applicable-tests) -- [Roles-based Access Control](./rbac) -- [Configuration](./configuration) -- [How-to Guides](#how-to-guides) - - [Installing CIS Benchmark](#installing-cis-benchmark) - - [Uninstalling CIS Benchmark](#uninstalling-cis-benchmark) - - [Running a Scan](#running-a-scan) - - [Running a Scan Periodically on a Schedule](#running-a-scan-periodically-on-a-schedule) - - [Skipping Tests](#skipping-tests) - - [Viewing Reports](#viewing-reports) - - [Enabling Alerting for rancher-cis-benchmark](#enabling-alerting-for-rancher-cis-benchmark) - - [Configuring Alerts for a Periodic Scan on a Schedule](#configuring-alerts-for-a-periodic-scan-on-a-schedule) - - [Creating a Custom Benchmark Version for Running a Cluster Scan](#creating-a-custom-benchmark-version-for-running-a-cluster-scan) - - -# About the CIS Benchmark - -The Center for Internet Security is a 501(c\)(3) non-profit organization, formed in October 2000, with a mission to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace". The organization is headquartered in East Greenbush, New York, with members including large corporations, government agencies, and academic institutions. - -CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team. - -The official Benchmark documents are available through the CIS website. The sign-up form to access the documents is -here. - -# About the Generated Report - -Each scan generates a report can be viewed in the Rancher UI and can be downloaded in CSV format. - -By default, the CIS Benchmark v1.6 is used. - -The Benchmark version is included in the generated report. - -The Benchmark provides recommendations of two types: Automated and Manual. Recommendations marked as Manual in the Benchmark are not included in the generated report. - -Some tests are designated as "Not Applicable." 
These tests will not be run on any CIS scan because of the way that Rancher provisions RKE clusters. For information on how test results can be audited, and why some tests are designated to be not applicable, refer to Rancher's self-assessment guide for the corresponding Kubernetes version. - -The report contains the following information: - -| Column in Report | Description | -|------------------|-------------| -| `id` | The ID number of the CIS Benchmark. | -| `description` | The description of the CIS Benchmark test. | -| `remediation` | What needs to be fixed in order to pass the test. | -| `state` | Indicates if the test passed, failed, was skipped, or was not applicable. | -| `node_type` | The node role, which affects which tests are run on the node. Master tests are run on controlplane nodes, etcd tests are run on etcd nodes, and node tests are run on the worker nodes. | -| `audit` | This is the audit check that `kube-bench` runs for this test. | -| `audit_config` | Any configuration applicable to the audit script. | -| `test_info` | Test-related info as reported by `kube-bench`, if any. | -| `commands` | Test-related commands as reported by `kube-bench`, if any. | -| `config_commands` | Test-related configuration data as reported by `kube-bench`, if any. | -| `actual_value` | The test's actual value, present if reported by `kube-bench`. | -| `expected_result` | The test's expected result, present if reported by `kube-bench`. | - -Refer to the table in the cluster hardening guide for information on which versions of Kubernetes, the Benchmark, Rancher, and our cluster hardening guide correspond to each other. Also refer to the hardening guide for configuration files of CIS-compliant clusters and information on remediating failed tests. - -# Test Profiles - -The following profiles are available: - -- Generic CIS 1.5 -- Generic CIS 1.6 -- RKE permissive 1.5 -- RKE hardened 1.5 -- RKE permissive 1.6 -- RKE hardened 1.6 -- EKS -- GKE -- RKE2 permissive 1.5 -- RKE2 permissive 1.5 - -You also have the ability to customize a profile by saving a set of tests to skip. - -All profiles will have a set of not applicable tests that will be skipped during the CIS scan. These tests are not applicable based on how a RKE cluster manages Kubernetes. - -There are two types of RKE cluster scan profiles: - -- **Permissive:** This profile has a set of tests that have been will be skipped as these tests will fail on a default RKE Kubernetes cluster. Besides the list of skipped tests, the profile will also not run the not applicable tests. -- **Hardened:** This profile will not skip any tests, except for the non-applicable tests. - -The EKS and GKE cluster scan profiles are based on CIS Benchmark versions that are specific to those types of clusters. - -In order to pass the "Hardened" profile, you will need to follow the steps on the hardening guide and use the `cluster.yml` defined in the hardening guide to provision a hardened cluster. - -The default profile and the supported CIS benchmark version depends on the type of cluster that will be scanned: - -The `rancher-cis-benchmark` supports the CIS 1.6 Benchmark version. - -- For RKE Kubernetes clusters, the RKE Permissive 1.6 profile is the default. -- EKS and GKE have their own CIS Benchmarks published by `kube-bench`. The corresponding test profiles are used by default for those clusters. -- For RKE2 Kubernetes clusters, the RKE2 Permissive 1.5 profile is the default. 
-- For cluster types other than RKE, RKE2, EKS and GKE, the Generic CIS 1.5 profile will be used by default. - -# About Skipped and Not Applicable Tests - -For a list of skipped and not applicable tests, refer to this page. - -For now, only user-defined skipped tests are marked as skipped in the generated report. - -Any skipped tests that are defined as being skipped by one of the default profiles are marked as not applicable. - -# Roles-based Access Control - -For information about permissions, refer to this page. - -# Configuration - -For more information about configuring the custom resources for the scans, profiles, and benchmark versions, refer to this page. - -# How-to Guides - -- [Installing rancher-cis-benchmark](#installing-rancher-cis-benchmark) -- [Uninstalling rancher-cis-benchmark](#uninstalling-rancher-cis-benchmark) -- [Running a Scan](#running-a-scan) -- [Running a Scan Periodically on a Schedule](#running-a-scan-periodically-on-a-schedule) -- [Skipping Tests](#skipping-tests) -- [Viewing Reports](#viewing-reports) -- [Enabling Alerting for rancher-cis-benchmark](#enabling-alerting-for-rancher-cis-benchmark) -- [Configuring Alerts for a Periodic Scan on a Schedule](#configuring-alerts-for-a-periodic-scan-on-a-schedule) -- [Creating a Custom Benchmark Version for Running a Cluster Scan](#creating-a-custom-benchmark-version-for-running-a-cluster-scan) -### Installing CIS Benchmark - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to install CIS Benchmark and click **Explore**. -1. In the left navigation bar, click **Apps & Marketplace > Charts**. -1. Click **CIS Benchmark** -1. Click **Install**. - -**Result:** The CIS scan application is deployed on the Kubernetes cluster. - -### Uninstalling CIS Benchmark - -1. From the **Cluster Dashboard,** go to the left navigation bar and click **Apps & Marketplace > Installed Apps**. -1. Go to the `cis-operator-system` namespace and check the boxes next to `rancher-cis-benchmark-crd` and `rancher-cis-benchmark`. -1. Click **Delete** and confirm **Delete**. - -**Result:** The `rancher-cis-benchmark` application is uninstalled. - -### Running a Scan - -When a ClusterScan custom resource is created, it launches a new CIS scan on the cluster for the chosen ClusterScanProfile. - -Note: There is currently a limitation of running only one CIS scan at a time for a cluster. If you create multiple ClusterScan custom resources, they will be run one after the other by the operator, and until one scan finishes, the rest of the ClusterScan custom resources will be in the "Pending" state. - -To run a scan, - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Scan**. -1. Click **Create**. -1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on. -1. Click **Create**. - -**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears. -### Running a Scan Periodically on a Schedule - -To run a ClusterScan on a schedule, - -1. In the upper left corner, click **☰ > Cluster Management**. -1. 
On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Scan**. -1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on. -1. Choose the option **Run scan on a schedule**. -1. Enter a valid cron schedule expression in the field **Schedule**. -1. Choose a **Retention** count, which indicates the number of reports maintained for this recurring scan. By default this count is 3. When this retention limit is reached, older reports will get purged. -1. Click **Create**. - -**Result:** The scan runs and reschedules to run according to the cron schedule provided. The **Next Scan** value indicates the next time this scan will run again. - -A report is generated with the scan results every time the scan runs. To see the latest results, click the name of the scan that appears. - -You can also see the previous reports by choosing the report from the **Reports** dropdown on the scan detail page. - -### Skipping Tests - -CIS scans can be run using test profiles with user-defined skips. - -To skip tests, you will create a custom CIS scan profile. A profile contains the configuration for the CIS scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark. - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Profile**. -1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To make a new profile based on an existing profile, go to the existing profile and click **⋮ Clone**. If you are filling out the form, add the tests to skip using the test IDs, using the relevant CIS Benchmark as a reference. If you are creating the new test profile as YAML, you will add the IDs of the tests to skip in the `skipTests` directive. You will also give the profile a name: - - ```yaml - apiVersion: cis.cattle.io/v1 - kind: ClusterScanProfile - metadata: - annotations: - meta.helm.sh/release-name: clusterscan-operator - meta.helm.sh/release-namespace: cis-operator-system - labels: - app.kubernetes.io/managed-by: Helm - name: "" - spec: - benchmarkVersion: cis-1.5 - skipTests: - - "1.1.20" - - "1.1.21" - ``` -1. Click **Create**. - -**Result:** A new CIS scan profile is created. - -When you [run a scan](#running-a-scan) that uses this profile, the defined tests will be skipped during the scan. The skipped tests will be marked in the generated report as `Skip`. - -### Viewing Reports - -To view the generated CIS scan reports, - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Scan**. -1. The **Scans** page will show the generated reports. To see a detailed report, go to a scan report and click the name. - -One can download the report from the Scans list or from the scan detail page. - -### Enabling Alerting for rancher-cis-benchmark - -Alerts can be configured to be sent out for a scan that runs on a schedule. 
- -> **Prerequisite:** -> -> Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration) -> -> While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration/receiver/#example-route-config-for-cis-scan-alerts) - -While installing or upgrading the `rancher-cis-benchmark` Helm chart, set the following flag to `true` in the `values.yaml`: - -```yaml -alerts: - enabled: true -``` - -### Configuring Alerts for a Periodic Scan on a Schedule - -It is possible to run a ClusterScan on a schedule. - -A scheduled scan can also specify if you should receive alerts when the scan completes. - -Alerts are supported only for a scan that runs on a schedule. - -The CIS Benchmark application supports two types of alerts: - -- Alert on scan completion: This alert is sent out when the scan run finishes. The alert includes details including the ClusterScan's name and the ClusterScanProfile name. -- Alert on scan failure: This alert is sent out if there are some test failures in the scan run or if the scan is in a `Fail` state. - -> **Prerequisite:** -> -> Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration) -> -> While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.]({{}}/rancher/v2.6/en/monitoring-alerting/configuration/receiver/#example-route-config-for-cis-scan-alerts) - -To configure alerts for a scan that runs on a schedule, - -1. Please enable alerts on the `rancher-cis-benchmark` application (#enabling-alerting-for-rancher-cis-benchmark) -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**. -1. Click **CIS Benchmark > Scan**. -1. Click **Create**. -1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on. -1. Choose the option **Run scan on a schedule**. -1. Enter a valid [cron schedule expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) in the field **Schedule**. -1. Check the boxes next to the Alert types under **Alerting**. -1. Optional: Choose a **Retention Count**, which indicates the number of reports maintained for this recurring scan. By default this count is 3. When this retention limit is reached, older reports will get purged. -1. Click **Create**. - -**Result:** The scan runs and reschedules to run according to the cron schedule provided. Alerts are sent out when the scan finishes if routes and receiver are configured under `rancher-monitoring` application. - -A report is generated with the scan results every time the scan runs. To see the latest results, click the name of the scan that appears. 
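
For reference, everything configured through the UI above maps onto a single `ClusterScan` custom resource. The following is a minimal sketch, assuming the `cis.cattle.io/v1` ClusterScan exposes the cis-operator's `scheduledScanConfig` and `scanAlertRule` fields; the profile name is a placeholder:

```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
  name: daily-cis-scan
spec:
  scanProfileName: rke-profile-permissive   # placeholder profile name
  scheduledScanConfig:
    cronSchedule: "0 0 * * *"               # run every day at midnight
    retentionCount: 3                       # keep the last three reports
    scanAlertRule:
      alertOnComplete: true                 # alert when a scan run finishes
      alertOnFailure: true                  # alert when the scan has test failures
```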
- -### Creating a Custom Benchmark Version for Running a Cluster Scan - -There could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them. - -It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application. - -For details, see [this page.](./custom-benchmark) diff --git a/content/rancher/v2.6/en/security/security-scan/_index.md b/content/rancher/v2.6/en/security/security-scan/_index.md new file mode 100644 index 00000000000..c5a3cdb21d4 --- /dev/null +++ b/content/rancher/v2.6/en/security/security-scan/_index.md @@ -0,0 +1,6 @@ +--- +title: Security Scans +weight: 299 +--- + +The documentation about CIS security scans has moved [here.]({{}}/rancher/v2.6/en/cis-scans) From 80ffb7f8d880af1b04975116d7917dcee94da4bc Mon Sep 17 00:00:00 2001 From: Guilherme Macedo Date: Sat, 1 Jan 2022 14:46:33 +0100 Subject: [PATCH 14/33] Restore CIS docs to its original place Signed-off-by: Guilherme Macedo --- content/rancher/v2.6/en/security/_index.md | 2 +- .../cis-scans/configuration/_index.md | 98 ------------------- .../cis-scans/custom-benchmark/_index.md | 84 ---------------- .../v2.6/en/security/cis-scans/rbac/_index.md | 50 ---------- .../cis-scans/skipped-tests/_index.md | 54 ---------- 5 files changed, 1 insertion(+), 287 deletions(-) delete mode 100644 content/rancher/v2.6/en/security/cis-scans/configuration/_index.md delete mode 100644 content/rancher/v2.6/en/security/cis-scans/custom-benchmark/_index.md delete mode 100644 content/rancher/v2.6/en/security/cis-scans/rbac/_index.md delete mode 100644 content/rancher/v2.6/en/security/cis-scans/skipped-tests/_index.md diff --git a/content/rancher/v2.6/en/security/_index.md b/content/rancher/v2.6/en/security/_index.md index f936f7d9461..1f121f11ed2 100644 --- a/content/rancher/v2.6/en/security/_index.md +++ b/content/rancher/v2.6/en/security/_index.md @@ -46,7 +46,7 @@ The Benchmark provides recommendations of two types: Automated and Manual. We ru When Rancher runs a CIS security scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests. -For details, refer to the section on [security scans.]({{}}/rancher/v2.6/en/security/cis-scans/) +For details, refer to the section on [security scans]({{}}/rancher/v2.6/en/cis-scans). ### SELinux RPM diff --git a/content/rancher/v2.6/en/security/cis-scans/configuration/_index.md b/content/rancher/v2.6/en/security/cis-scans/configuration/_index.md deleted file mode 100644 index 26df1932e25..00000000000 --- a/content/rancher/v2.6/en/security/cis-scans/configuration/_index.md +++ /dev/null @@ -1,98 +0,0 @@ ---- -title: Configuration -weight: 3 ---- - -This configuration reference is intended to help you manage the custom resources created by the `rancher-cis-benchmark` application. These resources are used for performing CIS scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization. - -To configure the custom resources, go to the **Cluster Dashboard** To configure the CIS scans, - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to configure CIS scans and click **Explore**. 
-1. In the left navigation bar, click **CIS Benchmark**. - -### Scans - -A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed. - -When configuring a scan, you need to define the name of the scan profile that will be used with the `scanProfileName` directive. - -An example ClusterScan custom resource is below: - -```yaml -apiVersion: cis.cattle.io/v1 -kind: ClusterScan -metadata: - name: rke-cis -spec: - scanProfileName: rke-profile-hardened -``` - -### Profiles - -A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark. - -> By default, a few ClusterScanProfiles are installed as part of the `rancher-cis-benchmark` chart. If a user edits these default benchmarks or profiles, the next chart update will reset them back. So it is advisable for users to not edit the default ClusterScanProfiles. - -Users can clone the ClusterScanProfiles to create custom profiles. - -Skipped tests are listed under the `skipTests` directive. - -When you create a new profile, you will also need to give it a name. - -An example `ClusterScanProfile` is below: - -```yaml -apiVersion: cis.cattle.io/v1 -kind: ClusterScanProfile -metadata: - annotations: - meta.helm.sh/release-name: clusterscan-operator - meta.helm.sh/release-namespace: cis-operator-system - labels: - app.kubernetes.io/managed-by: Helm - name: "" -spec: - benchmarkVersion: cis-1.5 - skipTests: - - "1.1.20" - - "1.1.21" -``` - -### Benchmark Versions - -A benchmark version is the name of benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark. - -A `ClusterScanBenchmark` defines the CIS `BenchmarkVersion` name and test configurations. The `BenchmarkVersion` name is a parameter provided to the `kube-bench` tool. - -By default, a few `BenchmarkVersion` names and test configurations are packaged as part of the CIS scan application. When this feature is enabled, these default BenchmarkVersions will be automatically installed and available for users to create a ClusterScanProfile. - -> If the default BenchmarkVersions are edited, the next chart update will reset them back. Therefore we don't recommend editing the default ClusterScanBenchmarks. - -A ClusterScanBenchmark consists of the fields: - -- `ClusterProvider`: This is the cluster provider name for which this benchmark is applicable. For example: RKE, EKS, GKE, etc. Leave it empty if this benchmark can be run on any cluster type. -- `MinKubernetesVersion`: Specifies the cluster's minimum kubernetes version necessary to run this benchmark. Leave it empty if there is no dependency on a particular Kubernetes version. -- `MaxKubernetesVersion`: Specifies the cluster's maximum Kubernetes version necessary to run this benchmark. Leave it empty if there is no dependency on a particular k8s version. 
- -An example `ClusterScanBenchmark` is below: - -```yaml -apiVersion: cis.cattle.io/v1 -kind: ClusterScanBenchmark -metadata: - annotations: - meta.helm.sh/release-name: clusterscan-operator - meta.helm.sh/release-namespace: cis-operator-system - creationTimestamp: "2020-08-28T18:18:07Z" - generation: 1 - labels: - app.kubernetes.io/managed-by: Helm - name: cis-1.5 - resourceVersion: "203878" - selfLink: /apis/cis.cattle.io/v1/clusterscanbenchmarks/cis-1.5 - uid: 309e543e-9102-4091-be91-08d7af7fb7a7 -spec: - clusterProvider: "" - minKubernetesVersion: 1.15.0 -``` \ No newline at end of file diff --git a/content/rancher/v2.6/en/security/cis-scans/custom-benchmark/_index.md b/content/rancher/v2.6/en/security/cis-scans/custom-benchmark/_index.md deleted file mode 100644 index 36f70ccaa55..00000000000 --- a/content/rancher/v2.6/en/security/cis-scans/custom-benchmark/_index.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -title: Creating a Custom Benchmark Version for Running a Cluster Scan -weight: 4 ---- - -Each Benchmark Version defines a set of test configuration files that define the CIS tests to be run by the kube-bench tool. -The `rancher-cis-benchmark` application installs a few default Benchmark Versions which are listed under CIS Benchmark application menu. - -But there could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them. - -It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application. - -When a cluster scan is run, you need to select a Profile which points to a specific Benchmark Version. - -Follow all the steps below to add a custom Benchmark Version and run a scan using it. - -1. [Prepare the Custom Benchmark Version ConfigMap](#1-prepare-the-custom-benchmark-version-configmap) -2. [Add a Custom Benchmark Version to a Cluster](#2-add-a-custom-benchmark-version-to-a-cluster) -3. [Create a New Profile for the Custom Benchmark Version](#3-create-a-new-profile-for-the-custom-benchmark-version) -4. [Run a Scan Using the Custom Benchmark Version](#4-run-a-scan-using-the-custom-benchmark-version) - -### 1. Prepare the Custom Benchmark Version ConfigMap - -To create a custom benchmark version, first you need to create a ConfigMap containing the benchmark version's config files and upload it to your Kubernetes cluster where you want to run the scan. - -To prepare a custom benchmark version ConfigMap, suppose we want to add a custom Benchmark Version named `foo`. - -1. Create a directory named `foo` and inside this directory, place all the config YAML files that the kube-bench tool looks for. For example, here are the config YAML files for a Generic CIS 1.5 Benchmark Version https://github.com/aquasecurity/kube-bench/tree/master/cfg/cis-1.5 -1. Place the complete `config.yaml` file, which includes all the components that should be tested. -1. Add the Benchmark version name to the `target_mapping` section of the `config.yaml`: - - ```yaml - target_mapping: - "foo": - - "master" - - "node" - - "controlplane" - - "etcd" - - "policies" - ``` -1. Upload this directory to your Kubernetes Cluster by creating a ConfigMap: - - ```yaml - kubectl create configmap -n foo --from-file= - ``` - -### 2. Add a Custom Benchmark Version to a Cluster - -1. In the upper left corner, click **☰ > Cluster Management**. -1. 
On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. -1. In the left navigation bar, click **CIS Benchmark > Benchmark Version**. -1. Click **Create**. -1. Enter the **Name** and a description for your custom benchmark version. -1. Choose the cluster provider that your benchmark version applies to. -1. Choose the ConfigMap you have uploaded from the dropdown. -1. Add the minimum and maximum Kubernetes version limits applicable, if any. -1. Click **Create**. - -### 3. Create a New Profile for the Custom Benchmark Version - -To run a scan using your custom benchmark version, you need to add a new Profile pointing to this benchmark version. - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. -1. In the left navigation bar, click **CIS Benchmark > Profile**. -1. Click **Create**. -1. Provide a **Name** and description. In this example, we name it `foo-profile`. -1. Choose the Benchmark Version from the dropdown. -1. Click **Create**. - -### 4. Run a Scan Using the Custom Benchmark Version - -Once the Profile pointing to your custom benchmark version `foo` has been created, you can create a new Scan to run the custom test configs in the Benchmark Version. - -To run a scan, - -1. In the upper left corner, click **☰ > Cluster Management**. -1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**. -1. In the left navigation bar, click **CIS Benchmark > Scan**. -1. Click **Create**. -1. Choose the new cluster scan profile. -1. Click **Create**. - -**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears. diff --git a/content/rancher/v2.6/en/security/cis-scans/rbac/_index.md b/content/rancher/v2.6/en/security/cis-scans/rbac/_index.md deleted file mode 100644 index ad2b47bff2c..00000000000 --- a/content/rancher/v2.6/en/security/cis-scans/rbac/_index.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: Roles-based Access Control -shortTitle: RBAC -weight: 3 ---- - -This section describes the permissions required to use the rancher-cis-benchmark App. - -The rancher-cis-benchmark is a cluster-admin only feature by default. - -However, the `rancher-cis-benchmark` chart installs these two default `ClusterRoles`: - -- cis-admin -- cis-view - -In Rancher, only cluster owners and global administrators have `cis-admin` access by default. - -Note: If you were using the `cis-edit` role added in Rancher v2.5 setup, it has now been removed since -Rancher v2.5.2 because it essentially is same as `cis-admin`. If you happen to create any clusterrolebindings -for `cis-edit`, please update them to use `cis-admin` ClusterRole instead. - -# Cluster-Admin Access - -Rancher CIS Scans is a cluster-admin only feature by default. 
-This means only the Rancher global admins, and the cluster’s cluster-owner can: - -- Install/Uninstall the rancher-cis-benchmark App -- See the navigation links for CIS Benchmark CRDs - ClusterScanBenchmarks, ClusterScanProfiles, ClusterScans -- List the default ClusterScanBenchmarks and ClusterScanProfiles -- Create/Edit/Delete new ClusterScanProfiles -- Create/Edit/Delete a new ClusterScan to run the CIS scan on the cluster -- View and Download the ClusterScanReport created after the ClusterScan is complete - - -# Summary of Default Permissions for Kubernetes Default Roles - -The rancher-cis-benchmark creates three `ClusterRoles` and adds the CIS Benchmark CRD access to the following default K8s `ClusterRoles`: - -| ClusterRole created by chart | Default K8s ClusterRole | Permissions given with Role -| ------------------------------| ---------------------------| ---------------------------| -| `cis-admin` | `admin`| Ability to CRUD clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CR -| `cis-view` | `view `| Ability to List(R) clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CR - - -By default only cluster-owner role will have ability to manage and use `rancher-cis-benchmark` feature. - -The other Rancher roles (cluster-member, project-owner, project-member) do not have any default permissions to manage and use rancher-cis-benchmark resources. - -But if a cluster-owner wants to delegate access to other users, they can do so by creating ClusterRoleBindings between these users and the above CIS ClusterRoles manually. -There is no automatic role aggregation supported for the `rancher-cis-benchmark` ClusterRoles. diff --git a/content/rancher/v2.6/en/security/cis-scans/skipped-tests/_index.md b/content/rancher/v2.6/en/security/cis-scans/skipped-tests/_index.md deleted file mode 100644 index f2b125c0262..00000000000 --- a/content/rancher/v2.6/en/security/cis-scans/skipped-tests/_index.md +++ /dev/null @@ -1,54 +0,0 @@ ---- -title: Skipped and Not Applicable Tests -weight: 3 ---- - -This section lists the tests that are skipped in the permissive test profile for RKE. - -> All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile. - -# CIS Benchmark v1.5 - -### CIS Benchmark v1.5 Skipped Tests - -| Number | Description | Reason for Skipping | -| ---------- | ------------- | --------- | -| 1.1.12 | Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) | A system service account is required for etcd data directory ownership. Refer to Rancher's hardening guide for more details on how to configure this ownership. | -| 1.2.6 | Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. | -| 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. 
| -| 1.2.34 | Ensure that encryption providers are appropriately configured (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. | -| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Automated) | System level configurations are required before provisioning the cluster in order for this argument to be set to true. | -| 4.2.10 | Ensure that the--tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. | -| 5.1.5 | Ensure that default service accounts are not actively used. (Automated) | Kubernetes provides default service accounts to be used. | -| 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 5.2.3 | Minimize the admission of containers wishing to share the host IPC namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 5.2.4 | Minimize the admission of containers wishing to share the host network namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 5.2.5 | Minimize the admission of containers with allowPrivilegeEscalation (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. | -| 5.3.2 | Ensure that all Namespaces have Network Policies defined (Automated) | Enabling Network Policies can prevent certain applications from communicating with each other. | -| 5.6.4 | The default namespace should not be used (Automated) | Kubernetes provides a default namespace. | - -### CIS Benchmark v1.5 Not Applicable Tests - -| Number | Description | Reason for being not applicable | -| ---------- | ------------- | --------- | -| 1.1.1 | Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. | -| 1.1.2 | Ensure that the API server pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. | -| 1.1.3 | Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | -| 1.1.4 | Ensure that the controller manager pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | -| 1.1.5 | Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. 
| -| 1.1.6 | Ensure that the scheduler pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. | -| 1.1.7 | Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. | -| 1.1.8 | Ensure that the etcd pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. | -| 1.1.13 | Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. | -| 1.1.14 | Ensure that the admin.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. | -| 1.1.15 | Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. | -| 1.1.16 | Ensure that the scheduler.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. | -| 1.1.17 | Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | -| 1.1.18 | Ensure that the controller-manager.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. | -| 1.3.6 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handles certificate rotation directly through RKE. | -| 4.1.1 | Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. | -| 4.1.2 | Ensure that the kubelet service file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. | -| 4.1.9 | Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Automated) | Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. | -| 4.1.10 | Ensure that the kubelet configuration file ownership is set to root:root (Automated) | Clusters provisioned by RKE doesn’t require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. 
| -| 4.2.12 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handles certificate rotation directly through RKE. | \ No newline at end of file From c2681fac482eb25d4f120077bf868d4deb455ab8 Mon Sep 17 00:00:00 2001 From: Guilherme Macedo Date: Mon, 3 Jan 2022 13:22:00 +0100 Subject: [PATCH 15/33] Fix link Signed-off-by: Guilherme Macedo --- content/rancher/v2.6/en/security/hardening-guides/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.6/en/security/hardening-guides/_index.md b/content/rancher/v2.6/en/security/hardening-guides/_index.md index 509aad121c5..a3635419be5 100644 --- a/content/rancher/v2.6/en/security/hardening-guides/_index.md +++ b/content/rancher/v2.6/en/security/hardening-guides/_index.md @@ -10,4 +10,4 @@ aliases: - /rancher/v2.6/en/security/rancher-2.5/1.6-benchmark-2.5/ --- -Rancher v2.6 hardening guides are currently being updated. For the time being, please consult Rancher v2.5 [self-assessment and hardening guides]({{}}/rancher/v2.5/en/security/rancher-2.5) for more information. +Rancher v2.6 hardening guides are currently being updated. For the time being, please consult [Rancher v2.5 self-assessment and hardening guides]({{}}/rancher/v2.5/en/security/rancher-2.5) for more information. From e9d7bbbe1061a40bda483506b4fcbc15e7802435 Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Tue, 4 Jan 2022 18:00:05 +0000 Subject: [PATCH 16/33] Updated chart and removed monitoring-specific verbiage --- content/rancher/v2.6/en/helm-charts/_index.md | 48 +++++++------------ 1 file changed, 17 insertions(+), 31 deletions(-) diff --git a/content/rancher/v2.6/en/helm-charts/_index.md b/content/rancher/v2.6/en/helm-charts/_index.md index a41c062340f..c9bb859a5bc 100644 --- a/content/rancher/v2.6/en/helm-charts/_index.md +++ b/content/rancher/v2.6/en/helm-charts/_index.md @@ -7,50 +7,36 @@ In this section, you'll learn how to manage Helm chart repositories and applicat ### Changes in Rancher v2.6 -Starting in Rancher v2.6.0, a new versioning scheme for Rancher feature charts was implemented for Monitoring. The changes are centered around the major version of the charts and the +up annotation for upstream charts, where applicable. +Starting in Rancher v2.6.0, a new versioning scheme for Rancher feature charts was implemented. The changes are centered around the major version of the charts and the +up annotation for upstream charts, where applicable. **Major Version:** The major version of the charts is tied to Rancher minor versions. When you upgrade to a new Rancher minor version, you should ensure that all of your **Apps & Marketplace** charts are also upgraded to the correct release line for the chart. >**Note:** Any major versions that are less than the ones mentioned in the table below are meant for 2.5 and below only. For example, you are advised to not use <100.x.x versions of Monitoring in 2.6.x+. 
-**Feature Charts for Monitoring V2:** +**Feature Charts:** | **Name** | **Starting Major Version** | | ---------------- | ------------------ | -| rancher-vsphere-csi | 100.0.0 | -| rancher-vsphere-cpi | 100.0.0 | -| Logging V2 | 100.0.0+up3.12.0 | -| Monitoring V2 | 100.0.0+up16.6.0 | -| gke-operator | 100.0.0+up1.1.1 | -| aks-operator | 100.0.0+up1.0.1 | -| eks-operator | 100.0.0+up1.1.1 | -| rancher-backup | 2.0.0 | -| rancher-alerting-drivers | 100.0.0 | -| rancher-prom2teams | 100.0.0+up0.2.0 | -| rancher-sachet | 100.0.0 | -| rancher-cis-benchmark | 2.0.0 | -| fleet | 100.0.0+up0.3.6 | -| longhorn | 100.0.0+up1.1.2 | -| rancher-gatekeeper | 100.0.0+up3.5.1 | -| rancher-istio | 100.0.0+up1.10.4 | -| rancher-kiali-server | 100.0.0+up1.35.0 | -| rancher-tracing | 100.0.0 | -| rancher-pushprox | 100.0.0 | -| rancher-prometheus-adapter | 100.0.0+up2.14.0 | -| rancher-grafana | 100.0.0+up6.11.0 | -| rancher-windows-exporter | 100.0.0 | -| rancher-node-exporter | 100.0.0+up1.18.1 | -| rancher-kube-state-metrics | 100.0.0+up3.2.0 | +| rancher-alerting-drivers | 100.0.0 | +| rancher-cis-benchmark | 2.0.0 | +| external-ip-webhook | 100.0.0+up1.0.0 | +| harvester-cloud-provider | 100.0.0+up0.1.8 | +| harvester-csi-driver | 100.0.0+up0.1.9 | +| rancher-istio | 100.0.0+up1.10.4 | +| rancher-logging | 100.0.0+up3.12.0 | +| rancher-longhorn | 100.0.0+up1.1.2 | +| rancher-monitoring | 100.0.0+up16.6.0 | +| rancher-gatekeeper | 100.0.0+up3.5.1 | +| rancher-backups | 2.0.0 | | rancher-wins-upgrader | 100.0.0+up0.0.1 | -| system-upgrade-controller | 100.0.0+up0.3.0 | -| rancher-webhook | 1.0.0+up0.2.0 | -| external-ip-webhook | 100.0.0+up1.0.0 | -| rancher-sriov | 100.0.0+up0.1.0 | +| rancher-sriov (experimental) | 100.0.0+up0.1.0 | +| rancher-vsphere-cpi | 100.0.0 | +| rancher-vsphere-csi | 100.0.0 |
**Charts based on upstream:** For charts that are based on upstreams, the +up annotation should inform you of what upstream version the Rancher chart is tracking. Check the upstream version compatibility with Rancher during upgrades also.

-- As an example, `100.x.x+up16.6.0` for Monitoring tracks upstream kube-prometheus-stack `16.6.0` with some Rancher patches added to it
+- As an example, `100.x.x+up16.6.0` for Monitoring tracks upstream kube-prometheus-stack `16.6.0` with some Rancher patches added to it.

- On upgrades, ensure that you are not downgrading the version of the chart that you are using. For example, if you are using a version of Monitoring > `16.6.0` in Rancher 2.5, you should not upgrade to `100.x.x+up16.6.0`. Instead, you should upgrade to the appropriate version in the next release.

From 3bc443bdd750c9f6d7dad292e5c857a9757744dc Mon Sep 17 00:00:00 2001
From: Jennifer Travinski
Date: Thu, 6 Jan 2022 21:40:15 +0000
Subject: [PATCH 17/33] Updated table based on feedback

---
 content/rancher/v2.6/en/helm-charts/_index.md | 34 +++++++++----------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/content/rancher/v2.6/en/helm-charts/_index.md b/content/rancher/v2.6/en/helm-charts/_index.md
index c9bb859a5bc..bf42cde74fb 100644
--- a/content/rancher/v2.6/en/helm-charts/_index.md
+++ b/content/rancher/v2.6/en/helm-charts/_index.md
@@ -15,23 +15,23 @@ Starting in Rancher v2.6.0, a new versioning scheme for Rancher feature charts w

**Feature Charts:**

-| **Name** | **Starting Major Version** |
-| ---------------- | ------------------ |
-| rancher-alerting-drivers | 100.0.0 |
-| rancher-cis-benchmark | 2.0.0 |
-| external-ip-webhook | 100.0.0+up1.0.0 |
-| harvester-cloud-provider | 100.0.0+up0.1.8 |
-| harvester-csi-driver | 100.0.0+up0.1.9 |
-| rancher-istio | 100.0.0+up1.10.4 |
-| rancher-logging | 100.0.0+up3.12.0 |
-| rancher-longhorn | 100.0.0+up1.1.2 |
-| rancher-monitoring | 100.0.0+up16.6.0 |
-| rancher-gatekeeper | 100.0.0+up3.5.1 |
-| rancher-backups | 2.0.0 |
-| rancher-wins-upgrader | 100.0.0+up0.0.1 |
-| rancher-sriov (experimental) | 100.0.0+up0.1.0 |
-| rancher-vsphere-cpi | 100.0.0 |
-| rancher-vsphere-csi | 100.0.0 |
+| **Name** | **Supported Minimum Version** | **Supported Maximum Version** |
+| ---------------- | ------------ | ------------ |
+| rancher-alerting-drivers | 100.0.0 | 100.0.1 |
+| rancher-cis-benchmark | 2.0.0 | 2.0.2 |
+| external-ip-webhook | 100.0.0+up1.0.0 | 100.0.1+up1.0.1 |
+| harvester-cloud-provider | 100.0.0+up0.1.8 | 100.0.0+up0.1.8 |
+| harvester-csi-driver | 100.0.0+up0.1.9 | 100.0.0+up0.1.9 |
+| rancher-istio | 100.0.0+up1.10.4 | 100.1.0+up1.11.4 |
+| rancher-logging | 100.0.0+up3.12.0 | 100.0.1+up3.15.0 |
+| rancher-longhorn | 100.0.0+up1.1.2 | 100.1.1+up1.2.3 |
+| rancher-monitoring | 100.0.0+up16.6.0 | 100.1.0+up19.0.3 |
+| rancher-gatekeeper | 100.0.0+up3.5.1 | 100.0.1+up3.6.0 |
+| rancher-backups | 2.0.0 | 2.1.0 |
+| rancher-wins-upgrader | 100.0.0+up0.0.1 | 100.0.0+up0.0.1 |
+| rancher-sriov (experimental) | 100.0.0+up0.1.0 | 100.0.1+up0.1.0 |
+| rancher-vsphere-cpi | 100.0.0 | 100.1.0+up1.0.100 |
+| rancher-vsphere-csi | 100.0.0 | 100.1.0+up2.3.0 |

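As an aside, chart versions can be inspected and pinned explicitly from the command line. A sketch, assuming the Rancher charts repository has been added locally under the name `rancher-charts` (the repository name and namespace below are placeholders):

```
# List the available versions of a feature chart
helm search repo rancher-charts/rancher-monitoring --versions

# Upgrade to an explicit version so the chart is never accidentally downgraded
helm upgrade --install rancher-monitoring rancher-charts/rancher-monitoring \
  --namespace cattle-monitoring-system \
  --version 100.1.0+up19.0.3
```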
**Charts based on upstream:** For charts that are based on upstreams, the +up annotation should inform you of what upstream version the Rancher chart is tracking. Check the upstream version compatibility with Rancher during upgrades also.

From 6bf52ac4236435628d79db2bffc6f105a650f35d Mon Sep 17 00:00:00 2001
From: Jennifer Travinski
Date: Mon, 10 Jan 2022 22:15:59 +0000
Subject: [PATCH 18/33] Alphabetized table

---
 content/rancher/v2.6/en/helm-charts/_index.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/content/rancher/v2.6/en/helm-charts/_index.md b/content/rancher/v2.6/en/helm-charts/_index.md
index bf42cde74fb..09ceebc7c98 100644
--- a/content/rancher/v2.6/en/helm-charts/_index.md
+++ b/content/rancher/v2.6/en/helm-charts/_index.md
@@ -17,23 +17,23 @@ Starting in Rancher v2.6.0, a new versioning scheme for Rancher feature charts w

| **Name** | **Supported Minimum Version** | **Supported Maximum Version** |
| ---------------- | ------------ | ------------ |
-| rancher-alerting-drivers | 100.0.0 | 100.0.1 |
-| rancher-cis-benchmark | 2.0.0 | 2.0.2 |
| external-ip-webhook | 100.0.0+up1.0.0 | 100.0.1+up1.0.1 |
| harvester-cloud-provider | 100.0.0+up0.1.8 | 100.0.0+up0.1.8 |
| harvester-csi-driver | 100.0.0+up0.1.9 | 100.0.0+up0.1.9 |
+| rancher-alerting-drivers | 100.0.0 | 100.0.1 |
+| rancher-backups | 2.0.0 | 2.1.0 |
+| rancher-cis-benchmark | 2.0.0 | 2.0.2 |
+| rancher-gatekeeper | 100.0.0+up3.5.1 | 100.0.1+up3.6.0 |
| rancher-istio | 100.0.0+up1.10.4 | 100.1.0+up1.11.4 |
| rancher-logging | 100.0.0+up3.12.0 | 100.0.1+up3.15.0 |
| rancher-longhorn | 100.0.0+up1.1.2 | 100.1.1+up1.2.3 |
| rancher-monitoring | 100.0.0+up16.6.0 | 100.1.0+up19.0.3 |
-| rancher-gatekeeper | 100.0.0+up3.5.1 | 100.0.1+up3.6.0 |
-| rancher-backups | 2.0.0 | 2.1.0 |
-| rancher-wins-upgrader | 100.0.0+up0.0.1 | 100.0.0+up0.0.1 |
-| rancher-sriov (experimental) | 100.0.0+up0.1.0 | 100.0.1+up0.1.0 |
+| rancher-sriov (experimental) | 100.0.0+up0.1.0 | 100.0.1+up0.1.0 |
| rancher-vsphere-cpi | 100.0.0 | 100.1.0+up1.0.100 |
| rancher-vsphere-csi | 100.0.0 | 100.1.0+up2.3.0 |
+| rancher-wins-upgrader | 100.0.0+up0.0.1 | 100.0.0+up0.0.1 |
-
+
**Charts based on upstream:** For charts that are based on upstreams, the +up annotation should inform you of what upstream version the Rancher chart is tracking. Check the upstream version compatibility with Rancher during upgrades also. - As an example, `100.x.x+up16.6.0` for Monitoring tracks upstream kube-prometheus-stack `16.6.0` with some Rancher patches added to it. From b505968a9385cca1f7b2504ee9d6f7ac766d17ee Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Thu, 13 Jan 2022 16:17:37 -0800 Subject: [PATCH 19/33] Move section placement and add missing header --- .../eks/permissions/_index.md | 105 +++++++++--------- .../eks/permissions/_index.md | 105 +++++++++--------- 2 files changed, 110 insertions(+), 100 deletions(-) diff --git a/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md b/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md index 0567e110d02..521046e6e86 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md @@ -123,31 +123,6 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b ### Service Role Permissions -Rancher will create a service role with the following trust policy: - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Action": "sts:AssumeRole", - "Principal": { - "Service": "eks.amazonaws.com" - }, - "Effect": "Allow", - "Sid": "" - } - ] -} -``` - -This role will also have two role policy attachments with the following policies ARNs: - -``` -arn:aws:iam::aws:policy/AmazonEKSClusterPolicy -arn:aws:iam::aws:policy/AmazonEKSServicePolicy -``` - Permissions required for Rancher to create service role on users behalf during the EKS cluster creation process. ```json @@ -182,36 +157,66 @@ Permissions required for Rancher to create service role on users behalf during t } ``` +When an EKS cluster is created, Rancher will create a service role with the following trust policy: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Action": "sts:AssumeRole", + "Principal": { + "Service": "eks.amazonaws.com" + }, + "Effect": "Allow", + "Sid": "" + } + ] +} +``` + +This role will also have two role policy attachments with the following policies ARNs: + +``` +arn:aws:iam::aws:policy/AmazonEKSClusterPolicy +arn:aws:iam::aws:policy/AmazonEKSServicePolicy +``` + ### VPC Permissions Permissions required for Rancher to create VPC and associated resources. 
```json { - "Sid": "VPCPermissions", - "Effect": "Allow", - "Action": [ - "ec2:ReplaceRoute", - "ec2:ModifyVpcAttribute", - "ec2:ModifySubnetAttribute", - "ec2:DisassociateRouteTable", - "ec2:DetachInternetGateway", - "ec2:DescribeVpcs", - "ec2:DeleteVpc", - "ec2:DeleteTags", - "ec2:DeleteSubnet", - "ec2:DeleteRouteTable", - "ec2:DeleteRoute", - "ec2:DeleteInternetGateway", - "ec2:CreateVpc", - "ec2:CreateSubnet", - "ec2:CreateSecurityGroup", - "ec2:CreateRouteTable", - "ec2:CreateRoute", - "ec2:CreateInternetGateway", - "ec2:AttachInternetGateway", - "ec2:AssociateRouteTable" - ], - "Resource": "*" + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "VPCPermissions", + "Effect": "Allow", + "Action": [ + "ec2:ReplaceRoute", + "ec2:ModifyVpcAttribute", + "ec2:ModifySubnetAttribute", + "ec2:DisassociateRouteTable", + "ec2:DetachInternetGateway", + "ec2:DescribeVpcs", + "ec2:DeleteVpc", + "ec2:DeleteTags", + "ec2:DeleteSubnet", + "ec2:DeleteRouteTable", + "ec2:DeleteRoute", + "ec2:DeleteInternetGateway", + "ec2:CreateVpc", + "ec2:CreateSubnet", + "ec2:CreateSecurityGroup", + "ec2:CreateRouteTable", + "ec2:CreateRoute", + "ec2:CreateInternetGateway", + "ec2:AttachInternetGateway", + "ec2:AssociateRouteTable" + ], + "Resource": "*" + } + ] } ``` \ No newline at end of file diff --git a/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md b/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md index 0567e110d02..521046e6e86 100644 --- a/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md +++ b/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md @@ -123,31 +123,6 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b ### Service Role Permissions -Rancher will create a service role with the following trust policy: - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Action": "sts:AssumeRole", - "Principal": { - "Service": "eks.amazonaws.com" - }, - "Effect": "Allow", - "Sid": "" - } - ] -} -``` - -This role will also have two role policy attachments with the following policies ARNs: - -``` -arn:aws:iam::aws:policy/AmazonEKSClusterPolicy -arn:aws:iam::aws:policy/AmazonEKSServicePolicy -``` - Permissions required for Rancher to create service role on users behalf during the EKS cluster creation process. ```json @@ -182,36 +157,66 @@ Permissions required for Rancher to create service role on users behalf during t } ``` +When an EKS cluster is created, Rancher will create a service role with the following trust policy: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Action": "sts:AssumeRole", + "Principal": { + "Service": "eks.amazonaws.com" + }, + "Effect": "Allow", + "Sid": "" + } + ] +} +``` + +This role will also have two role policy attachments with the following policies ARNs: + +``` +arn:aws:iam::aws:policy/AmazonEKSClusterPolicy +arn:aws:iam::aws:policy/AmazonEKSServicePolicy +``` + ### VPC Permissions Permissions required for Rancher to create VPC and associated resources. 
```json { - "Sid": "VPCPermissions", - "Effect": "Allow", - "Action": [ - "ec2:ReplaceRoute", - "ec2:ModifyVpcAttribute", - "ec2:ModifySubnetAttribute", - "ec2:DisassociateRouteTable", - "ec2:DetachInternetGateway", - "ec2:DescribeVpcs", - "ec2:DeleteVpc", - "ec2:DeleteTags", - "ec2:DeleteSubnet", - "ec2:DeleteRouteTable", - "ec2:DeleteRoute", - "ec2:DeleteInternetGateway", - "ec2:CreateVpc", - "ec2:CreateSubnet", - "ec2:CreateSecurityGroup", - "ec2:CreateRouteTable", - "ec2:CreateRoute", - "ec2:CreateInternetGateway", - "ec2:AttachInternetGateway", - "ec2:AssociateRouteTable" - ], - "Resource": "*" + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "VPCPermissions", + "Effect": "Allow", + "Action": [ + "ec2:ReplaceRoute", + "ec2:ModifyVpcAttribute", + "ec2:ModifySubnetAttribute", + "ec2:DisassociateRouteTable", + "ec2:DetachInternetGateway", + "ec2:DescribeVpcs", + "ec2:DeleteVpc", + "ec2:DeleteTags", + "ec2:DeleteSubnet", + "ec2:DeleteRouteTable", + "ec2:DeleteRoute", + "ec2:DeleteInternetGateway", + "ec2:CreateVpc", + "ec2:CreateSubnet", + "ec2:CreateSecurityGroup", + "ec2:CreateRouteTable", + "ec2:CreateRoute", + "ec2:CreateInternetGateway", + "ec2:AttachInternetGateway", + "ec2:AssociateRouteTable" + ], + "Resource": "*" + } + ] } ``` \ No newline at end of file From 7bc771f5f469138a1359ef474ecabf10bdc0906b Mon Sep 17 00:00:00 2001 From: Ray Terrill Date: Fri, 14 Jan 2022 15:25:02 -0800 Subject: [PATCH 20/33] Update minimum permissions doc to include DescribeRegions --- .../hosted-kubernetes-clusters/eks/permissions/_index.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md b/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md index 521046e6e86..8f4539e1c6f 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md @@ -24,6 +24,7 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b "ec2:RunInstances", "ec2:RevokeSecurityGroupIngress", "ec2:RevokeSecurityGroupEgress", + "ec2:DescribeRegions", "ec2:DescribeVpcs", "ec2:DescribeTags", "ec2:DescribeSubnets", @@ -219,4 +220,4 @@ Permissions required for Rancher to create VPC and associated resources. 
} ] } -``` \ No newline at end of file +``` From e54ac0b0a1400251d4a7699403aee07ad903e080 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Mon, 17 Jan 2022 20:54:19 -0800 Subject: [PATCH 21/33] Apply 7bc771f5 (author: rayterrill) to Rancher 2.6 docs Update minimum permissions doc to include DescribeRegions --- .../hosted-kubernetes-clusters/eks/permissions/_index.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md b/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md index 521046e6e86..8f4539e1c6f 100644 --- a/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md +++ b/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md @@ -24,6 +24,7 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b "ec2:RunInstances", "ec2:RevokeSecurityGroupIngress", "ec2:RevokeSecurityGroupEgress", + "ec2:DescribeRegions", "ec2:DescribeVpcs", "ec2:DescribeTags", "ec2:DescribeSubnets", @@ -219,4 +220,4 @@ Permissions required for Rancher to create VPC and associated resources. } ] } -``` \ No newline at end of file +``` From 303323a5dc5275cc01b03fab93e5f7a09bebc81e Mon Sep 17 00:00:00 2001 From: the-it-jaeger <61639149+the-it-jaeger@users.noreply.github.com> Date: Tue, 18 Jan 2022 16:02:49 -0500 Subject: [PATCH 22/33] Update _index.md The branch name listed when installing with `--set privateCA=true` should match everything else. --- .../v2.6/en/installation/install-rancher-on-k8s/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.6/en/installation/install-rancher-on-k8s/_index.md b/content/rancher/v2.6/en/installation/install-rancher-on-k8s/_index.md index 11b5f6c2def..5f9e50cc198 100644 --- a/content/rancher/v2.6/en/installation/install-rancher-on-k8s/_index.md +++ b/content/rancher/v2.6/en/installation/install-rancher-on-k8s/_index.md @@ -234,7 +234,7 @@ helm install rancher rancher-/rancher \ If you are using a Private CA signed certificate , add `--set privateCA=true` to the command: ``` -helm install rancher rancher-latest/rancher \ +helm install rancher rancher-/rancher \ --namespace cattle-system \ --set hostname=rancher.my.org \ --set bootstrapPassword=admin \ From 845b63914015a915f2f3a219564d7f21d9eee071 Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Tue, 18 Jan 2022 21:08:07 +0000 Subject: [PATCH 23/33] Updated directions for upgrading workloads --- .../en/k8s-in-rancher/workloads/upgrade-workloads/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.6/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md b/content/rancher/v2.6/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md index be8c09f7792..f6804adb2ea 100644 --- a/content/rancher/v2.6/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md +++ b/content/rancher/v2.6/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md @@ -16,6 +16,6 @@ When a new version of an application image is released on Docker Hub, you can up These options control how the upgrade rolls out to containers that are currently running. For example, for scalable deployments, you can choose whether you want to stop old pods before deploying new ones, or vice versa, as well as the upgrade batch size. -1. Click **Upgrade**. +1. Click **Save**. 
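
As an aside, the same rolling update can also be triggered from the command line with `kubectl`; a minimal sketch with placeholder workload, container, and image names:

```
# Point the workload at the new image tag
kubectl set image deployment/my-workload my-container=nginx:1.21 -n my-namespace

# Watch the rolling update progress
kubectl rollout status deployment/my-workload -n my-namespace
```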
**Result:** The workload begins upgrading its containers, per your specifications. Note that scaling up the deployment or updating the upgrade/scaling policy won't result in the pods recreation. From bf959a22705651e95d6962dcc7686241d910253c Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Tue, 18 Jan 2022 21:29:14 -0800 Subject: [PATCH 24/33] Apply 303323a5 (author: the-it-jaeger) to Rancher v2.0-v2.4 and v2.5 docs The branch name listed when installing with should match everything else. --- .../v2.0-v2.4/en/installation/install-rancher-on-k8s/_index.md | 2 +- .../v2.5/en/installation/install-rancher-on-k8s/_index.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/_index.md b/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/_index.md index 98c8db4a6e8..b98a08e0085 100644 --- a/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/_index.md +++ b/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/_index.md @@ -231,7 +231,7 @@ helm install rancher rancher-/rancher \ If you are using a Private CA signed certificate , add `--set privateCA=true` to the command: ``` -helm install rancher rancher-latest/rancher \ +helm install rancher rancher-/rancher \ --namespace cattle-system \ --set hostname=rancher.my.org \ --set ingress.tls.source=secret \ diff --git a/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md index 94497592e86..cd3d7cef001 100644 --- a/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md +++ b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md @@ -245,7 +245,7 @@ helm install rancher rancher-/rancher \ If you are using a Private CA signed certificate , add `--set privateCA=true` to the command: ``` -helm install rancher rancher-latest/rancher \ +helm install rancher rancher-/rancher \ --namespace cattle-system \ --set hostname=rancher.my.org \ --set ingress.tls.source=secret \ From 72223b7e2102009c52c7f8ed40050839398eb335 Mon Sep 17 00:00:00 2001 From: Wataru Sekiguchi Date: Wed, 19 Jan 2022 17:39:55 +0900 Subject: [PATCH 25/33] Update _index.md Fix a typo. --- .../configuration/servicemonitor-podmonitor/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.5/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md b/content/rancher/v2.5/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md index 79f3da27930..39ddfd2b5a0 100644 --- a/content/rancher/v2.5/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md +++ b/content/rancher/v2.5/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md @@ -26,6 +26,6 @@ For more information about how ServiceMonitors work, refer to the [Prometheus Op This pseudo-CRD maps to a section of the Prometheus custom resource configuration. It declaratively specifies how group of pods should be monitored. -When a PodMonitor is created, the Prometheus Operator updates the Prometheus scrape configuration to include the PodMonitor configuration. Then Prometheus begins scraping metrics from the endpoint defined in the ServiceMonitor. +When a PodMonitor is created, the Prometheus Operator updates the Prometheus scrape configuration to include the PodMonitor configuration. Then Prometheus begins scraping metrics from the endpoint defined in the PodMonitor. 
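
To make the selector/endpoint relationship concrete, here is a minimal PodMonitor sketch; the name, namespace, label, and port name are placeholder assumptions:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app
  namespace: default              # placeholder namespace
spec:
  selector:
    matchLabels:
      app: example-app            # pods carrying this label are selected
  podMetricsEndpoints:
    - port: metrics               # name of the container port that serves metrics
      interval: 30s               # scrape every 30 seconds
```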
Any Pods in your cluster that match the labels located within the PodMonitor `selector` field will be monitored based on the `podMetricsEndpoints` specified on the PodMonitor. For more information on what fields can be specified, please look at the [spec](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#podmonitorspec) provided by Prometheus Operator. From 26b3ad4e45534ed1e7dfa756cc907effc8defc1a Mon Sep 17 00:00:00 2001 From: Manuel Buil Date: Wed, 19 Jan 2022 14:02:19 +0100 Subject: [PATCH 26/33] Update network docs Signed-off-by: Manuel Buil --- .../en/faq/networking/cni-providers/_index.md | 46 +++++++++++------- static/img/rancher/cilium-logo.png | Bin 0 -> 13971 bytes 2 files changed, 29 insertions(+), 17 deletions(-) create mode 100644 static/img/rancher/cilium-logo.png diff --git a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md index 498e63ad5f1..fc24cba8085 100644 --- a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md +++ b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md @@ -16,7 +16,7 @@ For more information visit [CNI GitHub project](https://github.com/containernetw ### What Network Models are Used in CNI? -CNI network providers implement their network fabric using either an encapsulated network model such as Virtual Extensible Lan ([VXLAN](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan)) or an unencapsulated network model such as Border Gateway Protocol ([BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol)). +CNI network providers implement their network fabric using either an encapsulated network model such as Virtual Extensible Lan ([VXLAN](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan)) or an unencapsulated network model such as Border Gateway Protocol ([BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol)). #### What is an Encapsulated Network? @@ -26,7 +26,7 @@ In simple terms, this network model generates a kind of network bridge extended This network model is used when an extended L2 bridge is preferred. This network model is sensitive to L3 network latencies of the Kubernetes workers. If datacenters are in distinct geolocations, be sure to have low latencies between them to avoid eventual network segmentation. -CNI network providers using this network model include Flannel, Canal, and Weave. +CNI network providers using this network model include Flannel, Canal, Weave or Cilium. Calico by default is not using this model but it could be configured. ![Encapsulated Network]({{}}/img/rancher/encapsulated-network.png) @@ -38,13 +38,13 @@ In simple terms, this network model generates a kind of network router extended This network model is used when a routed L3 network is preferred. This mode dynamically updates routes at the OS level for Kubernetes workers. It's less sensitive to latency. -CNI network providers using this network model include Calico and Romana. +CNI network providers using this network model include Calico. Cilium can also be configured with this model although it is not the default mode. ![Unencapsulated Network]({{}}/img/rancher/unencapsulated-network.png) ### What CNI Providers are Provided by Rancher? -Out-of-the-box, Rancher provides the following CNI network providers for Kubernetes clusters: Canal, Flannel, Calico and Weave. You can choose your CNI network provider when you create new Kubernetes clusters from Rancher. 
+Out-of-the-box, Rancher provides the following CNI network providers for RKE Kubernetes clusters: Canal, Flannel and Weave. For RKE2 Kubernetes clusters: Canal, Calico and Cilium. You can choose your CNI network provider when you create new Kubernetes clusters from Rancher.

#### Canal

@@ -64,23 +64,25 @@ For more information, see the [Canal GitHub Page.](https://github.com/projectcal

![Flannel Logo]({{<baseurl>}}/img/rancher/flannel-logo.png)

-Flannel is a simple and easy way to configure L3 network fabric designed for Kubernetes. Flannel runs a single binary agent named flanneld on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP). Packets are forwarded using one of several backend mechanisms, with the default encapsulation being [VXLAN](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan).
+Flannel is a simple and easy way to configure L3 network fabric designed for Kubernetes. Flannel runs a single binary agent named flanneld on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP). Packets are forwarded using one of several backend mechanisms, with the default encapsulation being [VXLAN](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan).

-Encapsulated traffic is unencrypted by default. Therefore, flannel provides an experimental backend for encryption, [IPSec](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#ipsec), which makes use of [strongSwan](https://www.strongswan.org/) to establish encrypted IPSec tunnels between Kubernetes workers.
+Encapsulated traffic is unencrypted by default. Flannel provides two solutions for encryption:
+* [IPSec](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#ipsec), which makes use of [strongSwan](https://www.strongswan.org/) to establish encrypted IPSec tunnels between Kubernetes workers. It is considered experimental.
+* [WireGuard](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#wireguard), which is a better-performing alternative to strongSwan.

Kubernetes workers should open UDP port `8472` (VXLAN) and TCP port `9099` (healthcheck). See [the port requirements for user clusters]({{<baseurl>}}/rancher/v2.6/en/cluster-provisioning/node-requirements/#networking-requirements) for more details.

![Flannel Diagram]({{<baseurl>}}/img/rancher/flannel-diagram.png)

-For more information, see the [Flannel GitHub Page](https://github.com/coreos/flannel).
+For more information, see the [Flannel GitHub Page](https://github.com/flannel-io/flannel).

#### Calico

![Calico Logo]({{<baseurl>}}/img/rancher/calico-logo.png)

-Calico enables networking and network policy in Kubernetes clusters across the cloud. Calico uses a pure, unencapsulated IP network fabric and policy engine to provide networking for your Kubernetes workloads. Workloads are able to communicate over both cloud infrastructure and on-prem using BGP.
+Calico enables networking and network policy in Kubernetes clusters across the cloud. 
By default, Calico uses a pure, unencapsulated IP network fabric and policy engine to provide networking for your Kubernetes workloads. Workloads are able to communicate over both cloud infrastructure and on-prem using BGP. -Calico also provides a stateless IP-in-IP encapsulation mode that can be used, if necessary. Calico also offers policy isolation, allowing you to secure and govern your Kubernetes workloads using advanced ingress and egress policies. +Calico also provides a stateless IP-in-IP or VXLAN encapsulation mode that can be used, if necessary. Calico also offers policy isolation, allowing you to secure and govern your Kubernetes workloads using advanced ingress and egress policies. Kubernetes workers should open TCP port `179` (BGP). See [the port requirements for user clusters]({{}}/rancher/v2.6/en/cluster-provisioning/node-requirements/#networking-requirements) for more details. @@ -110,9 +112,9 @@ The following table summarizes the different features available for each CNI net | Provider | Network Model | Route Distribution | Network Policies | Mesh | External Datastore | Encryption | Ingress/Egress Policies | | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | -| Canal | Encapsulated (VXLAN) | No | Yes | No | K8S API | No | Yes | -| Flannel | Encapsulated (VXLAN) | No | No | No | K8S API | No | No | -| Calico | Encapsulated (VXLAN,IPIP) OR Unencapsulated | Yes | Yes | Yes | Etcd and K8S API | No | Yes | +| Canal | Encapsulated (VXLAN) | No | Yes | No | K8S API | Yes | Yes | +| Flannel | Encapsulated (VXLAN) | No | No | No | K8S API | Yes | No | +| Calico | Encapsulated (VXLAN,IPIP) OR Unencapsulated | Yes | Yes | Yes | Etcd and K8S API | Yes | Yes | | Weave | Encapsulated | Yes | Yes | Yes | No | Yes | Yes | - Network Model: Encapsulated or unencapsulated. For more information, see [What Network Models are Used in CNI?](#what-network-models-are-used-in-cni) @@ -129,16 +131,26 @@ The following table summarizes the different features available for each CNI net - Ingress/Egress Policies: This feature allows you to manage routing control for both Kubernetes and non-Kubernetes communications. + +### Cilium + +![Cilium Logo]({{}}/img/rancher/cilium-logo.png) + +Cilium enables networking and network policies (L3, L4 and L7) in Kubernetes. Cilium by default uses eBPF technologies to route packets inside the node and vxlan to send packets to other nodes, unencapsulated techniques can also be configured. + +Cilium recommends kernel versions greater than 5.2 to be able to leverage the full potential of eBPF. Kubernetes workers should open TCP port `8472` for VXLAN and TCP port `4140` for health checks. Besides ICMP 8/0 must be enabled for health checks too. Fro more information check [Cilium System Requirements](https://docs.cilium.io/en/latest/operations/system_requirements/#firewall-requirements) + #### CNI Community Popularity -The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in January 2020. +The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in January 2022. 
| Provider | Project | Stars | Forks | Contributors | | ---- | ---- | ---- | ---- | ---- | -| Canal | https://github.com/projectcalico/canal | 614 | 89 | 19 | -| flannel | https://github.com/coreos/flannel | 4977 | 1.4k | 140 | -| Calico | https://github.com/projectcalico/calico | 1534 | 429 | 135 | -| Weave | https://github.com/weaveworks/weave/ | 5737 | 559 | 73 | +| Canal | https://github.com/projectcalico/canal | 679 | 100 | 21 | +| flannel | https://github.com/flannel-io/flannel | 7k | 2.5k | 185 | +| Calico | https://github.com/projectcalico/calico | 3.1k | 741 | 224 | +| Weave | https://github.com/weaveworks/weave/ | 6.2k | 635 | 84 | +| Cilium | https://github.com/cilium/cilium | 10.6k | 1.3k | 352 |
diff --git a/static/img/rancher/cilium-logo.png b/static/img/rancher/cilium-logo.png new file mode 100644 index 0000000000000000000000000000000000000000..681a0b3c530e7049adda19da7249ff9f9de74075 GIT binary patch literal 13971 zcmV;EHf+g>P)p54 z#~yUNM#!1U z3QPdhBcgem*EXf^vWq5bwM*wr>1(LxSqjYP{_^tj0U_rT)t5#{6sZ!~h4ov`k4a}` z)g7t$T2T^!sUR5`3ZEm_15;NE7ydJrSuUSjXMgs~68qoYDAsQ4myfpumg)n*&I>tT zm1U-aRH3qs5n~&jV~S)HGa{mkgYY}sCJCIJw##y;9?(U0npem#eVJ4l-1i+ zZ2t+{F-hwhn)$7Zd_Gp}et&CU(rTIRGkw1rIdbGcMI9*)pbU*`qN6><0#vp#fSH9o zHx)sxdWWfkjQUR3!Ug9TWy1~QtS^q9D}ZQdz-O=Z(suyWwsm8C{uTIAor)J!7aTgG ztqK^kJtl4EIa>`#MRJI1@>G>z6E1Q-$^`61M`ro`v*mJDQfL%Z|LdjU_JuQiEcM>F zapOuARQf1}Q2lIZEYH<3DMx6g5igpH8}7yhxRMH7UtezmnC=jxvWGU-SpR;M0Z7+S zA6P`@>SY)U*{PSjKXLE*R=qz8+yP=|UQ82ZXtIZyML*@@n6#s>TqRHX#OPo)GMW)Nn`yZ*KZo?2b;TWC|QKE%ClM+DUh>p7Kqx(6QX#|;|4Vi*v zjvJRuwEpjF=L(=1%0+p-grR&u>&gI$0~uKto%=ghjI%^>6r#(ejDl_|-aM)-tg9vI zFqCQ(e^$~SW)A*KRnCxZ17OMnc6x<1btn@Y+dIRIYi&e7kym_4UZ*QE-`${s2 zKwvA*ha$`+n#&Bhiow~h_T~8i*z04QkC{Ud*pY2E76p@%z6X8D=k>H%VlvPl2UI$1ra?qGyz@ zrz4!I9FuZ57->#46>l>KC#jO#avKPXn?$kk$M>9Xxy%!$jQg0&?_>Go1XfLlWl!EW z%913LA5;);4UKa*jO@}is{a!RVAW_tH9;9Sb>seqMQPB{dC=ipqszj6lYM6VryIo! z3c3@=LVllUDDy7(R$W5_fXDw%8SCM1Rr>%1CfG51Wo2a;dHT*PZcC@rI++9`1~3C7 zdYy!#5tAKs*v^CqXbLe+r?8Q-5~slPDR;KnBZC{cO1X^_hOheC*h9CQqbBUoebWFFT`q7sg3#&t7m%*L?d{tFd9&yT>VA&V!@7G^d zYkmG|i4Pp2U)0MAo&hNrpP~-y{yVfyw~qBye}V~KfTc6o@EF4?MWfDBvhCXd(gIC3 ziX$x+A!FkFx95L;cc=J1Q;KPwJH>fQ8?> zhK59f-m>nWj07h1UGSs8P6H84aL$c*j8W$qbyJD!086!h?Q37txye;CA2}bKkWGbV z;BvU5il8IjQLT>D>0AKOThPG;1hhC1;S+YzOGkv z=)3rBPrp`z3@EZm9~?s+o-=@I032I)80)B1h6VsorKa7?H5OE=D^*KM4I`dH>oj1I zyTs19?e_WBo~PUUblbW+FOxaIn{bC%Kknbx6#Ivp&zJa3a!o48R}xL%-mRl#$FUWi zaB|KY#osqxQrq`(#~AIlbJ6&|hN(Pj16ZFl)GBHX4J)G~^W-GStGx}}u@&e*vv=mq z3#ZvXJASL(^Zpw9_=&a9*4RDpj^8tl`Rx3@)?JRZz*5c~3z}3JI>qeo zv!LAM{TD{~giH5xm#2WU-~YrZ0WVI(nEW$wkT+hzwZI&IQ>o<{k?}Yy6%q0mRS-9{V{kqNltIEs9xXJKEZZW za~j3DT9fyg)t9)c&Mqe32cPd-H^vejX4wBN3?n}zp7V|?#}nXG5fEis)OWhgE1hF3 ztZ5fm>d>fBqXsBMa=Il-q2J9c{2^V@jr$LXcIIF@gk_74o90;E|BwL>Wu@bUg&bg= zSmVC;aLc^)-u%NF`-WTRTB1(^Y&eGi3NPyOyn{Z~{>2YdE`WN4(v;p~LwV7X0__;j zUdQ%USQ208WoY^2oJYau)9i)6yk)Y(G88auqb$V!#xA4FKLcQ`zj&h07lU)1t5Fk` zbD}Z(iP?Pr3v@gXPWTX4)7@Ba?|rV*qUOwFBmx%7OcY=oW36#n3)@pKF1H#Q>ctK| zR8vzkkeJ#T@cj%6s->6LS^s*Zg!oflQrHF4G2ZNnJ{MLktnL^uI?fjCs}1)FdKf0C zpJk5;S!MsSv6f@%kZlkklzA8ya)3jBM?gu~>{lyBTP+Q~!KrV9i7m`w9a_JcJQa;Y z`ofwZV2I_SSz5JWzSaHS8W$8UD-jU&u)tziEAw4gAJ$#-%}Z^GW8BFkjCY8&QM z=W1QOD@qGu0)b7q4^k<*ig(nf*jp~0V1350k?1ddE&wd7vD-Yo>;kJ{s!vdT4791d zEUsN-8ZPbtLhURx`yFV10`*;JTX1Eo{l@Qa^8@I88KBliulK(9{`%kFW?#8vmTv$i zFowe%)z#GlkU4Zf69pvl{W!1!6;wx&e4D$@-073;4{sYq{2~z?RZ{jj?PDyN^Im0t z0w(m9Ig>0w?8HH*bBuU{5|qXRnu;|}D#DkN%#U{nThmS3j=x@JaiDmg2ZfZa-e1G` zOKSre4cqP~uk`^0I(ILOfXO4(2MJ-~tdWW@Lg#*xzwP%(K^w@tL*H*U4EY z2ykqaC;s-Eo6ol!>-lqrzCU{2dFKs`utaeS29&eTQM!X_K|Ef#S>m1CapN2d!+b9E zXTkKoCn5NmN{|#7A(ZvQ@c%nkTx5wU4e+7nQ5gZFub^bCZYl`=>t*=w0fsl+oy*sO z2x*-HnZOe%J?C29D6lFuOkkCL*Zhf=a1Is*Lz?9+hM_)US*x-yDqRygG6hUbA)L@# z>;-B8hxxz#;wFna-&}j2H*Y;sr###gvj6wok(;a;GpC6;FAS;CLC3+Lm`@yy4VH6_ zE~meV-`QpO8JM`3$OM?LJII0weVbR8*I0fNRvBv2xaC2>63pj&e=*8#Ynm)7=Tq#Iny7M%3Q#o7z-!GnU$-qo@F~C<_TUE_muE(Qx@byiDD>iORsISC4R*e zLw%UM$`EdoIp;zE6z4n5hU$ZlR-dQlV7shU8M>ZNVqi?^^0rC73Jf@=x)_zSMm8od z=lT_nn#D*%G!HxXUSm1tjtZG4c#+aCY_h+x0_Ykh6~l|7`2FFZTx&_j$vsJ@E(D*a z6TUwVzd0@FGu!BLd}r4IGuH`VGD*qIy<#cKql?E8A+u8I6H$@#a{(Zq3r_gwERcj{ z*j~S(#%`+Tg-$r0!y4C4b&?ZPrwC3}_=?O#gV{@>{=@?6;!9fWLy*wroN=mnOBjgF z*~u8HhkYl@fBx&0_FUMHaHoDqlC{VrH8cu~D;Ef$ISv0k0nZtf`65gNv+H~vSnAzd z&u8=}hE<5!`h1<>q%)D92f!+hp}cJ>Z4^rvxB3#sp90wHa$SCuO9nD^X%9@yN^$PW 
zyS858X-aG*fv>t?Ez{O;$~pbq!w8;qIci*zDGQyNZmhw^h%fHchoa-oFab?QoAjL7 zcHe>t7EP(hnCkR;K7#X@tI}m4xek>9)=TYU`o!bH2%D78=vo;w7huZO%(C425VjV< z$^DkS5Hnk+p>xieFfN~Wf|jayX0hjlf88DdjJV$4{Bnie3gs{Yh~EMIsgMA?k&Y&o zP0|`QX{)S!mo%&volDH#KxnVB+n`F`bz`-~0>{hS`6~)sK(Xa5-1^=&AFLW>q0S{7 z|4BH8@|f5B>@!b9u@?6o%W}mkgZ3NdSTaUB9@2bKAj@JX7l6axdDBHAIIwDR0UigI zc3_Ez2#(Ilb@ok~?M7(Lez5%((o6*eRP`w6vKJZUgHP7-@&KCzcG*jK=gc7sJi>CD zFP-4KAuxwYhQMb#R^ZFYD2M$&gsqs@IwER3nM#ATK1aKUan^kioFIa-(!!$$+UK*W+kd3%;5#(@{@n=^eaT_CkSoyf2k_>> z=f_1~?1k&bSgxsK9BMWl!#mru&z{2$xj}1Yqff+r8}^@{i=l3+ETUTCt1^BoYCk4P z%;K~_u4zF?D?0^^o`gBr*r}l97|be*%?{| z>-_rsu@-{kkLe6Z#C9rV`QMKkW6z&5S@f+=;mBnopeA9TAG61Vrs)HxQd1%UtXI$F z(boC(Nn(KT1@!eS)uC*frNMy{x9aE-Is-%lwH=6l@S@%;-mzQ1cabC-e1H5z>r|y2Fj0+e_oEBO`=UQ!!h5k~r>Lk175Q64zY7&@ zx8^BI0T0olp1Aq~OXANxGGxe*JQY{O(4Pu1R3(pN3biIm71smPhh;1uvz;)jt;v3= z>t;rSicSKv0K?CJ`{pffT{+XaCn1zCBVei!N55PQ%<_9+m_HJ8CTF2)*W3lQf2^vp zm)6yZ`~D2h6$MlSf`Nf-P~uF-2t{$fUNJf_cNeCj^JOcOWpkxhWLdJJoa0lS4_h@w zqw`;G1D0AT-m$B`G2ilZC^3(>#;>3=-?2b%&BjZ`Ogpl~EyIQlGn2_=9tr8WQj#Yw z7;o|5sFpBLHUAp}E)Qtc_KRc2S&jba@1;Z{F@QMezGU=GvnN?Pj*S&Y%Y&&xa`RnueTeZlvs#x4%11# z0KoG^DPZNKN%lXfm^SJ^xlVeu-?t(xwxPCGOh#L}$KXgETY$f|aJ?JR=dn2&Z>!UBQeU7*fEY&{dVp-{wu>APwvDU1qKH>8{@Er?D!5t1d zJ`8y=YfYuCq>@H%6)@`Ar_vDh1$i z>r{K&lHdLr+Vzv_#0abx!uckm$DxdHmw3MHIkL;duxhIWk3~GhWM# zL5Zd!DC3kQD+zHp{!a9@o&x6JnWx&PIf?I_Nn@9B+f@5fG(QCyLBhPxXr7D8O&@3M zT{hdr3KvzTKvaFqSSU!~!s7^(M(FV7s0QmfE#&t`I>|f^L1!NngrHhoJIOv-$wzgk z1c|_9qcqQAry|oZ)!eolK|&C~gc5=?Zx!f77?9MfoO4h=u-{XuRDR3^ug+w>q5rRo zUX=rhbyq7kj(WA!wDHb-HK$2EJ$Ae=1q8?NC`6ul=bUp6o^!i6*FhLl zg)y`+pTYx!1`Ub=hYB<44D5~lNYJP%1zpqWjEJF+YE|vDsNb2-AA=87LjZ+Z*W2tq zny_SL^jJ$4?TP+W@H&35y)$Zzg$ZZjm_Gt0E+sk4GO@m1MHWyQxr|yAoNIlI2tLlU zdXygztMd>(-LYLJBWR)@u}AEnTeddY-}&iu?Y8p1CTnUv-@(WVOq!=%ybyOztaXa& zc%|I$WCt;mX#Q~d)(O(Nnr_5nz`P&^DXiyA~Tf<APr zvNAL59YnZ6;@srZrcbr+equTS4YckIsCKiUasdTkqU+Ih;eE@#W~;HuXA+J=WHktP z3>UVR<(IV-ciC-me*UoxiT)BJEeMXd;3PkS6LTP#fFWmza~)hXa{;i^J{LTjNu!^)bTFSg<)0B)i|5|r z6HwtiS4kfFn61g{_Zlbd7o;4D;e=1H7j^oKdVAZ!W{a9L&)hW=z%+ZNuE%4pT|F%v zgW>YO{JUmrT2sB~puYzlTtY-t9CUh;-G3{Xsdt>J%^EIG=SvO?zfn;A14s~8O!hhA zm}aRWb@V7=zQa1)jSWrqnkC=!*oL$W10nnDXUyFJVgmQOAlpLCBD~*q-A^3TDFilH zwVC{~^LG6@X)76)zEcz}SKd3FuoD3j`MkS_HE{x%xOF^}h|9r2_l4V7$Ic|9JNd93 z6!b}xtbgVl?`kGTCziSJBVzt)`~c%soP9J#4*r*&*ZVo zIy^JDrGsB*1vu#S^$lV<%X3+Z*RhF4r{+obySp4myoi&IwOrU>KYT1>%62i%Hs>R` zrjBPVYVO?6#FzTsFPiPv+4Z7daSA5@BT3_d^l1k40^sZ@8*3-^-Bnww-Ddff{gWUv zmJ)UqchQEXbM}nO#N$9QwKf|`Am%#Ca)N5k z&$mM?t$;iAIo8x*$?rFIB0EuF_xF5Bh-&; zqfh!&)q`Z>s_ZiWcNt+qbi{Xx|J8BnM$3)dhCBQD!f2KRC%3EzqYYd*$**d)J$pD9wxF1Yj`?MOn)8 zsagUXvfkpG8+{YeL+8#nGNXs_V(-^UeW-}70HS9AtOJOta~__}O^MjL(B9Pk6d%d4 zjWuN=OLYqDCA7^QPlGAF)t3{6<5+9xJ{svMtfos+u^E*{qYbh8(vp%Er* z-8bE3o!x>&;QB|DiD(4a%`fx(E9RXK(Jg864`NV*2!;+4*E23_=V#-0Wv~y46~+{M9xOoRGu-I zJh^hUcxShN17w0X8KF|jK0%*FhrESz;5|^b*qz^OvLt|6i#aut^cud!9G@lI8PrjiiP`=cZT0r{BZ9BQ36{-)An~>dTHeAa>@D{9Uv05l=hTM{gftAj zv5ArXCvp0J#Z8G=-)L{Rdbj0c9u^#(vd=;0jrXP4-!`>BY0sSACi;J$LZFaTJZGI; zZS-()1&7-()TSUd0Z>VCwC=2V4Uj64`$|!5mqaO-gD~?QQ#oyyG>CxulxRdKWek8; zC~IK0BEQpy-yw&VPi`Uv6gNs3b*GKj{K&ib9TJ}@%%9R|cen4eFKnJG8^xT~%*U$E z4y+yW#|}4xOy2T4Gs^gGA$s$^dubIm+VlJ-fAi8c-hbSg4yjBI?h`d@zP@?8ucA;1xp?%LgYtSira=nf*3_komwwOdppTw z{JGGY%s3`aCI6!|b1Y0YHb@-TkGZ?8&!Rfc`#f^rD8NS&*-LQj?|VU-+k1=TxeQr4 zM0Mm%&}bYo$vkz+xxuAq_OMETH&Od}$J%w$_j_8CkA?i^j=q1Z{Rw+|;|wthJxLzX zan`dXriQ1lGP2ZQ%17pix{o6p5N2WhA6{ePBR45qQU8fyuZQ3CWCLMh`ne?XL|yDy zkk+J9%qNTZ07S=todyx9q*W~Qoip=tFMr24=bJm^vVbr06QjZo-mjv0Jz-ec>X?C- zN&uI0x>J%EfTi47ncS7pER$J4F)247fz z2#WcgS|K==WvWvjM$M}!{*DuYpm`xIwU-BFy?w)iO>WwS6D>PMMxv=1fAcfdeqg^H 
z*0hG{qNzKML-Dheh(axvYAJ)E;eaav5l*a@o!8PC*FU?#VUt}w9yoUOhzd9(YIe79 z=@x%_1|z#>d)CGE;syT%F7#MZ#)6}X-{DmIhH3V~zxcrySFQcg8LhVFhi9hGoa5UU z9VHB>j8X1M_IET%cV4&G)0c#F%jOQYn&5!V=KMBy@T^6_bvO8C7tOpv)W>?ME*7H% zI9BdY<>ACC)qYAcKHjx(k`Ib^`OWt<@yTK84|TWrYoDme<0J>xDQ~fF{ce*?E#AT; zhwl}cW!^QnTb`XgmUw{8G_p`U0{6;=I>p)v@^MjUT=G~q2(X_C?4I-PNY1+)N z(`F{kLhGNlFf~w{UB+S^a?kADbj22{ermlqH~Id4C{r=xD4{w}5m3=_+(srF`+%F0 zaAv&)*G042LmV=i1lZ^NI(e`;WO` zq3wEiS(p2nuG8_|U)nS0E*8Z^3(mEc04iT5Bdl4n-4c~>POx;@iPxP({|VV{MpK(C z8D3?m%~sbhlVU{Wbh0gfaJK=|(=O)Cu37EL2x4-WtIX{7n?u%7zi$0B-?M+ogL5c5 z?H3*Gg{vO6T+nRu0OPR`VWe$5nCSY*T44TU&&_)lszs@c&@4uj83+Ao#;h@{)G7K) z2e>3MYYMNLyTX<*@(N}IjG1G*gc^ZBDfMWeY>;r|UjIjBsX3J;ZB_e;RU0I>7EH|(!p)+uUU z7M;o(hm7F_YhnrT%?rO{c@+cx$j%qZ3DF;@?@l?45H$P&n7Gi$Zo*wd(K1DwB@Le` z_HZjO2hHqL!kv=X1r64Y*8=<|!B67MUw6h4pL<24ScZEVE+|e;fuo2E#YeR^G)}jd ze)DOIg2n?6Y1{lDW0yJ0w|Vc^YnB{D*dI2&*_5AhKPR{}GPCTv)aW`L{dZkl0V zbN7pZCd~WX#ev3i%)6Lq&-?wlZy&OoFPtYH(`R6U(uAc-aURTUpBx&n90z*VDfq#E zev`H`PV(BQ2{KfsL==|;SXVOa^q~0rYKEXl1aQ?NvhdzR!>iBadB_#uBc}vs{qsq*rj=fxG ze59fS{vT)3R>PajVi9>;l$j zP2bXu`@rdM5&?4zWP}28Bejxh(cWA%lwve|BOUi8=lWExOke2Ep;M}$)nJ3cku~CH zd{=4sKlDFwaxd{tp1ts9P{fb=OkIXuMAOD-P^`l)GL*aCG1nv_Q(d(9Hs1rZ;pSON zgAzQ}kn@N-PwqRs^3KxGyRZJFt%?z-Hjr^Fo@g%6cgmQuF84j-0}&_OO}BWWr5EWj zsTkJ8ZDRn*q-7S|@{lDFJNFo0&g(zVq^&uwf^gR({?iw%$(zx3x!u0?Fb|UZc}3jQ zp}H7d%5%Uue~NYNsblUeJ8HKI=@>i~IHo!y2NR?!l*RTnS2T~^Mrnm#JT z$YO%jEQ6mv|4Cc3&`FfdA}Jqvo;N}D_Eqa1vwW$PZDQW8%g03WByk2zOEx^=FrYX` z6%2tNXk=!Pn_iaDH(fp zc_88(_%3_aWgWgJM1khirWrY#nm3Zpqk2{PF77cYIl^(lq9a47MkKK%2oO2**e5H{ zcG(U0T0U`7_|8UgL3$K^Ubu3Hg~F)-sC_U|$N&v9V`}z?Dj0O4crEutwTMo|+{JmU ztWKw?vXIi~8wlBp08}Tq%1KNTZ+d|P6V;(ITGTPFBv>?g6jax3`8&I5hOcV!BE}DTecNm#()w^Tvpk_xt65PBE`XeuJ_V z*?a;UG^G?3ZOP1n0CaM+%E$DjPuglIP7@PDXNd$O4Clg`V2-ltb$vsly?FgzUKWrs z(KgXB&j9P{I}TV4(^|xY?eAc6+5!az(V8GN3zy-~chXix|0tWOk3(O;9IfJ%)-z@= zu-dGZWEYsg!=5v{gm5Rt+`dfJ^nxh#M7#{x5l$^fO|a zNYjitb`y+%(QVWbXClbgK_yNKrgTnLwT4+0Ksk{!z`sES z{eiGKG{Y(=gMft(@Db(*kE^4_G5I=ETITGS;*Jnj&G|*?MP?2_7liAr1mn zdG4>`q^+ky9x)aN2r5sCQY8SRzmDlknY1-V%0y)jg>0D*2QbWmc5f~kfN8EKBkM-K zrp9^e%q{;PYtq)3fq?UU@Q@*S<*AB_L^@rStb_*734=C3Glh^#-_gv{Z0I=3xtbu% z=I$uHL-7~nbuoS@YQ3f0i+t>iGrD1nDY~uhb;b*A8PADBQ zVgs{2wtX>tCtZe8omw#4LPkRqba65gU+qa-8F09OqKyD5kUc1%m~Da>%6%cAJQJCc z0jj?k{%rndE*Q7p&$HhqZIuBk0T){Ulo{$n$xWUQ%oI>b9#C;vNq16H!|2}$QnS_N z&dQ{%Its0-j^W$BV8{>$P!2FM;9yOj^u$<`mrSR%0F~Ls$55u`ov!-Ccb$sIup89S zj6ma%KrW9dRQWbEI#K%_*k%xbV+8j6+TVg80^@=`KDNjm@lH3k3MZRzy9 z7GgCHWVNu(-fz=1M(%<0Jpt_uiEbT+hVNfM z`yi37*j_d=SuFEwAoYY58uMYjSD<|i+x1eJ2HXY$D9*jJ&Z*sXL^`BpA0 zDm0;!pll@d9CWzFoRH0)VCFzmu84>ST%N92AWfqb4Z%C^4@pLc0`RR3qu#;cK9uxg z4^Nb@PQ%U!(H3zimjRBI%BBUX*tWFueQ^3BJX=bE`$kyIz;yg=l5H z1b?y4C_2L?!%*9q^QV9cd}RQYq8WoTz(`48j8(9$l>v);!TI+}emV5byKufvg1xwO zSTTG*OtTO!AnAU{&7bf9NlD^&uw79MJuX^XCz$!UdGezGrmP85~1p zF8ZFZJDV~TbFrWG`rNt59|6D~gmal&QZ~E}e4%>y|K+g!tps!P;y%I6*j`#bq5%5o z^@Il)1m@d#tzg?3@E^6X4zl09EKx8lxd+}=2ZX9m+6pH+8V>T503ROIiTEaC_^QP znMue26&m$s0F`E0WKWp6^Z>=MeugcuulJ2eB+6wb7S62*`gngb%~_6Kcz-78f%8+i zR@V+KDe=wtlK)@x@jq%|o%a%HZvTDY&yLoRi_9qL037h)`LLr`IrL^ua8pTX=>WyN zUZ7+EenODR3uH`c@;v_}v>u9QD9^t;62>$xpge{-qpMv}ljl0WazF_tkK`vkv(^X8 z*A6Q!4V6ENhn3_*pS;cQ&jr&dIRA8%js-4Fo9h!(V5U+l<3y=}v<-!5Gw`H%DopJM zwC8Ea^nrfIU=C2JAV6{dQ^2KOQc<#j>G@FJKA^LYgmsDos5EnNGCq{n6gUVdHtC)4 zd?@K%O?{|@>+hucz>m5M{nx1d{TGM?p-=8%lHb%!LZm&iRwc<@T3$W?wtts(4d9q| zYKW?4WO5Uu1NK6|PCkK?tkIO`10x|Yiz-ez_nGsdf}=b$Fv<(|q1^W*6Xi)5N=eMk zFjQ!iX9}``;6qis0)9@Os?ZEK`CKo$CO3b@$axVG0QY}zEgtf&x8qJcW2ldM34GQI zE25p|53u5q80vE%6LL;Dq1h0VCkUJB6`K@LjBj!zjGdEm*+AdqrClG4jq>Ef;Q>&r 
z8~}RIt_W3{+aCaNG_)@SDWJ+wDYEM{yaXwVq2dK!4THyjJZ8Mr%m7y`-F4Ipb#_XmK#!2o= zI|feh7Qv4I0Wn={sMqO47Ujj!p$a4nl~j-oT>m6vhK_{!cMe0P(`**tjG^`%W}gm~ z2#@llvO$E&lUe~6P>ebc`>rxH21oE21=r-|X42N-iNqZO?Z20-N*6>RRjVl8r$eO! zWdpwsl~!UXlVhkfRfK&?xyk49GErV4+J_31MqGwUy9)K5Fjf?GC;?EBMSB!b3Euas z(y5GaZ#++rI;f~SMVnOw6sN!j14`E^H)T$FB3L#sgxr)1s01wmhT-U89VXs+5+BOX zP|VE)Eh?AiJ6FFBmD&^5M=qe!feaOzCUXJBr&p>B^`UT?8;?sorwIp_UW z7(p*YHjpdj+B#G^3{ceMO-Q7@^|5o$DI7dV$9V)mJT9ot+oE--lq||qda(?ZjAke! z2vEZI2Q{rw@P)&&73kD%)=`AJ`y@#G3=}>7e8WsSR)ypJz zStpxyCk0%4G{G78Vi_u;e8mt=9;cM^QC?s!&n+mE`%vz-!^(yagnj&uNt@s{4`EGq zM40UC4>I>)GLhX=ZsFUAii){mH<%aYrC8ZOdDc3PpjFz7=g<^eBf3q5j=>}++J?U~+DGx7SF>_<`q~$A8a11Ia$RtYBoXGbX?6)?` z$t_PMvE=7@CgbnaBY_K%3^kE*VN-cZ08T0Q7{AKUm~)gyc}%2tP$BK3H)$)>#4K~m zLo40Z2c5eQmZghJN~GWc_Q3+XJX{CLV3x^~!-Z~5c_Lai$ZR7Hqe@3sYzpNoDY1M- z3f{6rIr@-aWIESnwi8$^NSKP-Pa!NDI;^BHfyHiY&Aqc-#e+E5AdInHAIhK0Q<}8Z zWhlOmy-Futp3FG*uglmfqQEXxhYFZ1!wjPG5MeWrMPWZ%lS17*8D+8$gx}Th`vUkp5170i z-nVCN4sZ02vp)0z?7zf_7;VcAQb@{8UZ|-Nm2#p*MKM%VKONL4Uoi~TNn3{}%8e6RNwO7< zEE@=)$?u;4Fs7&^R2@Ju0}wPAich_;{U09X1^1Kp9|sl8WxPpSarRv}NgROiJEm6I zH>4-QDXl4*kUeZjN7IBxc~Xswt=LQij~rGhuv1a`VN6`I81}Q3lTdiAM>d=Z%cNg{ z{f;i=D=s4$Dxp>nWd@l%Hh>C=@dAUMcTn0BV61KI_v`bErjh} z70xrMk=2m((?eOw1UlJt7!i>NCtWb)+~RT2k#hlL&QNqffr^DZ9AOidPQs|o*$2~%7*hzsx#C?s zsOa3{v6xA4hH|H!bA};gC3O(V6_u8kP{VL006n;!|#eY(o9eQeK2@PA@?7Q z%wedol-o~*g|dG7>QCCLxUTM_IaDeFvYGnOQ0Om}V~W#X%K0>%R7sAHDcZIgY89B; tleQYUdBS%#44hLT_)GrJuf(LS{|{c9U4Am~c;)~A002ovPDHLkV1kP<=pg_A literal 0 HcmV?d00001 From f810b078ec94ce74943e3a9fe22a077a08a8b85c Mon Sep 17 00:00:00 2001 From: Manuel Buil Date: Wed, 19 Jan 2022 14:02:19 +0100 Subject: [PATCH 27/33] Update network docs Signed-off-by: Manuel Buil --- .../en/faq/networking/cni-providers/_index.md | 47 +++++++++++------- static/img/rancher/cilium-logo.png | Bin 0 -> 13971 bytes 2 files changed, 30 insertions(+), 17 deletions(-) create mode 100644 static/img/rancher/cilium-logo.png diff --git a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md index 498e63ad5f1..97bcadd763f 100644 --- a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md +++ b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md @@ -16,7 +16,7 @@ For more information visit [CNI GitHub project](https://github.com/containernetw ### What Network Models are Used in CNI? -CNI network providers implement their network fabric using either an encapsulated network model such as Virtual Extensible Lan ([VXLAN](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan)) or an unencapsulated network model such as Border Gateway Protocol ([BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol)). +CNI network providers implement their network fabric using either an encapsulated network model such as Virtual Extensible Lan ([VXLAN](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan)) or an unencapsulated network model such as Border Gateway Protocol ([BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol)). #### What is an Encapsulated Network? @@ -26,7 +26,7 @@ In simple terms, this network model generates a kind of network bridge extended This network model is used when an extended L2 bridge is preferred. This network model is sensitive to L3 network latencies of the Kubernetes workers. If datacenters are in distinct geolocations, be sure to have low latencies between them to avoid eventual network segmentation. 
-CNI network providers using this network model include Flannel, Canal, and Weave. +CNI network providers using this network model include Flannel, Canal, Weave or Cilium. Calico by default is not using this model but it could be configured. ![Encapsulated Network]({{}}/img/rancher/encapsulated-network.png) @@ -38,13 +38,13 @@ In simple terms, this network model generates a kind of network router extended This network model is used when a routed L3 network is preferred. This mode dynamically updates routes at the OS level for Kubernetes workers. It's less sensitive to latency. -CNI network providers using this network model include Calico and Romana. +CNI network providers using this network model include Calico. Cilium can also be configured with this model although it is not the default mode. ![Unencapsulated Network]({{}}/img/rancher/unencapsulated-network.png) ### What CNI Providers are Provided by Rancher? -Out-of-the-box, Rancher provides the following CNI network providers for Kubernetes clusters: Canal, Flannel, Calico and Weave. You can choose your CNI network provider when you create new Kubernetes clusters from Rancher. +Out-of-the-box, Rancher provides the following CNI network providers for RKE Kubernetes clusters: Canal, Flannel and Weave. For RKE2 Kubernetes clusters: Canal, Calico and Cilium. You can choose your CNI network provider when you create new Kubernetes clusters from Rancher. #### Canal @@ -64,23 +64,25 @@ For more information, see the [Canal GitHub Page.](https://github.com/projectcal ![Flannel Logo]({{}}/img/rancher/flannel-logo.png) -Flannel is a simple and easy way to configure L3 network fabric designed for Kubernetes. Flannel runs a single binary agent named flanneld on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP). Packets are forwarded using one of several backend mechanisms, with the default encapsulation being [VXLAN](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan). +Flannel is a simple and easy way to configure L3 network fabric designed for Kubernetes. Flannel runs a single binary agent named flanneld on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP). Packets are forwarded using one of several backend mechanisms, with the default encapsulation being [VXLAN](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan). -Encapsulated traffic is unencrypted by default. Therefore, flannel provides an experimental backend for encryption, [IPSec](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#ipsec), which makes use of [strongSwan](https://www.strongswan.org/) to establish encrypted IPSec tunnels between Kubernetes workers. +Encapsulated traffic is unencrypted by default. Flannel provides two solutions for encryption: +* [IPSec](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#ipsec), which makes use of [strongSwan](https://www.strongswan.org/) to establish encrypted IPSec tunnels between Kubernetes workers. 
It is considered experimental +* [Wireguard](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#wireguard), which is a more performing alternative to strongswan Kubernetes workers should open UDP port `8472` (VXLAN) and TCP port `9099` (healthcheck). See [the port requirements for user clusters]({{}}/rancher/v2.6/en/cluster-provisioning/node-requirements/#networking-requirements) for more details. ![Flannel Diagram]({{}}/img/rancher/flannel-diagram.png) -For more information, see the [Flannel GitHub Page](https://github.com/coreos/flannel). +For more information, see the [Flannel GitHub Page](https://github.com/flannel-io/flannel). #### Calico ![Calico Logo]({{}}/img/rancher/calico-logo.png) -Calico enables networking and network policy in Kubernetes clusters across the cloud. Calico uses a pure, unencapsulated IP network fabric and policy engine to provide networking for your Kubernetes workloads. Workloads are able to communicate over both cloud infrastructure and on-prem using BGP. +Calico enables networking and network policy in Kubernetes clusters across the cloud. By default, Calico uses a pure, unencapsulated IP network fabric and policy engine to provide networking for your Kubernetes workloads. Workloads are able to communicate over both cloud infrastructure and on-prem using BGP. -Calico also provides a stateless IP-in-IP encapsulation mode that can be used, if necessary. Calico also offers policy isolation, allowing you to secure and govern your Kubernetes workloads using advanced ingress and egress policies. +Calico also provides a stateless IP-in-IP or VXLAN encapsulation mode that can be used, if necessary. Calico also offers policy isolation, allowing you to secure and govern your Kubernetes workloads using advanced ingress and egress policies. Kubernetes workers should open TCP port `179` (BGP). See [the port requirements for user clusters]({{}}/rancher/v2.6/en/cluster-provisioning/node-requirements/#networking-requirements) for more details. @@ -110,10 +112,11 @@ The following table summarizes the different features available for each CNI net | Provider | Network Model | Route Distribution | Network Policies | Mesh | External Datastore | Encryption | Ingress/Egress Policies | | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | -| Canal | Encapsulated (VXLAN) | No | Yes | No | K8S API | No | Yes | -| Flannel | Encapsulated (VXLAN) | No | No | No | K8S API | No | No | -| Calico | Encapsulated (VXLAN,IPIP) OR Unencapsulated | Yes | Yes | Yes | Etcd and K8S API | No | Yes | +| Canal | Encapsulated (VXLAN) | No | Yes | No | K8S API | Yes | Yes | +| Flannel | Encapsulated (VXLAN) | No | No | No | K8S API | Yes | No | +| Calico | Encapsulated (VXLAN,IPIP) OR Unencapsulated | Yes | Yes | Yes | Etcd and K8S API | Yes | Yes | | Weave | Encapsulated | Yes | Yes | Yes | No | Yes | Yes | +| Cilium | Encapsulated (VXLAN) | Yes | Yes | Yes | Etcd and K8S API | Yes | Yes | - Network Model: Encapsulated or unencapsulated. For more information, see [What Network Models are Used in CNI?](#what-network-models-are-used-in-cni) @@ -129,16 +132,26 @@ The following table summarizes the different features available for each CNI net - Ingress/Egress Policies: This feature allows you to manage routing control for both Kubernetes and non-Kubernetes communications. + +### Cilium + +![Cilium Logo]({{}}/img/rancher/cilium-logo.png) + +Cilium enables networking and network policies (L3, L4 and L7) in Kubernetes. 
Cilium by default uses eBPF technologies to route packets inside the node and vxlan to send packets to other nodes, unencapsulated techniques can also be configured. + +Cilium recommends kernel versions greater than 5.2 to be able to leverage the full potential of eBPF. Kubernetes workers should open TCP port `8472` for VXLAN and TCP port `4140` for health checks. Besides ICMP 8/0 must be enabled for health checks too. Fro more information check [Cilium System Requirements](https://docs.cilium.io/en/latest/operations/system_requirements/#firewall-requirements) + #### CNI Community Popularity -The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in January 2020. +The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in January 2022. | Provider | Project | Stars | Forks | Contributors | | ---- | ---- | ---- | ---- | ---- | -| Canal | https://github.com/projectcalico/canal | 614 | 89 | 19 | -| flannel | https://github.com/coreos/flannel | 4977 | 1.4k | 140 | -| Calico | https://github.com/projectcalico/calico | 1534 | 429 | 135 | -| Weave | https://github.com/weaveworks/weave/ | 5737 | 559 | 73 | +| Canal | https://github.com/projectcalico/canal | 679 | 100 | 21 | +| flannel | https://github.com/flannel-io/flannel | 7k | 2.5k | 185 | +| Calico | https://github.com/projectcalico/calico | 3.1k | 741 | 224 | +| Weave | https://github.com/weaveworks/weave/ | 6.2k | 635 | 84 | +| Cilium | https://github.com/cilium/cilium | 10.6k | 1.3k | 352 |
diff --git a/static/img/rancher/cilium-logo.png b/static/img/rancher/cilium-logo.png new file mode 100644 index 0000000000000000000000000000000000000000..681a0b3c530e7049adda19da7249ff9f9de74075 GIT binary patch literal 13971 zcmV;EHf+g>P)p54 z#~yUNM#!1U z3QPdhBcgem*EXf^vWq5bwM*wr>1(LxSqjYP{_^tj0U_rT)t5#{6sZ!~h4ov`k4a}` z)g7t$T2T^!sUR5`3ZEm_15;NE7ydJrSuUSjXMgs~68qoYDAsQ4myfpumg)n*&I>tT zm1U-aRH3qs5n~&jV~S)HGa{mkgYY}sCJCIJw##y;9?(U0npem#eVJ4l-1i+ zZ2t+{F-hwhn)$7Zd_Gp}et&CU(rTIRGkw1rIdbGcMI9*)pbU*`qN6><0#vp#fSH9o zHx)sxdWWfkjQUR3!Ug9TWy1~QtS^q9D}ZQdz-O=Z(suyWwsm8C{uTIAor)J!7aTgG ztqK^kJtl4EIa>`#MRJI1@>G>z6E1Q-$^`61M`ro`v*mJDQfL%Z|LdjU_JuQiEcM>F zapOuARQf1}Q2lIZEYH<3DMx6g5igpH8}7yhxRMH7UtezmnC=jxvWGU-SpR;M0Z7+S zA6P`@>SY)U*{PSjKXLE*R=qz8+yP=|UQ82ZXtIZyML*@@n6#s>TqRHX#OPo)GMW)Nn`yZ*KZo?2b;TWC|QKE%ClM+DUh>p7Kqx(6QX#|;|4Vi*v zjvJRuwEpjF=L(=1%0+p-grR&u>&gI$0~uKto%=ghjI%^>6r#(ejDl_|-aM)-tg9vI zFqCQ(e^$~SW)A*KRnCxZ17OMnc6x<1btn@Y+dIRIYi&e7kym_4UZ*QE-`${s2 zKwvA*ha$`+n#&Bhiow~h_T~8i*z04QkC{Ud*pY2E76p@%z6X8D=k>H%VlvPl2UI$1ra?qGyz@ zrz4!I9FuZ57->#46>l>KC#jO#avKPXn?$kk$M>9Xxy%!$jQg0&?_>Go1XfLlWl!EW z%913LA5;);4UKa*jO@}is{a!RVAW_tH9;9Sb>seqMQPB{dC=ipqszj6lYM6VryIo! z3c3@=LVllUDDy7(R$W5_fXDw%8SCM1Rr>%1CfG51Wo2a;dHT*PZcC@rI++9`1~3C7 zdYy!#5tAKs*v^CqXbLe+r?8Q-5~slPDR;KnBZC{cO1X^_hOheC*h9CQqbBUoebWFFT`q7sg3#&t7m%*L?d{tFd9&yT>VA&V!@7G^d zYkmG|i4Pp2U)0MAo&hNrpP~-y{yVfyw~qBye}V~KfTc6o@EF4?MWfDBvhCXd(gIC3 ziX$x+A!FkFx95L;cc=J1Q;KPwJH>fQ8?> zhK59f-m>nWj07h1UGSs8P6H84aL$c*j8W$qbyJD!086!h?Q37txye;CA2}bKkWGbV z;BvU5il8IjQLT>D>0AKOThPG;1hhC1;S+YzOGkv z=)3rBPrp`z3@EZm9~?s+o-=@I032I)80)B1h6VsorKa7?H5OE=D^*KM4I`dH>oj1I zyTs19?e_WBo~PUUblbW+FOxaIn{bC%Kknbx6#Ivp&zJa3a!o48R}xL%-mRl#$FUWi zaB|KY#osqxQrq`(#~AIlbJ6&|hN(Pj16ZFl)GBHX4J)G~^W-GStGx}}u@&e*vv=mq z3#ZvXJASL(^Zpw9_=&a9*4RDpj^8tl`Rx3@)?JRZz*5c~3z}3JI>qeo zv!LAM{TD{~giH5xm#2WU-~YrZ0WVI(nEW$wkT+hzwZI&IQ>o<{k?}Yy6%q0mRS-9{V{kqNltIEs9xXJKEZZW za~j3DT9fyg)t9)c&Mqe32cPd-H^vejX4wBN3?n}zp7V|?#}nXG5fEis)OWhgE1hF3 ztZ5fm>d>fBqXsBMa=Il-q2J9c{2^V@jr$LXcIIF@gk_74o90;E|BwL>Wu@bUg&bg= zSmVC;aLc^)-u%NF`-WTRTB1(^Y&eGi3NPyOyn{Z~{>2YdE`WN4(v;p~LwV7X0__;j zUdQ%USQ208WoY^2oJYau)9i)6yk)Y(G88auqb$V!#xA4FKLcQ`zj&h07lU)1t5Fk` zbD}Z(iP?Pr3v@gXPWTX4)7@Ba?|rV*qUOwFBmx%7OcY=oW36#n3)@pKF1H#Q>ctK| zR8vzkkeJ#T@cj%6s->6LS^s*Zg!oflQrHF4G2ZNnJ{MLktnL^uI?fjCs}1)FdKf0C zpJk5;S!MsSv6f@%kZlkklzA8ya)3jBM?gu~>{lyBTP+Q~!KrV9i7m`w9a_JcJQa;Y z`ofwZV2I_SSz5JWzSaHS8W$8UD-jU&u)tziEAw4gAJ$#-%}Z^GW8BFkjCY8&QM z=W1QOD@qGu0)b7q4^k<*ig(nf*jp~0V1350k?1ddE&wd7vD-Yo>;kJ{s!vdT4791d zEUsN-8ZPbtLhURx`yFV10`*;JTX1Eo{l@Qa^8@I88KBliulK(9{`%kFW?#8vmTv$i zFowe%)z#GlkU4Zf69pvl{W!1!6;wx&e4D$@-073;4{sYq{2~z?RZ{jj?PDyN^Im0t z0w(m9Ig>0w?8HH*bBuU{5|qXRnu;|}D#DkN%#U{nThmS3j=x@JaiDmg2ZfZa-e1G` zOKSre4cqP~uk`^0I(ILOfXO4(2MJ-~tdWW@Lg#*xzwP%(K^w@tL*H*U4EY z2ykqaC;s-Eo6ol!>-lqrzCU{2dFKs`utaeS29&eTQM!X_K|Ef#S>m1CapN2d!+b9E zXTkKoCn5NmN{|#7A(ZvQ@c%nkTx5wU4e+7nQ5gZFub^bCZYl`=>t*=w0fsl+oy*sO z2x*-HnZOe%J?C29D6lFuOkkCL*Zhf=a1Is*Lz?9+hM_)US*x-yDqRygG6hUbA)L@# z>;-B8hxxz#;wFna-&}j2H*Y;sr###gvj6wok(;a;GpC6;FAS;CLC3+Lm`@yy4VH6_ zE~meV-`QpO8JM`3$OM?LJII0weVbR8*I0fNRvBv2xaC2>63pj&e=*8#Ynm)7=Tq#Iny7M%3Q#o7z-!GnU$-qo@F~C<_TUE_muE(Qx@byiDD>iORsISC4R*e zLw%UM$`EdoIp;zE6z4n5hU$ZlR-dQlV7shU8M>ZNVqi?^^0rC73Jf@=x)_zSMm8od z=lT_nn#D*%G!HxXUSm1tjtZG4c#+aCY_h+x0_Ykh6~l|7`2FFZTx&_j$vsJ@E(D*a z6TUwVzd0@FGu!BLd}r4IGuH`VGD*qIy<#cKql?E8A+u8I6H$@#a{(Zq3r_gwERcj{ z*j~S(#%`+Tg-$r0!y4C4b&?ZPrwC3}_=?O#gV{@>{=@?6;!9fWLy*wroN=mnOBjgF z*~u8HhkYl@fBx&0_FUMHaHoDqlC{VrH8cu~D;Ef$ISv0k0nZtf`65gNv+H~vSnAzd z&u8=}hE<5!`h1<>q%)D92f!+hp}cJ>Z4^rvxB3#sp90wHa$SCuO9nD^X%9@yN^$PW 
zyS858X-aG*fv>t?Ez{O;$~pbq!w8;qIci*zDGQyNZmhw^h%fHchoa-oFab?QoAjL7 zcHe>t7EP(hnCkR;K7#X@tI}m4xek>9)=TYU`o!bH2%D78=vo;w7huZO%(C425VjV< z$^DkS5Hnk+p>xieFfN~Wf|jayX0hjlf88DdjJV$4{Bnie3gs{Yh~EMIsgMA?k&Y&o zP0|`QX{)S!mo%&volDH#KxnVB+n`F`bz`-~0>{hS`6~)sK(Xa5-1^=&AFLW>q0S{7 z|4BH8@|f5B>@!b9u@?6o%W}mkgZ3NdSTaUB9@2bKAj@JX7l6axdDBHAIIwDR0UigI zc3_Ez2#(Ilb@ok~?M7(Lez5%((o6*eRP`w6vKJZUgHP7-@&KCzcG*jK=gc7sJi>CD zFP-4KAuxwYhQMb#R^ZFYD2M$&gsqs@IwER3nM#ATK1aKUan^kioFIa-(!!$$+UK*W+kd3%;5#(@{@n=^eaT_CkSoyf2k_>> z=f_1~?1k&bSgxsK9BMWl!#mru&z{2$xj}1Yqff+r8}^@{i=l3+ETUTCt1^BoYCk4P z%;K~_u4zF?D?0^^o`gBr*r}l97|be*%?{| z>-_rsu@-{kkLe6Z#C9rV`QMKkW6z&5S@f+=;mBnopeA9TAG61Vrs)HxQd1%UtXI$F z(boC(Nn(KT1@!eS)uC*frNMy{x9aE-Is-%lwH=6l@S@%;-mzQ1cabC-e1H5z>r|y2Fj0+e_oEBO`=UQ!!h5k~r>Lk175Q64zY7&@ zx8^BI0T0olp1Aq~OXANxGGxe*JQY{O(4Pu1R3(pN3biIm71smPhh;1uvz;)jt;v3= z>t;rSicSKv0K?CJ`{pffT{+XaCn1zCBVei!N55PQ%<_9+m_HJ8CTF2)*W3lQf2^vp zm)6yZ`~D2h6$MlSf`Nf-P~uF-2t{$fUNJf_cNeCj^JOcOWpkxhWLdJJoa0lS4_h@w zqw`;G1D0AT-m$B`G2ilZC^3(>#;>3=-?2b%&BjZ`Ogpl~EyIQlGn2_=9tr8WQj#Yw z7;o|5sFpBLHUAp}E)Qtc_KRc2S&jba@1;Z{F@QMezGU=GvnN?Pj*S&Y%Y&&xa`RnueTeZlvs#x4%11# z0KoG^DPZNKN%lXfm^SJ^xlVeu-?t(xwxPCGOh#L}$KXgETY$f|aJ?JR=dn2&Z>!UBQeU7*fEY&{dVp-{wu>APwvDU1qKH>8{@Er?D!5t1d zJ`8y=YfYuCq>@H%6)@`Ar_vDh1$i z>r{K&lHdLr+Vzv_#0abx!uckm$DxdHmw3MHIkL;duxhIWk3~GhWM# zL5Zd!DC3kQD+zHp{!a9@o&x6JnWx&PIf?I_Nn@9B+f@5fG(QCyLBhPxXr7D8O&@3M zT{hdr3KvzTKvaFqSSU!~!s7^(M(FV7s0QmfE#&t`I>|f^L1!NngrHhoJIOv-$wzgk z1c|_9qcqQAry|oZ)!eolK|&C~gc5=?Zx!f77?9MfoO4h=u-{XuRDR3^ug+w>q5rRo zUX=rhbyq7kj(WA!wDHb-HK$2EJ$Ae=1q8?NC`6ul=bUp6o^!i6*FhLl zg)y`+pTYx!1`Ub=hYB<44D5~lNYJP%1zpqWjEJF+YE|vDsNb2-AA=87LjZ+Z*W2tq zny_SL^jJ$4?TP+W@H&35y)$Zzg$ZZjm_Gt0E+sk4GO@m1MHWyQxr|yAoNIlI2tLlU zdXygztMd>(-LYLJBWR)@u}AEnTeddY-}&iu?Y8p1CTnUv-@(WVOq!=%ybyOztaXa& zc%|I$WCt;mX#Q~d)(O(Nnr_5nz`P&^DXiyA~Tf<APr zvNAL59YnZ6;@srZrcbr+equTS4YckIsCKiUasdTkqU+Ih;eE@#W~;HuXA+J=WHktP z3>UVR<(IV-ciC-me*UoxiT)BJEeMXd;3PkS6LTP#fFWmza~)hXa{;i^J{LTjNu!^)bTFSg<)0B)i|5|r z6HwtiS4kfFn61g{_Zlbd7o;4D;e=1H7j^oKdVAZ!W{a9L&)hW=z%+ZNuE%4pT|F%v zgW>YO{JUmrT2sB~puYzlTtY-t9CUh;-G3{Xsdt>J%^EIG=SvO?zfn;A14s~8O!hhA zm}aRWb@V7=zQa1)jSWrqnkC=!*oL$W10nnDXUyFJVgmQOAlpLCBD~*q-A^3TDFilH zwVC{~^LG6@X)76)zEcz}SKd3FuoD3j`MkS_HE{x%xOF^}h|9r2_l4V7$Ic|9JNd93 z6!b}xtbgVl?`kGTCziSJBVzt)`~c%soP9J#4*r*&*ZVo zIy^JDrGsB*1vu#S^$lV<%X3+Z*RhF4r{+obySp4myoi&IwOrU>KYT1>%62i%Hs>R` zrjBPVYVO?6#FzTsFPiPv+4Z7daSA5@BT3_d^l1k40^sZ@8*3-^-Bnww-Ddff{gWUv zmJ)UqchQEXbM}nO#N$9QwKf|`Am%#Ca)N5k z&$mM?t$;iAIo8x*$?rFIB0EuF_xF5Bh-&; zqfh!&)q`Z>s_ZiWcNt+qbi{Xx|J8BnM$3)dhCBQD!f2KRC%3EzqYYd*$**d)J$pD9wxF1Yj`?MOn)8 zsagUXvfkpG8+{YeL+8#nGNXs_V(-^UeW-}70HS9AtOJOta~__}O^MjL(B9Pk6d%d4 zjWuN=OLYqDCA7^QPlGAF)t3{6<5+9xJ{svMtfos+u^E*{qYbh8(vp%Er* z-8bE3o!x>&;QB|DiD(4a%`fx(E9RXK(Jg864`NV*2!;+4*E23_=V#-0Wv~y46~+{M9xOoRGu-I zJh^hUcxShN17w0X8KF|jK0%*FhrESz;5|^b*qz^OvLt|6i#aut^cud!9G@lI8PrjiiP`=cZT0r{BZ9BQ36{-)An~>dTHeAa>@D{9Uv05l=hTM{gftAj zv5ArXCvp0J#Z8G=-)L{Rdbj0c9u^#(vd=;0jrXP4-!`>BY0sSACi;J$LZFaTJZGI; zZS-()1&7-()TSUd0Z>VCwC=2V4Uj64`$|!5mqaO-gD~?QQ#oyyG>CxulxRdKWek8; zC~IK0BEQpy-yw&VPi`Uv6gNs3b*GKj{K&ib9TJ}@%%9R|cen4eFKnJG8^xT~%*U$E z4y+yW#|}4xOy2T4Gs^gGA$s$^dubIm+VlJ-fAi8c-hbSg4yjBI?h`d@zP@?8ucA;1xp?%LgYtSira=nf*3_komwwOdppTw z{JGGY%s3`aCI6!|b1Y0YHb@-TkGZ?8&!Rfc`#f^rD8NS&*-LQj?|VU-+k1=TxeQr4 zM0Mm%&}bYo$vkz+xxuAq_OMETH&Od}$J%w$_j_8CkA?i^j=q1Z{Rw+|;|wthJxLzX zan`dXriQ1lGP2ZQ%17pix{o6p5N2WhA6{ePBR45qQU8fyuZQ3CWCLMh`ne?XL|yDy zkk+J9%qNTZ07S=todyx9q*W~Qoip=tFMr24=bJm^vVbr06QjZo-mjv0Jz-ec>X?C- zN&uI0x>J%EfTi47ncS7pER$J4F)247fz z2#WcgS|K==WvWvjM$M}!{*DuYpm`xIwU-BFy?w)iO>WwS6D>PMMxv=1fAcfdeqg^H 
z*0hG{qNzKML-Dheh(axvYAJ)E;eaav5l*a@o!8PC*FU?#VUt}w9yoUOhzd9(YIe79 z=@x%_1|z#>d)CGE;syT%F7#MZ#)6}X-{DmIhH3V~zxcrySFQcg8LhVFhi9hGoa5UU z9VHB>j8X1M_IET%cV4&G)0c#F%jOQYn&5!V=KMBy@T^6_bvO8C7tOpv)W>?ME*7H% zI9BdY<>ACC)qYAcKHjx(k`Ib^`OWt<@yTK84|TWrYoDme<0J>xDQ~fF{ce*?E#AT; zhwl}cW!^QnTb`XgmUw{8G_p`U0{6;=I>p)v@^MjUT=G~q2(X_C?4I-PNY1+)N z(`F{kLhGNlFf~w{UB+S^a?kADbj22{ermlqH~Id4C{r=xD4{w}5m3=_+(srF`+%F0 zaAv&)*G042LmV=i1lZ^NI(e`;WO` zq3wEiS(p2nuG8_|U)nS0E*8Z^3(mEc04iT5Bdl4n-4c~>POx;@iPxP({|VV{MpK(C z8D3?m%~sbhlVU{Wbh0gfaJK=|(=O)Cu37EL2x4-WtIX{7n?u%7zi$0B-?M+ogL5c5 z?H3*Gg{vO6T+nRu0OPR`VWe$5nCSY*T44TU&&_)lszs@c&@4uj83+Ao#;h@{)G7K) z2e>3MYYMNLyTX<*@(N}IjG1G*gc^ZBDfMWeY>;r|UjIjBsX3J;ZB_e;RU0I>7EH|(!p)+uUU z7M;o(hm7F_YhnrT%?rO{c@+cx$j%qZ3DF;@?@l?45H$P&n7Gi$Zo*wd(K1DwB@Le` z_HZjO2hHqL!kv=X1r64Y*8=<|!B67MUw6h4pL<24ScZEVE+|e;fuo2E#YeR^G)}jd ze)DOIg2n?6Y1{lDW0yJ0w|Vc^YnB{D*dI2&*_5AhKPR{}GPCTv)aW`L{dZkl0V zbN7pZCd~WX#ev3i%)6Lq&-?wlZy&OoFPtYH(`R6U(uAc-aURTUpBx&n90z*VDfq#E zev`H`PV(BQ2{KfsL==|;SXVOa^q~0rYKEXl1aQ?NvhdzR!>iBadB_#uBc}vs{qsq*rj=fxG ze59fS{vT)3R>PajVi9>;l$j zP2bXu`@rdM5&?4zWP}28Bejxh(cWA%lwve|BOUi8=lWExOke2Ep;M}$)nJ3cku~CH zd{=4sKlDFwaxd{tp1ts9P{fb=OkIXuMAOD-P^`l)GL*aCG1nv_Q(d(9Hs1rZ;pSON zgAzQ}kn@N-PwqRs^3KxGyRZJFt%?z-Hjr^Fo@g%6cgmQuF84j-0}&_OO}BWWr5EWj zsTkJ8ZDRn*q-7S|@{lDFJNFo0&g(zVq^&uwf^gR({?iw%$(zx3x!u0?Fb|UZc}3jQ zp}H7d%5%Uue~NYNsblUeJ8HKI=@>i~IHo!y2NR?!l*RTnS2T~^Mrnm#JT z$YO%jEQ6mv|4Cc3&`FfdA}Jqvo;N}D_Eqa1vwW$PZDQW8%g03WByk2zOEx^=FrYX` z6%2tNXk=!Pn_iaDH(fp zc_88(_%3_aWgWgJM1khirWrY#nm3Zpqk2{PF77cYIl^(lq9a47MkKK%2oO2**e5H{ zcG(U0T0U`7_|8UgL3$K^Ubu3Hg~F)-sC_U|$N&v9V`}z?Dj0O4crEutwTMo|+{JmU ztWKw?vXIi~8wlBp08}Tq%1KNTZ+d|P6V;(ITGTPFBv>?g6jax3`8&I5hOcV!BE}DTecNm#()w^Tvpk_xt65PBE`XeuJ_V z*?a;UG^G?3ZOP1n0CaM+%E$DjPuglIP7@PDXNd$O4Clg`V2-ltb$vsly?FgzUKWrs z(KgXB&j9P{I}TV4(^|xY?eAc6+5!az(V8GN3zy-~chXix|0tWOk3(O;9IfJ%)-z@= zu-dGZWEYsg!=5v{gm5Rt+`dfJ^nxh#M7#{x5l$^fO|a zNYjitb`y+%(QVWbXClbgK_yNKrgTnLwT4+0Ksk{!z`sES z{eiGKG{Y(=gMft(@Db(*kE^4_G5I=ETITGS;*Jnj&G|*?MP?2_7liAr1mn zdG4>`q^+ky9x)aN2r5sCQY8SRzmDlknY1-V%0y)jg>0D*2QbWmc5f~kfN8EKBkM-K zrp9^e%q{;PYtq)3fq?UU@Q@*S<*AB_L^@rStb_*734=C3Glh^#-_gv{Z0I=3xtbu% z=I$uHL-7~nbuoS@YQ3f0i+t>iGrD1nDY~uhb;b*A8PADBQ zVgs{2wtX>tCtZe8omw#4LPkRqba65gU+qa-8F09OqKyD5kUc1%m~Da>%6%cAJQJCc z0jj?k{%rndE*Q7p&$HhqZIuBk0T){Ulo{$n$xWUQ%oI>b9#C;vNq16H!|2}$QnS_N z&dQ{%Its0-j^W$BV8{>$P!2FM;9yOj^u$<`mrSR%0F~Ls$55u`ov!-Ccb$sIup89S zj6ma%KrW9dRQWbEI#K%_*k%xbV+8j6+TVg80^@=`KDNjm@lH3k3MZRzy9 z7GgCHWVNu(-fz=1M(%<0Jpt_uiEbT+hVNfM z`yi37*j_d=SuFEwAoYY58uMYjSD<|i+x1eJ2HXY$D9*jJ&Z*sXL^`BpA0 zDm0;!pll@d9CWzFoRH0)VCFzmu84>ST%N92AWfqb4Z%C^4@pLc0`RR3qu#;cK9uxg z4^Nb@PQ%U!(H3zimjRBI%BBUX*tWFueQ^3BJX=bE`$kyIz;yg=l5H z1b?y4C_2L?!%*9q^QV9cd}RQYq8WoTz(`48j8(9$l>v);!TI+}emV5byKufvg1xwO zSTTG*OtTO!AnAU{&7bf9NlD^&uw79MJuX^XCz$!UdGezGrmP85~1p zF8ZFZJDV~TbFrWG`rNt59|6D~gmal&QZ~E}e4%>y|K+g!tps!P;y%I6*j`#bq5%5o z^@Il)1m@d#tzg?3@E^6X4zl09EKx8lxd+}=2ZX9m+6pH+8V>T503ROIiTEaC_^QP znMue26&m$s0F`E0WKWp6^Z>=MeugcuulJ2eB+6wb7S62*`gngb%~_6Kcz-78f%8+i zR@V+KDe=wtlK)@x@jq%|o%a%HZvTDY&yLoRi_9qL037h)`LLr`IrL^ua8pTX=>WyN zUZ7+EenODR3uH`c@;v_}v>u9QD9^t;62>$xpge{-qpMv}ljl0WazF_tkK`vkv(^X8 z*A6Q!4V6ENhn3_*pS;cQ&jr&dIRA8%js-4Fo9h!(V5U+l<3y=}v<-!5Gw`H%DopJM zwC8Ea^nrfIU=C2JAV6{dQ^2KOQc<#j>G@FJKA^LYgmsDos5EnNGCq{n6gUVdHtC)4 zd?@K%O?{|@>+hucz>m5M{nx1d{TGM?p-=8%lHb%!LZm&iRwc<@T3$W?wtts(4d9q| zYKW?4WO5Uu1NK6|PCkK?tkIO`10x|Yiz-ez_nGsdf}=b$Fv<(|q1^W*6Xi)5N=eMk zFjQ!iX9}``;6qis0)9@Os?ZEK`CKo$CO3b@$axVG0QY}zEgtf&x8qJcW2ldM34GQI zE25p|53u5q80vE%6LL;Dq1h0VCkUJB6`K@LjBj!zjGdEm*+AdqrClG4jq>Ef;Q>&r 
z8~}RIt_W3{+aCaNG_)@SDWJ+wDYEM{yaXwVq2dK!4THyjJZ8Mr%m7y`-F4Ipb#_XmK#!2o= zI|feh7Qv4I0Wn={sMqO47Ujj!p$a4nl~j-oT>m6vhK_{!cMe0P(`**tjG^`%W}gm~ z2#@llvO$E&lUe~6P>ebc`>rxH21oE21=r-|X42N-iNqZO?Z20-N*6>RRjVl8r$eO! zWdpwsl~!UXlVhkfRfK&?xyk49GErV4+J_31MqGwUy9)K5Fjf?GC;?EBMSB!b3Euas z(y5GaZ#++rI;f~SMVnOw6sN!j14`E^H)T$FB3L#sgxr)1s01wmhT-U89VXs+5+BOX zP|VE)Eh?AiJ6FFBmD&^5M=qe!feaOzCUXJBr&p>B^`UT?8;?sorwIp_UW z7(p*YHjpdj+B#G^3{ceMO-Q7@^|5o$DI7dV$9V)mJT9ot+oE--lq||qda(?ZjAke! z2vEZI2Q{rw@P)&&73kD%)=`AJ`y@#G3=}>7e8WsSR)ypJz zStpxyCk0%4G{G78Vi_u;e8mt=9;cM^QC?s!&n+mE`%vz-!^(yagnj&uNt@s{4`EGq zM40UC4>I>)GLhX=ZsFUAii){mH<%aYrC8ZOdDc3PpjFz7=g<^eBf3q5j=>}++J?U~+DGx7SF>_<`q~$A8a11Ia$RtYBoXGbX?6)?` z$t_PMvE=7@CgbnaBY_K%3^kE*VN-cZ08T0Q7{AKUm~)gyc}%2tP$BK3H)$)>#4K~m zLo40Z2c5eQmZghJN~GWc_Q3+XJX{CLV3x^~!-Z~5c_Lai$ZR7Hqe@3sYzpNoDY1M- z3f{6rIr@-aWIESnwi8$^NSKP-Pa!NDI;^BHfyHiY&Aqc-#e+E5AdInHAIhK0Q<}8Z zWhlOmy-Futp3FG*uglmfqQEXxhYFZ1!wjPG5MeWrMPWZ%lS17*8D+8$gx}Th`vUkp5170i z-nVCN4sZ02vp)0z?7zf_7;VcAQb@{8UZ|-Nm2#p*MKM%VKONL4Uoi~TNn3{}%8e6RNwO7< zEE@=)$?u;4Fs7&^R2@Ju0}wPAich_;{U09X1^1Kp9|sl8WxPpSarRv}NgROiJEm6I zH>4-QDXl4*kUeZjN7IBxc~Xswt=LQij~rGhuv1a`VN6`I81}Q3lTdiAM>d=Z%cNg{ z{f;i=D=s4$Dxp>nWd@l%Hh>C=@dAUMcTn0BV61KI_v`bErjh} z70xrMk=2m((?eOw1UlJt7!i>NCtWb)+~RT2k#hlL&QNqffr^DZ9AOidPQs|o*$2~%7*hzsx#C?s zsOa3{v6xA4hH|H!bA};gC3O(V6_u8kP{VL006n;!|#eY(o9eQeK2@PA@?7Q z%wedol-o~*g|dG7>QCCLxUTM_IaDeFvYGnOQ0Om}V~W#X%K0>%R7sAHDcZIgY89B; tleQYUdBS%#44hLT_)GrJuf(LS{|{c9U4Am~c;)~A002ovPDHLkV1kP<=pg_A literal 0 HcmV?d00001 From c15087e7aa3a24e047fc55b497149917017a1378 Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Wed, 19 Jan 2022 15:45:42 +0000 Subject: [PATCH 28/33] Updated, edited sections for clarity --- .../en/faq/networking/cni-providers/_index.md | 71 +++++++++++-------- 1 file changed, 40 insertions(+), 31 deletions(-) diff --git a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md index fc24cba8085..7894b6c6745 100644 --- a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md +++ b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md @@ -14,11 +14,11 @@ Kubernetes uses CNI as an interface between network providers and Kubernetes pod For more information visit [CNI GitHub project](https://github.com/containernetworking/cni). -### What Network Models are Used in CNI? +## What Network Models are Used in CNI? CNI network providers implement their network fabric using either an encapsulated network model such as Virtual Extensible Lan ([VXLAN](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan)) or an unencapsulated network model such as Border Gateway Protocol ([BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol)). -#### What is an Encapsulated Network? +### What is an Encapsulated Network? This network model provides a logical Layer 2 (L2) network encapsulated over the existing Layer 3 (L3) network topology that spans the Kubernetes cluster nodes. With this model you have an isolated L2 network for containers without needing routing distribution, all at the cost of minimal overhead in terms of processing and increased IP package size, which comes from an IP header generated by overlay encapsulation. Encapsulation information is distributed by UDP ports between Kubernetes workers, interchanging network control plane information about how MAC addresses can be reached. Common encapsulation used in this kind of network model is VXLAN, Internet Protocol Security (IPSec), and IP-in-IP. 
@@ -26,11 +26,11 @@ In simple terms, this network model generates a kind of network bridge extended This network model is used when an extended L2 bridge is preferred. This network model is sensitive to L3 network latencies of the Kubernetes workers. If datacenters are in distinct geolocations, be sure to have low latencies between them to avoid eventual network segmentation. -CNI network providers using this network model include Flannel, Canal, Weave or Cilium. Calico by default is not using this model but it could be configured. +CNI network providers using this network model include Flannel, Canal, Weave, and Cilium. By default, Calico is not using this model, but it can be configured to do so. ![Encapsulated Network]({{}}/img/rancher/encapsulated-network.png) -#### What is an Unencapsulated Network? +### What is an Unencapsulated Network? This network model provides an L3 network to route packets between containers. This model doesn't generate an isolated l2 network, nor generates overhead. These benefits come at the cost of Kubernetes workers having to manage any route distribution that's needed. Instead of using IP headers for encapsulation, this network model uses a network protocol between Kubernetes workers to distribute routing information to reach pods, such as [BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol). @@ -38,13 +38,17 @@ In simple terms, this network model generates a kind of network router extended This network model is used when a routed L3 network is preferred. This mode dynamically updates routes at the OS level for Kubernetes workers. It's less sensitive to latency. -CNI network providers using this network model include Calico. Cilium can also be configured with this model although it is not the default mode. +CNI network providers using this network model include Calico and Cilium. Cilium can also be configured with this model although it is not the default mode. ![Unencapsulated Network]({{}}/img/rancher/unencapsulated-network.png) -### What CNI Providers are Provided by Rancher? +## What CNI Providers are Provided by Rancher? -Out-of-the-box, Rancher provides the following CNI network providers for RKE Kubernetes clusters: Canal, Flannel and Weave. For RKE2 Kubernetes clusters: Canal, Calico and Cilium. You can choose your CNI network provider when you create new Kubernetes clusters from Rancher. +### RKE Kubernetes clusters + +Out-of-the-box, Rancher provides the following CNI network providers for RKE Kubernetes clusters: Canal, Flannel, and Weave. + +You can choose your CNI network provider when you create new Kubernetes clusters from Rancher. #### Canal @@ -67,8 +71,8 @@ For more information, see the [Canal GitHub Page.](https://github.com/projectcal Flannel is a simple and easy way to configure L3 network fabric designed for Kubernetes. Flannel runs a single binary agent named flanneld on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP). Packets are forwarded using one of several backend mechanisms, with the default encapsulation being [VXLAN](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan). Encapsulated traffic is unencrypted by default. 
Flannel provides two solutions for encryption:
-* [IPSec](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#ipsec), which makes use of [strongSwan](https://www.strongswan.org/) to establish encrypted IPSec tunnels between Kubernetes workers. It is considered experimental
-* [Wireguard](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#wireguard), which is a more performing alternative to strongswan
+* [IPSec](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#ipsec), which makes use of [strongSwan](https://www.strongswan.org/) to establish encrypted IPSec tunnels between Kubernetes workers. It is an experimental backend for encryption.
+* [WireGuard](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#wireguard), which is a faster-performing alternative to strongSwan.
 
 Kubernetes workers should open UDP port `8472` (VXLAN) and TCP port `9099` (healthcheck). See [the port requirements for user clusters]({{}}/rancher/v2.6/en/cluster-provisioning/node-requirements/#networking-requirements) for more details.
 
@@ -76,6 +80,24 @@ Kubernetes workers should open UDP port `8472` (VXLAN) and TCP port `9099` (heal
 
 For more information, see the [Flannel GitHub Page](https://github.com/flannel-io/flannel).
 
+#### Weave
+
+![Weave Logo]({{}}/img/rancher/weave-logo.png)
+
+Weave enables networking and network policy in Kubernetes clusters across the cloud. Additionally, it supports encrypting traffic between peers.
+
+Kubernetes workers should open TCP port `6783` (control port), UDP port `6783` and UDP port `6784` (data ports). See the [port requirements for user clusters]({{}}/rancher/v2.6/en/cluster-provisioning/node-requirements/#networking-requirements) for more details.
+
+For more information, see the following pages:
+
+- [Weave Net Official Site](https://www.weave.works/)
+
+### RKE2 Kubernetes clusters
+
+Out-of-the-box, Rancher provides the following CNI network providers for RKE2 Kubernetes clusters: [Canal](#canal) (see the section above), Calico, and Cilium.
+
+You can choose your CNI network provider when you create new Kubernetes clusters from Rancher.
+
 #### Calico
 
 ![Calico Logo]({{}}/img/rancher/calico-logo.png)
@@ -93,20 +115,15 @@ For more information, see the following pages:
 - [Project Calico Official Site](https://www.projectcalico.org/)
 - [Project Calico GitHub Page](https://github.com/projectcalico/calico)
 
+#### Cilium
+
+![Cilium Logo]({{}}/img/rancher/cilium-logo.png)
+
+Cilium enables networking and network policies (L3, L4 and L7) in Kubernetes. Cilium by default uses eBPF technologies to route packets inside the node and vxlan to send packets to other nodes, unencapsulated techniques can also be configured.
+
+Cilium recommends kernel versions greater than 5.2 to be able to leverage the full potential of eBPF. Kubernetes workers should open TCP port `8472` for VXLAN and TCP port `4140` for health checks. Besides ICMP 8/0 must be enabled for health checks too. Fro more information check [Cilium System Requirements](https://docs.cilium.io/en/latest/operations/system_requirements/#firewall-requirements)
 
-#### Weave
-
-![Weave Logo]({{}}/img/rancher/weave-logo.png)
-
-Weave enables networking and network policy in Kubernetes clusters across the cloud. Additionally, it support encrypting traffic between the peers.
-
-Kubernetes workers should open TCP port `6783` (control port), UDP port `6783` and UDP port `6784` (data ports). 
See the [port requirements for user clusters]({{}}/rancher/v2.6/en/cluster-provisioning/node-requirements/#networking-requirements) for more details. - -For more information, see the following pages: - -- [Weave Net Official Site](https://www.weave.works/) - -### CNI Features by Provider +## CNI Features by Provider The following table summarizes the different features available for each CNI network provider provided by Rancher. @@ -132,34 +149,26 @@ The following table summarizes the different features available for each CNI net - Ingress/Egress Policies: This feature allows you to manage routing control for both Kubernetes and non-Kubernetes communications. -### Cilium - -![Cilium Logo]({{}}/img/rancher/cilium-logo.png) - -Cilium enables networking and network policies (L3, L4 and L7) in Kubernetes. Cilium by default uses eBPF technologies to route packets inside the node and vxlan to send packets to other nodes, unencapsulated techniques can also be configured. - -Cilium recommends kernel versions greater than 5.2 to be able to leverage the full potential of eBPF. Kubernetes workers should open TCP port `8472` for VXLAN and TCP port `4140` for health checks. Besides ICMP 8/0 must be enabled for health checks too. Fro more information check [Cilium System Requirements](https://docs.cilium.io/en/latest/operations/system_requirements/#firewall-requirements) - -#### CNI Community Popularity +## CNI Community Popularity The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in January 2022. | Provider | Project | Stars | Forks | Contributors | | ---- | ---- | ---- | ---- | ---- | | Canal | https://github.com/projectcalico/canal | 679 | 100 | 21 | -| flannel | https://github.com/flannel-io/flannel | 7k | 2.5k | 185 | +| Flannel | https://github.com/flannel-io/flannel | 7k | 2.5k | 185 | | Calico | https://github.com/projectcalico/calico | 3.1k | 741 | 224 | | Weave | https://github.com/weaveworks/weave/ | 6.2k | 635 | 84 | | Cilium | https://github.com/cilium/cilium | 10.6k | 1.3k | 352 |
-### Which CNI Provider Should I Use? +## Which CNI Provider Should I Use? It depends on your project needs. There are many different providers, which each have various features and options. There isn't one provider that meets everyone's needs. Canal is the default CNI network provider. We recommend it for most use cases. It provides encapsulated networking for containers with Flannel, while adding Calico network policies that can provide project/namespace isolation in terms of networking. -### How can I configure a CNI network provider? +## How can I configure a CNI network provider? Please see [Cluster Options]({{}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/options/) on how to configure a network provider for your cluster. For more advanced configuration options, please see how to configure your cluster using a [Config File]({{}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/options/#cluster-config-file) and the options for [Network Plug-ins]({{}}/rke/latest/en/config-options/add-ons/network-plugins/). From 30ca8c2c2b3f72cd612dee46d0cbb481562ceedb Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Wed, 19 Jan 2022 15:51:48 +0000 Subject: [PATCH 29/33] Reworded phrasing --- content/rancher/v2.6/en/faq/networking/cni-providers/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md index 701ae0847c4..815bb75a4b7 100644 --- a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md +++ b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md @@ -38,7 +38,7 @@ In simple terms, this network model generates a kind of network router extended This network model is used when a routed L3 network is preferred. This mode dynamically updates routes at the OS level for Kubernetes workers. It's less sensitive to latency. -CNI network providers using this network model include Calico and Cilium. Cilium can also be configured with this model although it is not the default mode. +CNI network providers using this network model include Calico and Cilium. Cilium may be configured with this model although it is not the default mode. ![Unencapsulated Network]({{}}/img/rancher/unencapsulated-network.png) From c3da4c35af1d6a6ac067f813e770c398d9ee747c Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Wed, 19 Jan 2022 15:56:20 +0000 Subject: [PATCH 30/33] Edited Cilium section --- .../rancher/v2.6/en/faq/networking/cni-providers/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md index 815bb75a4b7..50dbe3525f9 100644 --- a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md +++ b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md @@ -120,9 +120,9 @@ For more information, see the following pages: ![Cilium Logo]({{}}/img/rancher/cilium-logo.png) -Cilium enables networking and network policies (L3, L4 and L7) in Kubernetes. Cilium by default uses eBPF technologies to route packets inside the node and vxlan to send packets to other nodes, unencapsulated techniques can also be configured. +Cilium enables networking and network policies (L3, L4, and L7) in Kubernetes. By default, Cilium uses eBPF technologies to route packets inside the node and VXLAN to send packets to other nodes. Unencapsulated techniques can also be configured. 
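+
+For example, a minimal sketch of selecting Cilium on an RKE2 cluster through the server configuration file (the `cni` option is the RKE2 server setting; the path and values shown here are illustrative, not a complete configuration):
+
+```yaml
+# /etc/rancher/rke2/config.yaml (minimal sketch)
+cni: cilium   # select Cilium instead of the default CNI, Canal
+```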
 
-Cilium recommends kernel versions greater than 5.2 to be able to leverage the full potential of eBPF. Kubernetes workers should open TCP port `8472` for VXLAN and TCP port `4140` for health checks. Besides ICMP 8/0 must be enabled for health checks too. Fro more information check [Cilium System Requirements](https://docs.cilium.io/en/latest/operations/system_requirements/#firewall-requirements)
+Cilium recommends kernel versions greater than 5.2 to be able to leverage the full potential of eBPF. Kubernetes workers should open TCP port `8472` for VXLAN and TCP port `4140` for health checks. In addition, ICMP 8/0 must be enabled for health checks. For more informationm check [Cilium System Requirements](https://docs.cilium.io/en/latest/operations/system_requirements/#firewall-requirements).

From ada7fe312adb1dbe6a23851a6cbc37508c5e2769 Mon Sep 17 00:00:00 2001
From: Jennifer Travinski
Date: Wed, 19 Jan 2022 15:58:34 +0000
Subject: [PATCH 31/33] Fixed typo

---
 content/rancher/v2.6/en/faq/networking/cni-providers/_index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md
index 50dbe3525f9..dce89e17c82 100644
--- a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md
+++ b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md
@@ -122,7 +122,7 @@ For more information, see the following pages:
 
 Cilium enables networking and network policies (L3, L4, and L7) in Kubernetes. By default, Cilium uses eBPF technologies to route packets inside the node and VXLAN to send packets to other nodes. Unencapsulated techniques can also be configured.
 
-Cilium recommends kernel versions greater than 5.2 to be able to leverage the full potential of eBPF. Kubernetes workers should open TCP port `8472` for VXLAN and TCP port `4140` for health checks. In addition, ICMP 8/0 must be enabled for health checks. For more informationm check [Cilium System Requirements](https://docs.cilium.io/en/latest/operations/system_requirements/#firewall-requirements).
+Cilium recommends kernel versions greater than 5.2 to be able to leverage the full potential of eBPF. Kubernetes workers should open UDP port `8472` for VXLAN and TCP port `4240` for health checks. In addition, ICMP 8/0 must be enabled for health checks. For more information, check [Cilium System Requirements](https://docs.cilium.io/en/latest/operations/system_requirements/#firewall-requirements). 
## CNI Features by Provider From 9bbbb8b111db803da039491aba0b971610a15def Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Wed, 19 Jan 2022 16:02:26 +0000 Subject: [PATCH 32/33] Updated table --- .../v2.6/en/faq/networking/cni-providers/_index.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md index dce89e17c82..189c4d95a9b 100644 --- a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md +++ b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md @@ -130,11 +130,11 @@ The following table summarizes the different features available for each CNI net | Provider | Network Model | Route Distribution | Network Policies | Mesh | External Datastore | Encryption | Ingress/Egress Policies | | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | -| Canal | Encapsulated (VXLAN) | No | Yes | No | K8S API | Yes | Yes | -| Flannel | Encapsulated (VXLAN) | No | No | No | K8S API | Yes | No | -| Calico | Encapsulated (VXLAN,IPIP) OR Unencapsulated | Yes | Yes | Yes | Etcd and K8S API | Yes | Yes | +| Canal | Encapsulated (VXLAN) | No | Yes | No | K8s API | Yes | Yes | +| Flannel | Encapsulated (VXLAN) | No | No | No | K8s API | Yes | No | +| Calico | Encapsulated (VXLAN,IPIP) OR Unencapsulated | Yes | Yes | Yes | Etcd and K8s API | Yes | Yes | | Weave | Encapsulated | Yes | Yes | Yes | No | Yes | Yes | -| Cilium | Encapsulated (VXLAN) | Yes | Yes | Yes | Etcd and K8S API | Yes | Yes | +| Cilium | Encapsulated (VXLAN) | Yes | Yes | Yes | Etcd and K8s API | Yes | Yes | - Network Model: Encapsulated or unencapsulated. For more information, see [What Network Models are Used in CNI?](#what-network-models-are-used-in-cni) From f2b335d589c4f284873ab21786c9d86f0bd25404 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Wed, 19 Jan 2022 21:00:30 -0800 Subject: [PATCH 33/33] Apply 72223b7e to Rancher 2.6 docs Fix a typo --- .../configuration/servicemonitor-podmonitor/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.6/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md b/content/rancher/v2.6/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md index 20dc901da31..d2f848b6e67 100644 --- a/content/rancher/v2.6/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md +++ b/content/rancher/v2.6/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md @@ -26,6 +26,6 @@ For more information about how ServiceMonitors work, refer to the [Prometheus Op This pseudo-CRD maps to a section of the Prometheus custom resource configuration. It declaratively specifies how group of pods should be monitored. -When a PodMonitor is created, the Prometheus Operator updates the Prometheus scrape configuration to include the PodMonitor configuration. Then Prometheus begins scraping metrics from the endpoint defined in the ServiceMonitor. +When a PodMonitor is created, the Prometheus Operator updates the Prometheus scrape configuration to include the PodMonitor configuration. Then Prometheus begins scraping metrics from the endpoint defined in the PodMonitor. Any Pods in your cluster that match the labels located within the PodMonitor `selector` field will be monitored based on the `podMetricsEndpoints` specified on the PodMonitor. 
For more information on what fields can be specified, please look at the [spec](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#podmonitorspec) provided by Prometheus Operator.
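+
+For example, a minimal PodMonitor might look like the following sketch (the name, namespace, label, and port here are illustrative assumptions, not values from any chart):
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: PodMonitor
+metadata:
+  name: example-app
+  namespace: default
+spec:
+  selector:
+    matchLabels:
+      app: example-app      # Pods carrying this label are selected
+  podMetricsEndpoints:
+    - port: metrics         # named container port that exposes metrics
+      interval: 30s         # scrape interval
+```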