Restore CIS docs to its original place

Signed-off-by: Guilherme Macedo <guilherme@gmacedo.com>
This commit is contained in:
Guilherme Macedo
2022-01-01 14:46:33 +01:00
parent 23381cc668
commit 80ffb7f8d8
5 changed files with 1 additions and 287 deletions
@@ -46,7 +46,7 @@ The Benchmark provides recommendations of two types: Automated and Manual. We ru
When Rancher runs a CIS security scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests.
For details, refer to the section on [security scans.]({{<baseurl>}}/rancher/v2.6/en/security/cis-scans/)
For details, refer to the section on [security scans]({{<baseurl>}}/rancher/v2.6/en/cis-scans).
### SELinux RPM
@@ -1,98 +0,0 @@
---
title: Configuration
weight: 3
---
This configuration reference is intended to help you manage the custom resources created by the `rancher-cis-benchmark` application. These resources are used for performing CIS scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization.
To configure the custom resources,
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to configure CIS scans and click **Explore**.
1. In the left navigation bar, click **CIS Benchmark**.
### Scans
A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed.
When configuring a scan, you need to define the name of the scan profile that will be used with the `scanProfileName` directive.
An example ClusterScan custom resource is below:
```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
name: rke-cis
spec:
scanProfileName: rke-profile-hardened
```
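Assuming the manifest above is saved as `rke-cis-scan.yaml` (the file name is illustrative), a scan can be triggered and monitored with `kubectl`:

```shell
# Trigger the scan by creating the ClusterScan resource
kubectl apply -f rke-cis-scan.yaml

# Watch the scan's progress; a ClusterScanReport is created when it completes
kubectl get clusterscans.cis.cattle.io
kubectl get clusterscanreports.cis.cattle.io
```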
### Profiles
A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark.
> By default, a few ClusterScanProfiles are installed as part of the `rancher-cis-benchmark` chart. If you edit these default benchmarks or profiles, the next chart update will reset them, so we advise against editing the default ClusterScanProfiles.
Users can clone the ClusterScanProfiles to create custom profiles.
Skipped tests are listed under the `skipTests` directive.
When you create a new profile, you will also need to give it a name.
An example `ClusterScanProfile` is below:
```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScanProfile
metadata:
annotations:
meta.helm.sh/release-name: clusterscan-operator
meta.helm.sh/release-namespace: cis-operator-system
labels:
app.kubernetes.io/managed-by: Helm
name: "<example-profile>"
spec:
benchmarkVersion: cis-1.5
skipTests:
- "1.1.20"
- "1.1.21"
```
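Because the defaults are reset on chart updates, a practical way to build a custom profile is to clone an existing one (a sketch; the profile and file names are examples):

```shell
# Export a default profile as a starting point
kubectl get clusterscanprofile rke-profile-hardened -o yaml > my-profile.yaml

# Edit metadata.name and the skipTests list in my-profile.yaml,
# then create the new profile
kubectl apply -f my-profile.yaml
```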
### Benchmark Versions
A benchmark version is the name of the benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark.
A `ClusterScanBenchmark` defines the CIS `BenchmarkVersion` name and test configurations. The `BenchmarkVersion` name is a parameter provided to the `kube-bench` tool.
By default, a few `BenchmarkVersion` names and test configurations are packaged as part of the CIS scan application. When this feature is enabled, these default BenchmarkVersions will be automatically installed and available for users to create a ClusterScanProfile.
> If the default BenchmarkVersions are edited, the next chart update will reset them back. Therefore we don't recommend editing the default ClusterScanBenchmarks.
A ClusterScanBenchmark consists of the fields:
- `ClusterProvider`: The cluster provider name for which this benchmark is applicable, for example RKE, EKS, or GKE. Leave it empty if the benchmark can be run on any cluster type.
- `MinKubernetesVersion`: The minimum Kubernetes version the cluster must run for this benchmark. Leave it empty if there is no dependency on a particular Kubernetes version.
- `MaxKubernetesVersion`: The maximum Kubernetes version the cluster may run for this benchmark. Leave it empty if there is no dependency on a particular Kubernetes version.
An example `ClusterScanBenchmark` is below:
```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScanBenchmark
metadata:
annotations:
meta.helm.sh/release-name: clusterscan-operator
meta.helm.sh/release-namespace: cis-operator-system
creationTimestamp: "2020-08-28T18:18:07Z"
generation: 1
labels:
app.kubernetes.io/managed-by: Helm
name: cis-1.5
resourceVersion: "203878"
selfLink: /apis/cis.cattle.io/v1/clusterscanbenchmarks/cis-1.5
uid: 309e543e-9102-4091-be91-08d7af7fb7a7
spec:
clusterProvider: ""
minKubernetesVersion: 1.15.0
```
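The installed benchmark versions can be listed and inspected with `kubectl` (a sketch; the CRD names follow the `cis.cattle.io` group shown above):

```shell
# List the installed benchmark versions
kubectl get clusterscanbenchmarks.cis.cattle.io

# Inspect a benchmark's cluster provider and version constraints
kubectl get clusterscanbenchmarks.cis.cattle.io cis-1.5 -o yaml
```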
@@ -1,84 +0,0 @@
---
title: Creating a Custom Benchmark Version for Running a Cluster Scan
weight: 4
---
Each Benchmark Version defines a set of test configuration files that define the CIS tests to be run by the <a href="https://github.com/aquasecurity/kube-bench" target="_blank">kube-bench</a> tool.
The `rancher-cis-benchmark` application installs a few default Benchmark Versions, which are listed under the CIS Benchmark application menu.
But some Kubernetes cluster setups may require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different from the standard locations where the upstream CIS Benchmarks look for them.
It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application.
When a cluster scan is run, you need to select a Profile which points to a specific Benchmark Version.
Follow all the steps below to add a custom Benchmark Version and run a scan using it.
1. [Prepare the Custom Benchmark Version ConfigMap](#1-prepare-the-custom-benchmark-version-configmap)
2. [Add a Custom Benchmark Version to a Cluster](#2-add-a-custom-benchmark-version-to-a-cluster)
3. [Create a New Profile for the Custom Benchmark Version](#3-create-a-new-profile-for-the-custom-benchmark-version)
4. [Run a Scan Using the Custom Benchmark Version](#4-run-a-scan-using-the-custom-benchmark-version)
### 1. Prepare the Custom Benchmark Version ConfigMap
To create a custom benchmark version, you first need to create a ConfigMap containing the benchmark version's config files and upload it to the Kubernetes cluster where you want to run the scan.
As an example, suppose we want to add a custom Benchmark Version named `foo`.
1. Create a directory named `foo` and inside this directory, place all the config YAML files that the <a href="https://github.com/aquasecurity/kube-bench" target="_blank">kube-bench</a> tool looks for. For example, see the config YAML files for the generic CIS 1.5 Benchmark Version: https://github.com/aquasecurity/kube-bench/tree/master/cfg/cis-1.5
1. Place the complete `config.yaml` file, which includes all the components that should be tested.
1. Add the Benchmark version name to the `target_mapping` section of the `config.yaml`:
```yaml
target_mapping:
"foo":
- "master"
- "node"
- "controlplane"
- "etcd"
- "policies"
```
1. Upload this directory to your Kubernetes Cluster by creating a ConfigMap:
```shell
kubectl create configmap -n <namespace> foo --from-file=<path to directory foo>
```
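Afterwards, you can verify that each config file was stored as a key in the ConfigMap (a sketch):

```shell
# Each file in the directory should appear as a key in the data section
kubectl get configmap -n <namespace> foo -o yaml
```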
### 2. Add a Custom Benchmark Version to a Cluster
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**.
1. In the left navigation bar, click **CIS Benchmark > Benchmark Version**.
1. Click **Create**.
1. Enter the **Name** and a description for your custom benchmark version.
1. Choose the cluster provider that your benchmark version applies to.
1. Choose the ConfigMap you have uploaded from the dropdown.
1. Add the minimum and maximum Kubernetes version limits applicable, if any.
1. Click **Create**.
### 3. Create a New Profile for the Custom Benchmark Version
To run a scan using your custom benchmark version, you need to add a new Profile pointing to this benchmark version.
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**.
1. In the left navigation bar, click **CIS Benchmark > Profile**.
1. Click **Create**.
1. Provide a **Name** and description. In this example, we name it `foo-profile`.
1. Choose the Benchmark Version from the dropdown.
1. Click **Create**.
### 4. Run a Scan Using the Custom Benchmark Version
Once the Profile pointing to your custom benchmark version `foo` has been created, you can create a new Scan to run the custom test configs in the Benchmark Version.
To run a scan,
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**.
1. In the left navigation bar, click **CIS Benchmark > Scan**.
1. Click **Create**.
1. Choose the new cluster scan profile.
1. Click **Create**.
**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears.
@@ -1,50 +0,0 @@
---
title: Role-based Access Control
shortTitle: RBAC
weight: 3
---
This section describes the permissions required to use the `rancher-cis-benchmark` application.
By default, `rancher-cis-benchmark` is a cluster-admin only feature.
However, the `rancher-cis-benchmark` chart installs these two default `ClusterRoles`:
- cis-admin
- cis-view
In Rancher, only cluster owners and global administrators have `cis-admin` access by default.
Note: The `cis-edit` role added in the Rancher v2.5 setup was removed in Rancher v2.5.2 because it is essentially
the same as `cis-admin`. If you created any ClusterRoleBindings
for `cis-edit`, please update them to use the `cis-admin` ClusterRole instead.
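A sketch for locating bindings that still reference the removed role, using kubectl's JSONPath filter support:

```shell
# Print the names of ClusterRoleBindings whose roleRef is cis-edit
kubectl get clusterrolebindings \
  -o jsonpath='{range .items[?(@.roleRef.name=="cis-edit")]}{.metadata.name}{"\n"}{end}'
```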
# Cluster-Admin Access
Rancher CIS Scans is a cluster-admin only feature by default.
This means only Rancher global admins and the cluster's cluster-owner can:
- Install/Uninstall the rancher-cis-benchmark App
- See the navigation links for CIS Benchmark CRDs - ClusterScanBenchmarks, ClusterScanProfiles, ClusterScans
- List the default ClusterScanBenchmarks and ClusterScanProfiles
- Create/Edit/Delete new ClusterScanProfiles
- Create/Edit/Delete a new ClusterScan to run the CIS scan on the cluster
- View and Download the ClusterScanReport created after the ClusterScan is complete
# Summary of Default Permissions for Kubernetes Default Roles
The `rancher-cis-benchmark` chart creates two `ClusterRoles` and adds CIS Benchmark CRD access to the following default Kubernetes `ClusterRoles`:
| ClusterRole created by chart | Default K8s ClusterRole | Permissions given with Role |
| ---------------------------- | ----------------------- | --------------------------- |
| `cis-admin` | `admin` | Ability to CRUD `clusterscanbenchmarks`, `clusterscanprofiles`, `clusterscans`, and `clusterscanreports` custom resources |
| `cis-view` | `view` | Ability to list (read-only) `clusterscanbenchmarks`, `clusterscanprofiles`, `clusterscans`, and `clusterscanreports` custom resources |
By default, only the cluster-owner role has the ability to manage and use the `rancher-cis-benchmark` feature.
The other Rancher roles (cluster-member, project-owner, project-member) have no default permissions to manage or use rancher-cis-benchmark resources.
If a cluster-owner wants to delegate access to other users, they can do so by manually creating ClusterRoleBindings between those users and the CIS ClusterRoles above.
There is no automatic role aggregation supported for the `rancher-cis-benchmark` ClusterRoles.
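For example, a cluster-owner could grant a user read-only access with a binding like the following (the binding and user names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cis-view-example-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cis-view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: example-user
```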
@@ -1,54 +0,0 @@
---
title: Skipped and Not Applicable Tests
weight: 3
---
This section lists the tests that are skipped in the permissive test profile for RKE.
> All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile.
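A user-defined skip is expressed in a custom profile's `skipTests` list; such tests are counted as skipped in the report, while the tests below are counted as not applicable. A sketch (the profile name and benchmark version are illustrative):

```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScanProfile
metadata:
  name: rke-permissive-custom
spec:
  benchmarkVersion: rke-cis-1.5-permissive
  skipTests:
    - "5.1.5"
```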
# CIS Benchmark v1.5
### CIS Benchmark v1.5 Skipped Tests
| Number | Description | Reason for Skipping |
| ---------- | ------------- | --------- |
| 1.1.12 | Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) | A system service account is required for etcd data directory ownership. Refer to Rancher's hardening guide for more details on how to configure this ownership. |
| 1.2.6 | Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
| 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. |
| 1.2.34 | Ensure that encryption providers are appropriately configured (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. |
| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Automated) | System level configurations are required before provisioning the cluster in order for this argument to be set to true. |
| 4.2.10 | Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
| 5.1.5 | Ensure that default service accounts are not actively used. (Automated) | Kubernetes provides default service accounts to be used. |
| 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 5.2.3 | Minimize the admission of containers wishing to share the host IPC namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 5.2.4 | Minimize the admission of containers wishing to share the host network namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 5.2.5 | Minimize the admission of containers with allowPrivilegeEscalation (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 5.3.2 | Ensure that all Namespaces have Network Policies defined (Automated) | Enabling Network Policies can prevent certain applications from communicating with each other. |
| 5.6.4 | The default namespace should not be used (Automated) | Kubernetes provides a default namespace. |
### CIS Benchmark v1.5 Not Applicable Tests
| Number | Description | Reason for being not applicable |
| ---------- | ------------- | --------- |
| 1.1.1 | Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. |
| 1.1.2 | Ensure that the API server pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. |
| 1.1.3 | Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
| 1.1.4 | Ensure that the controller manager pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
| 1.1.5 | Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
| 1.1.6 | Ensure that the scheduler pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
| 1.1.7 | Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
| 1.1.8 | Ensure that the etcd pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
| 1.1.13 | Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. |
| 1.1.14 | Ensure that the admin.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. |
| 1.1.15 | Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
| 1.1.16 | Ensure that the scheduler.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
| 1.1.17 | Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
| 1.1.18 | Ensure that the controller-manager.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
| 1.3.6 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handle certificate rotation directly through RKE. |
| 4.1.1 | Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
| 4.1.2 | Ensure that the kubelet service file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
| 4.1.9 | Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
| 4.1.10 | Ensure that the kubelet configuration file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
| 4.2.12 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handle certificate rotation directly through RKE. |