Merge branch 'master' into enhance-helm-cli-docs

This commit is contained in:
Catherine Luse
2022-03-31 16:00:07 -07:00
committed by GitHub
42 changed files with 2962 additions and 1999 deletions
@@ -32,3 +32,5 @@ There are a few config flags that must be the same in all server nodes:
## Existing clusters
If you have an existing cluster using the default embedded SQLite database, you can convert it to etcd by simply restarting your K3s server with the `--cluster-init` flag. Once you've done that, you'll be able to add additional instances as described above.
>**Important:** K3s v1.22.2 and newer support migration from SQLite to etcd. Older versions will create a new empty datastore if you add `--cluster-init` to an existing server.
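A minimal sketch of the migration, assuming the server was installed with the official script:
```bash
# Sketch: re-run the installer with --cluster-init to migrate an
# existing server from embedded SQLite to embedded etcd.
curl -sfL https://get.k3s.io | sh -s - server --cluster-init
```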
@@ -35,7 +35,7 @@ K3s requires two or more server nodes for this HA configuration. See the [Instal
When running the `k3s server` command on these nodes, you must set the `datastore-endpoint` parameter so that K3s knows how to connect to the external datastore. The `token` parameter can also be used to set a deterministic token when adding nodes. If left empty, this token will be generated automatically for later use.
For example, a command like the following could be used to install the K3s server with a MySQL database as the external datastore and [set a token]({{<baseurl>}}/k3s/latest/en/installation/install-options/server-config/#cluster-options}}):
For example, a command like the following could be used to install the K3s server with a MySQL database as the external datastore and [set a token]({{<baseurl>}}/k3s/latest/en/installation/install-options/server-config/#cluster-options):
```bash
curl -sfL https://get.k3s.io | sh -s - server \
@@ -72,7 +72,7 @@ If the first server node was started without the `--token` CLI flag or `K3S_TOKE
cat /var/lib/rancher/k3s/server/token
```
Additional server nodes can then be added [using the token]({{<baseurl>}}/k3s/latest/en/installation/install-options/server-config/#cluster-options}}):
Additional server nodes can then be added [using the token]({{<baseurl>}}/k3s/latest/en/installation/install-options/server-config/#cluster-options):
```bash
curl -sfL https://get.k3s.io | sh -s - server \
@@ -22,7 +22,7 @@ If you wish to use WireGuard as your flannel backend it may require additional k
### Custom CNI
Run K3s with `--flannel-backend=none` and install your CNI of choice. IP Forwarding should be enabled for Canal and Calico. Please reference the steps below.
Run K3s with `--flannel-backend=none` and install your CNI of choice. Most CNI plugins come with their own network policy engine, so it is recommended to set `--disable-network-policy` as well to avoid conflicts. IP Forwarding should be enabled for Canal and Calico. Please reference the steps below.
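As a minimal sketch, a server started this way (using only the flags named above) would look like:
```bash
# Sketch: run K3s without flannel and without the built-in network
# policy controller; networking is left to the CNI installed afterwards.
k3s server --flannel-backend=none --disable-network-policy
```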
{{% tabs %}}
{{% tab "Canal" %}}
@@ -74,15 +74,24 @@ You should see that IP forwarding is set to true.
Dual-stack networking must be configured when the cluster is first created. It cannot be enabled on an existing single-stack cluster.
Dual-stack is supported on K3s v1.21 or above.
To enable dual-stack in K3s, you must provide valid dual-stack `cluster-cidr` and `service-cidr` values, and set `disable-network-policy` on all server nodes. Both servers and agents must provide valid dual-stack `node-ip` settings. Node address auto-detection and network policy enforcement are not supported on dual-stack clusters when using the default flannel CNI. In addition, only the vxlan backend is supported at the moment. This is an example of a valid configuration:
```
node-ip: 10.0.10.7,2a05:d012:c6f:4611:5c2:5602:eed2:898c
cluster-cidr: 10.42.0.0/16,2001:cafe:42:0::/56
service-cidr: 10.43.0.0/16,2001:cafe:42:1::/112
disable-network-policy: true
k3s server --node-ip 10.0.10.7,2a05:d012:c6f:4611:5c2:5602:eed2:898c --cluster-cidr 10.42.0.0/16,2001:cafe:42:0::/56 --service-cidr 10.43.0.0/16,2001:cafe:42:1::/112 --disable-network-policy
```
Note that you can choose any `cluster-cidr` and `service-cidr` values; however, the `node-ip` values must correspond to the IP addresses of your main interface. Remember to allow IPv6 traffic if you are deploying in a public cloud.
If you are using a custom CNI plugin, i.e. a CNI plugin other than flannel, the previous configuration might not be enough to enable dual-stack in the CNI plugin. Please check its documentation for how to enable dual-stack and verify whether network policies can be enabled.
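As a quick sanity check on a dual-stack cluster, you can list a pod's addresses; this is a sketch, and `mypod` is a hypothetical pod name:
```bash
# Sketch: a dual-stack pod should report both an IPv4 and an IPv6 address.
kubectl get pod mypod -o jsonpath='{.status.podIPs}'
```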
### IPv6-only installation
IPv6-only setup is supported on K3s v1.22 or above. As in dual-stack operation, IPv6 node addresses cannot be auto-detected; all nodes must have an explicitly configured IPv6 `node-ip`. This is an example of a valid configuration:
```
k3s server --node-ip 2a05:d012:c6f:4611:5c2:5602:eed2:898c --cluster-cidr 2001:cafe:42:0::/56 --service-cidr 2001:cafe:42:1::/112 --disable-network-policy
```
Note that you can specify only one IPv6 `cluster-cidr` value.
@@ -5,7 +5,7 @@ weight: 90
This section describes the methodology and means of securing a K3s cluster. It is broken into two sections. These guides assume K3s is running with embedded etcd.
The documents below apply to both CIS 1.5 & 1.6.
The documents below apply to CIS Kubernetes Benchmark v1.6.
* [Hardening Guide](./hardening_guide/)
* [CIS Benchmark Self-Assessment Guide](./self_assessment/)
@@ -3,12 +3,12 @@ title: "CIS Hardening Guide"
weight: 80
---
This document provides prescriptive guidance for hardening a production installation of K3s. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Information Security (CIS).
This document provides prescriptive guidance for hardening a production installation of K3s. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).
K3s has a number of security mitigations applied and turned on by default and will pass a number of the Kubernetes CIS controls without modification. There are some notable exceptions to this that require manual intervention to fully comply with the CIS Benchmark:
1. K3s will not modify the host operating system. Any host-level modifications will need to be done manually.
2. Certain CIS policy controls for PodSecurityPolicies and NetworkPolicies will restrict the functionality of this cluster. You must opt into having K3s configure these by adding the appropriate options (enabling of admission plugins) to your command-line flags or configuration file as well as manually applying appropriate policies. Further detail in the sections below.
2. Certain CIS policy controls for `PodSecurityPolicies` and `NetworkPolicies` will restrict the functionality of the cluster. You must opt into having K3s configure these by adding the appropriate options (enabling of admission plugins) to your command-line flags or configuration file as well as manually applying appropriate policies. Further details are presented in the sections below.
The first section (1.1) of the CIS Benchmark concerns itself primarily with pod manifest permissions and ownership. K3s doesn't utilize these for the core components since everything is packaged into a single binary.
@@ -31,23 +31,24 @@ vm.panic_on_oom=0
vm.overcommit_memory=1
kernel.panic=10
kernel.panic_on_oops=1
kernel.keys.root_maxbytes=25000000
```
## Kubernetes Runtime Requirements
The runtime requirements to comply with the CIS Benchmark are centered around pod security (PSPs) and network policies. These are outlined in this section. K3s doesn't apply any default PSPs or network policies however K3s ships with a controller that is meant to apply a given set of network policies. By default, K3s runs with the "NodeRestriction" admission controller. To enable PSPs, add the following to the K3s start command: `--kube-apiserver-arg="enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount"`. This will have the effect of maintaining the "NodeRestriction" plugin as well as enabling the "PodSecurityPolicy".
The runtime requirements to comply with the CIS Benchmark are centered around pod security (PSPs), network policies, and API Server audit logs. These are outlined in this section. K3s doesn't apply any default PSPs or network policies. However, K3s ships with a controller that is meant to apply a given set of network policies. By default, K3s runs with the `NodeRestriction` admission controller. To enable PSPs, add the following to the K3s start command: `--kube-apiserver-arg="enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount"`. This will have the effect of maintaining the `NodeRestriction` plugin as well as enabling the `PodSecurityPolicy` plugin. The same applies to API Server audit logs: K3s doesn't enable them by default, so the audit log configuration and audit policy must be created manually.
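The same flags can also be expressed in the K3s configuration file. A minimal sketch, assuming the default `/etc/rancher/k3s/config.yaml` location:
```yaml
# Sketch: config-file equivalent of the --kube-apiserver-arg flag above.
kube-apiserver-arg:
  - "enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount"
```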
### PodSecurityPolicies
### Pod Security Policies
When PSPs are enabled, a policy can be applied to satisfy the necessary controls described in section 5.2 of the CIS Benchmark.
Here's an example of a compliant PSP.
Here is an example of a compliant PSP.
```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: cis1.5-compliant-psp
name: restricted-psp
spec:
privileged: false # CIS - 5.2.1
allowPrivilegeEscalation: false # CIS - 5.2.5
@@ -59,7 +60,9 @@ spec:
- 'projected'
- 'secret'
- 'downwardAPI'
- 'csi'
- 'persistentVolumeClaim'
- 'ephemeral'
hostNetwork: false # CIS - 5.2.4
hostIPC: false # CIS - 5.2.3
hostPID: false # CIS - 5.2.2
@@ -80,7 +83,7 @@ spec:
readOnlyRootFilesystem: false
```
Before the above PSP to be effective, we need to create a couple ClusterRoles and ClusterRole. We also need to include a "system unrestricted policy" which is needed for system-level pods that require additional privileges.
For the above PSP to be effective, we need to create a ClusterRole and a ClusterRoleBinding. We also need to include a "system unrestricted policy" which is needed for system-level pods that require additional privileges.
These can be combined with the PSP yaml above and NetworkPolicy yaml below into a single file and placed in the `/var/lib/rancher/k3s/server/manifests` directory. Below is an example of a `policy.yaml` file.
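For example, a sketch of deploying it (the file name is arbitrary):
```bash
# Sketch: K3s automatically applies manifests placed in this directory.
sudo cp policy.yaml /var/lib/rancher/k3s/server/manifests/policy.yaml
```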
@@ -88,7 +91,7 @@ These can be combined with the PSP yaml above and NetworkPolicy yaml below into
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: cis1.5-compliant-psp
name: restricted-psp
spec:
privileged: false
allowPrivilegeEscalation: false
@@ -100,7 +103,9 @@ spec:
- 'projected'
- 'secret'
- 'downwardAPI'
- 'csi'
- 'persistentVolumeClaim'
- 'ephemeral'
hostNetwork: false
hostIPC: false
hostPID: false
@@ -123,7 +128,7 @@ spec:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: psp:restricted
name: psp:restricted-psp
labels:
addonmanager.kubernetes.io/mode: EnsureExists
rules:
@@ -131,62 +136,23 @@ rules:
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- cis1.5-compliant-psp
- restricted-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: default:restricted
name: default:restricted-psp
labels:
addonmanager.kubernetes.io/mode: EnsureExists
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: psp:restricted
name: psp:restricted-psp
subjects:
- kind: Group
name: system:authenticated
apiGroup: rbac.authorization.k8s.io
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: intra-namespace
namespace: kube-system
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
name: kube-system
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: intra-namespace
namespace: default
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
name: default
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: intra-namespace
namespace: kube-public
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
name: kube-public
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
@@ -253,6 +219,45 @@ subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:serviceaccounts
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: intra-namespace
namespace: kube-system
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
name: kube-system
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: intra-namespace
namespace: default
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
name: default
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: intra-namespace
namespace: kube-public
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
name: kube-public
```
> **Note:** The critical Kubernetes add-ons such as CNI, DNS, and Ingress are run as pods in the `kube-system` namespace. Therefore, this namespace will have a policy that is less restrictive so that these components can run properly.
@@ -263,7 +268,7 @@ subjects:
CIS requires that all namespaces have a network policy applied that reasonably limits traffic into namespaces and pods.
Here's an example of a compliant network policy.
Here is an example of a compliant network policy.
```yaml
kind: NetworkPolicy
@@ -302,7 +307,7 @@ spec:
- Ingress
```
The metrics-server and Traefik ingress controller will be blocked by default if network policies are not created to allow access. Traefik v1 as packaged in K3s version 1.20 and below uses different labels than Traefik v2; ensure that you only use the sample yaml below that is associated with the version of Traefik present on your cluster.
The metrics-server and Traefik ingress controller will be blocked by default if network policies are not created to allow access. Traefik v1 as packaged in K3s version 1.20 and below uses different labels than Traefik v2. Ensure that you only use the sample yaml below that is associated with the version of Traefik present on your cluster.
```yaml
apiVersion: networking.k8s.io/v1
@@ -366,24 +371,67 @@ spec:
> **Note:** Operators must manage network policies as normal for additional namespaces that are created.
### API Server audit configuration
CIS requirements 1.2.22 to 1.2.25 are related to configuring audit logs for the API Server. K3s doesn't create the log directory or audit policy by default, as auditing requirements are specific to each user's policies and environment.
Ideally, the log directory should be created before starting K3s. Restrictive access permissions are recommended to avoid leaking potentially sensitive information.
```bash
sudo mkdir -p -m 700 /var/lib/rancher/k3s/server/logs
```
A starter audit policy to log request metadata is provided below. The policy should be written to a file named `audit.yaml` in the `/var/lib/rancher/k3s/server` directory. Detailed information about policy configuration for the API server can be found in the Kubernetes [documentation](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/).
```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
```
Both configurations must be passed as arguments to the API Server as:
```bash
--kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log'
--kube-apiserver-arg='audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml'
```
If the configurations are created after K3s is installed, they must be added to K3s' systemd service in `/etc/systemd/system/k3s.service`.
```bash
ExecStart=/usr/local/bin/k3s \
server \
'--kube-apiserver-arg=audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log' \
'--kube-apiserver-arg=audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml' \
```
K3s must be restarted to load the new configuration.
```bash
sudo systemctl daemon-reload
sudo systemctl restart k3s.service
```
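To confirm that events are being recorded, a sketch assuming the paths configured above:
```bash
# Sketch: each API request should now append a Metadata-level event.
sudo tail -f /var/lib/rancher/k3s/server/logs/audit.log
```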
Additional information about CIS requirements 1.2.22 to 1.2.25 is presented below.
## Known Issues
The following are controls that K3s currently does not pass by default. Each gap will be explained, along with a note clarifying whether it can be passed through manual operator intervention, or if it will be addressed in a future release of K3s.
### Control 1.2.15
Ensure that the admission control plugin `NamespaceLifecycle` is set.
<details>
<summary>Rationale</summary>
Setting admission control policy to NamespaceLifecycle ensures that objects cannot be created in non-existent namespaces, and that namespaces undergoing termination are not used for creating the new objects. This is recommended to enforce the integrity of the namespace termination process and also for the availability of the newer objects.
Setting admission control policy to `NamespaceLifecycle` ensures that objects cannot be created in non-existent namespaces, and that namespaces undergoing termination are not used for creating the new objects. This is recommended to enforce the integrity of the namespace termination process and also for the availability of the newer objects.
This can be remediated by adding `NamespaceLifecycle` to the `enable-admission-plugins=` value passed via the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below.
</details>
### Control 1.2.16 (mentioned above)
### Control 1.2.16
Ensure that the admission control plugin `PodSecurityPolicy` is set.
<details>
<summary>Rationale</summary>
A Pod Security Policy is a cluster-level resource that controls the actions that a pod can perform and what it has the ability to access. The PodSecurityPolicy objects define a set of conditions that a pod must run with in order to be accepted into the system. Pod Security Policies are comprised of settings and strategies that control the security features a pod has access to and hence this must be used to control pod access permissions.
A Pod Security Policy is a cluster-level resource that controls the actions that a pod can perform and what it has the ability to access. The `PodSecurityPolicy` objects define a set of conditions that a pod must run with in order to be accepted into the system. Pod Security Policies are comprised of settings and strategies that control the security features a pod has access to and hence this must be used to control pod access permissions.
This can be remediated by adding `PodSecurityPolicy` to the `enable-admission-plugins=` value passed via the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below.
</details>
@@ -446,16 +494,18 @@ This can be remediated by passing this argument as a value to the `--kube-apiser
Ensure that the `--encryption-provider-config` argument is set as appropriate.
<details>
<summary>Rationale</summary>
Where `etcd` encryption is used, it is important to ensure that the appropriate set of encryption providers is used. Currently, the aescbc, kms and secretbox are likely to be appropriate options.
`etcd` is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted at rest to avoid any disclosures.
Detailed steps on how to configure secrets encryption in K3s are available in [Secrets Encryption](../secrets_encryption/).
</details>
### Control 1.2.34
Ensure that encryption providers are appropriately configured.
<details>
<summary>Rationale</summary>
`etcd` is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted at rest to avoid any disclosures.
Where `etcd` encryption is used, it is important to ensure that the appropriate set of encryption providers is used. Currently, the `aescbc`, `kms` and `secretbox` are likely to be appropriate options.
This can be remediated by passing a valid configuration to `k3s` as outlined above.
This can be remediated by passing a valid configuration to `k3s` as outlined above. Detailed steps on how to configure secrets encryption in K3s are available in [Secrets Encryption](../secrets_encryption/).
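As a sketch of such a configuration (the key below is a placeholder, not a real secret), an `EncryptionConfiguration` using `aescbc` might look like:
```yaml
# Sketch: encrypt secrets at rest with aescbc, falling back to identity
# for reads of not-yet-encrypted data. <BASE64-32-BYTE-KEY> is a placeholder.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64-32-BYTE-KEY>
      - identity: {}
```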
</details>
### Control 1.3.1
@@ -468,7 +518,7 @@ This can be remediated by passing this argument as a value to the `--kube-apiser
</details>
### Control 3.2.1
Ensure that a minimal audit policy is created (Scored)
Ensure that a minimal audit policy is created.
<details>
<summary>Rationale</summary>
Logging is an important detective control for all systems, to detect potential unauthorized access.
@@ -476,7 +526,6 @@ Logging is an important detective control for all systems, to detect potential u
This can be remediated by passing controls 1.2.22 - 1.2.25 and verifying their efficacy.
</details>
### Control 4.2.7
Ensure that the `--make-iptables-util-chains` argument is set to true.
<details>
@@ -487,24 +536,23 @@ This can be remediated by passing this argument as a value to the `--kube-apiser
</details>
### Control 5.1.5
Ensure that default service accounts are not actively used. (Scored)
Ensure that default service accounts are not actively used
<details>
<summary>Rationale</summary>
Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod.
Kubernetes provides a `default` service account which is used by cluster workloads where no specific service account is assigned to the pod.
Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account.
The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.
</details>
The remediation for this is to update the `automountServiceAccountToken` field to `false` for the `default` service account in each namespace.
This can be remediated by updating the `automountServiceAccountToken` field to `false` for the `default` service account in each namespace.
For `default` service accounts in the built-in namespaces (`kube-system`, `kube-public`, `kube-node-lease`, and `default`), K3s does not automatically do this. You can manually update this field on these service accounts to pass the control.
</details>
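A minimal sketch of that remediation for the `default` namespace (repeat for each namespace):
```bash
# Sketch: stop the default service account from automounting its token.
kubectl patch serviceaccount default -n default \
  -p '{"automountServiceAccountToken": false}'
```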
## Control Plane Execution and Arguments
Listed below are the K3s control plane components and the arguments they're given at start, by default. Commented to their right is the CIS 1.5 control that they satisfy.
Listed below are the K3s control plane components and the arguments they are given at start, by default. Commented to their right is the CIS 1.6 control that they satisfy.
```bash
kube-apiserver
@@ -604,13 +652,14 @@ kubelet
--tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key # 4.2.10
```
The command below is an example of how the outlined remediations can be applied.
The command below is an example of how the outlined remediations can be applied to harden K3s.
```bash
k3s server \
--protect-kernel-defaults=true \
--secrets-encryption=true \
--kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log' \
--kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log' \
--kube-apiserver-arg='audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml' \
--kube-apiserver-arg='audit-log-maxage=30' \
--kube-apiserver-arg='audit-log-maxbackup=10' \
--kube-apiserver-arg='audit-log-maxsize=100' \
@@ -625,4 +674,4 @@ k3s server \
## Conclusion
If you have followed this guide, your K3s cluster will be configured to comply with the CIS Kubernetes Benchmark. You can review the [CIS Benchmark Self-Assessment Guide](../self_assessment/) to understand the expectations of each of the benchmarks and how you can do the same on your cluster.
If you have followed this guide, your K3s cluster will be configured to comply with the CIS Kubernetes Benchmark. You can review the [CIS Benchmark Self-Assessment Guide](../self_assessment/) to understand the expectations of each of the benchmark's checks and how you can do the same on your cluster.
@@ -27,25 +27,6 @@ This section covers the following topics:
- [Configuring global permissions for groups](#configuring-global-permissions-for-groups)
- [Refreshing group memberships](#refreshing-group-memberships)
### List of `restricted-admin` Permissions
The `restricted-admin` permissions are as follows:
- Has full admin access to all downstream clusters managed by Rancher.
- Has very limited access to the local Kubernetes cluster. Can access Rancher custom resource definitions, but has no access to any Kubernetes native types.
- Can add other users and assign them to clusters outside of the local cluster.
- Can create other restricted admins.
- Cannot grant any permissions in the local cluster they don't currently have. (This is how Kubernetes normally operates)
### Changing Global Administrators to Restricted Admins
If Rancher already has a global administrator, they should change all global administrators over to the new `restricted-admin` role.
This can be done through **Security > Users** by moving any Administrator role over to Restricted Administrator.
Signed-in users can change themselves over to the `restricted-admin` role if they wish, but they should only do that as the last step; otherwise, they won't have the permissions to do so.
# Global Permission Assignment
Global permissions for local users are assigned differently than users who log in to Rancher using external authentication.
@@ -3,7 +3,7 @@ title: Opening Ports with firewalld
weight: 1
---
> We recommend disabling firewalld. For Kubernetes 1.19, firewalld must be turned off.
> We recommend disabling firewalld. For Kubernetes 1.19.x and higher, firewalld must be turned off.
Some distributions of Linux [derived from RHEL,](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Rebuilds) including Oracle Linux, may have default firewall rules that block communication with Helm.
@@ -25,36 +25,87 @@ If your organization uses Keycloak Identity Provider (IdP) for user authenticati
><sup>1</sup>: Optionally, you can enable either one or both of these settings.
><sup>2</sup>: Rancher SAML metadata won't be generated until a SAML provider is configured and saved.
{{< img "/img/rancher/keycloak/keycloak-saml-client-configuration.png" "">}}
- In the new SAML client, create Mappers to expose the users' fields
- Add all "Builtin Protocol Mappers"
{{< img "/img/rancher/keycloak/keycloak-saml-client-builtin-mappers.png" "">}}
- Create a new "Group list" mapper to map the member attribute to a user's groups
{{< img "/img/rancher/keycloak/keycloak-saml-client-group-mapper.png" "">}}
- Export a `metadata.xml` file from your Keycloak client:
From the `Installation` tab, choose the `SAML Metadata IDPSSODescriptor` format option and download your file.
>**Note**
> Keycloak versions 6.0.0 and up no longer provide the IDP metadata under the `Installation` tab.
> You can still get the XML from the following url:
>
> `https://{KEYCLOAK-URL}/auth/realms/{REALM-NAME}/protocol/saml/descriptor`
>
> The XML obtained from this URL contains `EntitiesDescriptor` as the root element. Rancher expects the root element to be `EntityDescriptor` rather than `EntitiesDescriptor`. Before passing this XML to Rancher, follow these steps to adjust it:
>
> * Copy to `EntityDescriptor` any attributes from `EntitiesDescriptor` that are not already present.
> * Remove the `<EntitiesDescriptor>` tag from the beginning.
> * Remove the `</EntitiesDescriptor>` tag from the end of the XML.
>
> You are left with something similar to the example below:
>
> ```
> <EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" entityID="https://{KEYCLOAK-URL}/auth/realms/{REALM-NAME}">
> ....
> </EntityDescriptor>
> ```
{{< img "/img/rancher/keycloak/keycloak-saml-client-group-mapper.png" "">}}
## Getting the IDP Metadata
{{% tabs %}}
{{% tab "Keycloak 5 and earlier" %}}
To get the IDP metadata, export a `metadata.xml` file from your Keycloak client.
From the **Installation** tab, choose the **SAML Metadata IDPSSODescriptor** format option and download your file.
{{% /tab %}}
{{% tab "Keycloak 6-13" %}}
1. From the **Configure** section, click the **Realm Settings** tab.
1. Click the **General** tab.
1. From the **Endpoints** field, click **SAML 2.0 Identity Provider Metadata**.
Verify the IDP metadata contains the following attributes:
```
xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
```
Some browsers, such as Firefox, may render/process the document such that the contents appear to have been modified, and some attributes appear to be missing. In this situation, use the raw response data that can be found using your browser.
The following is an example process for Firefox, but will vary slightly for other browsers:
1. Press **F12** to access the developer console.
1. Click the **Network** tab.
1. From the table, click the row containing `descriptor`.
1. From the details pane, click the **Response** tab.
1. Copy the raw response data.
The XML obtained contains `EntitiesDescriptor` as the root element. Rancher expects the root element to be `EntityDescriptor` rather than `EntitiesDescriptor`. Before passing this XML to Rancher, follow these steps to adjust it:
1. Copy to `EntityDescriptor` any attributes from `EntitiesDescriptor` that are not already present.
1. Remove the `<EntitiesDescriptor>` tag from the beginning.
1. Remove the `</EntitiesDescriptor>` tag from the end of the XML.
You are left with something similar to the example below:
```
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" entityID="https://{KEYCLOAK-URL}/auth/realms/{REALM-NAME}">
....
</EntityDescriptor>
```
{{% /tab %}}
{{% tab "Keycloak 14+" %}}
1. From the **Configure** section, click the **Realm Settings** tab.
1. Click the **General** tab.
1. From the **Endpoints** field, click **SAML 2.0 Identity Provider Metadata**.
Verify the IDP metadata contains the following attributes:
```
xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
```
Some browsers, such as Firefox, may render/process the document such that the contents appear to have been modified, and some attributes appear to be missing. In this situation, use the raw response data that can be found using your browser.
The following is an example process for Firefox, but will vary slightly for other browsers:
1. Press **F12** to access the developer console.
1. Click the **Network** tab.
1. From the table, click the row containing `descriptor`.
1. From the details pane, click the **Response** tab.
1. Copy the raw response data.
{{% /tab %}}
{{% /tabs %}}
## Configuring Keycloak in Rancher
@@ -19,3 +19,22 @@ Certificates can be rotated for the following services:
- kube-scheduler
- kube-controller-manager
### Certificate Rotation
Rancher-launched Kubernetes clusters have the ability to rotate the auto-generated certificates through the UI.
1. In the **Global** view, navigate to the cluster for which you want to rotate certificates.
2. Select **⋮ > Rotate Certificates**.
3. Select the certificates that you want to rotate.
* Rotate all Service certificates (keep the same CA)
* Rotate an individual service and choose one of the services from the drop-down menu
4. Click **Save**.
**Results:** The selected certificates will be rotated and the related services will be restarted to start using the new certificate.
> **Note:** Even though the RKE CLI can use custom certificates for the Kubernetes cluster components, Rancher currently doesn't provide the ability to upload these in Rancher-launched Kubernetes clusters.
@@ -13,7 +13,10 @@ This page covers how to install the Cloud Provider Interface (CPI) and Cloud Sto
# Prerequisites
The vSphere version must be 7.0u1 or higher.
The following vSphere versions are supported:
* 6.7u3
* 7.0u1 or higher.
The Kubernetes version must be 1.19 or higher.
@@ -27,7 +27,7 @@ The capability to access a downstream cluster without Rancher depends on the typ
- **Registered clusters:** The cluster will be unaffected and you can access the cluster using the same methods that you did before the cluster was registered into Rancher.
- **Hosted Kubernetes clusters:** If you created the cluster in a cloud-hosted Kubernetes provider such as EKS, GKE, or AKS, you can continue to manage the cluster using your provider's cloud credentials.
- **RKE clusters:** To access an [RKE cluster,]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/) the cluster must have the [authorized cluster endpoint]({{<baseurl>}}/rancher/v2.5/en/overview/architecture/#4-authorized-cluster-endpoint) enabled, and you must have already downloaded the cluster's kubeconfig file from the Rancher UI. (The authorized cluster endpoint is enabled by default for RKE clusters.) With this endpoint, you can access your cluster with kubectl directly instead of communicating through the Rancher server's [authentication proxy.]({{<baseurl>}}/rancher/v2.5/en/overview/architecture/#1-the-authentication-proxy) For instructions on how to configure kubectl to use the authorized cluster endpoint, refer to the section about directly accessing clusters with [kubectl and the kubeconfig file.]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) These clusters will use a snapshot of the authentication as it was configured when Rancher was removed.
- **RKE clusters:** Please note that you will no longer be able to manage the individual Kubernetes components or perform any upgrades on them after the deletion of the Rancher server. However, you can still access the cluster to manage your workloads. To access an [RKE cluster,]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/) the cluster must have the [authorized cluster endpoint]({{<baseurl>}}/rancher/v2.5/en/overview/architecture/#4-authorized-cluster-endpoint) enabled, and you must have already downloaded the cluster's kubeconfig file from the Rancher UI. (The authorized cluster endpoint is enabled by default for RKE clusters.) With this endpoint, you can access your cluster with kubectl directly instead of communicating through the Rancher server's [authentication proxy.]({{<baseurl>}}/rancher/v2.5/en/overview/architecture/#1-the-authentication-proxy) For instructions on how to configure kubectl to use the authorized cluster endpoint, refer to the section about directly accessing clusters with [kubectl and the kubeconfig file.]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) These clusters will use a snapshot of the authentication as it was configured when Rancher was removed.
### What if I don't want Rancher anymore?
@@ -6,7 +6,7 @@ aliases:
- /rancher/v2.5/en/installation/k8s-install/
- /rancher/v2.5/en/installation/k8s-install/helm-rancher
- /rancher/v2.5/en/installation/k8s-install/kubernetes-rke
- /rancher/v2.5/en/installation/ha-server-install
- /rancher/v2.5/en/installation/ha-server-install
- /rancher/v2.5/en/installation/install-rancher-on-k8s/install
- /rancher/v2.x/en/installation/install-rancher-on-k8s/
---
@@ -24,7 +24,7 @@ In this section, you'll learn how to deploy Rancher on a Kubernetes cluster usin
### Kubernetes Cluster
Set up the Rancher server's local Kubernetes cluster.
Set up the Rancher server's local Kubernetes cluster.
Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS.
@@ -113,7 +113,7 @@ There are three recommended options for the source of the certificate used for T
### 4. Install cert-manager
> You should skip this step if you are bringing your own certificate files (option `ingress.tls.source=secret`), or if you use [TLS termination on an external load balancer]({{<baseurl>}}/rancher/v2.5/en/installation/install-rancher-on-k8s/chart-options/#external-tls-termination).
> You should skip this step if you are bringing your own certificate files (option `ingress.tls.source=secret`), or if you use [TLS termination on an external load balancer]({{<baseurl>}}/rancher/v2.5/en/installation/install-rancher-on-k8s/chart-options/#external-tls-termination).
This step is only required to use certificates issued by Rancher's generated CA (`ingress.tls.source=rancher`) or to request Let's Encrypt issued certificates (`ingress.tls.source=letsEncrypt`).
@@ -157,6 +157,8 @@ cert-manager-webhook-787858fcdb-nlzsq 1/1 Running 0 2m
The exact command to install Rancher differs depending on the certificate configuration.
However, irrespective of the certificate configuration, the name of the Rancher installation in the `cattle-system` namespace should always be `rancher`.
{{% tabs %}}
{{% tab "Rancher-generated Certificates" %}}
@@ -168,7 +170,7 @@ Because `rancher` is the default option for `ingress.tls.source`, we are not spe
- Set `hostname` to the DNS record that resolves to your load balancer.
- Set `replicas` to the number of replicas to use for the Rancher Deployment. This defaults to 3; if you have less than 3 nodes in your cluster you should reduce it accordingly.
- To install a specific Rancher version, use the `--version` flag, example: `--version 2.3.6`.
- If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
- If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
```
helm install rancher rancher-<CHART_REPO>/rancher \
@@ -200,7 +202,7 @@ In the following command,
- Set `letsEncrypt.email` to the email address used for communication about your certificate (for example, expiry notices).
- Set `letsEncrypt.ingress.class` to whatever your ingress controller is, e.g., `traefik`, `nginx`, `haproxy`, etc.
- To install a specific Rancher version, use the `--version` flag, example: `--version 2.3.6`.
- If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
- If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
```
helm install rancher rancher-<CHART_REPO>/rancher \
@@ -234,7 +236,7 @@ Although an entry in the `Subject Alternative Names` is technically required, ha
- Set `replicas` to the number of replicas to use for the Rancher Deployment. This defaults to 3; if you have less than 3 nodes in your cluster you should reduce it accordingly.
- Set `ingress.tls.source` to `secret`.
- To install a specific Rancher version, use the `--version` flag, example: `--version 2.3.6`.
- If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
- If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
```
helm install rancher rancher-<CHART_REPO>/rancher \
@@ -5,7 +5,7 @@ aliases:
- /rancher/v2.x/en/installation/resources/advanced/firewall/
---
> We recommend disabling firewalld. For Kubernetes 1.19, firewalld must be turned off.
> We recommend disabling firewalld. For Kubernetes 1.19.x and higher, firewalld must be turned off.
Some distributions of Linux [derived from RHEL,](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Rebuilds) including Oracle Linux, may have default firewall rules that block communication with Helm.
@@ -13,6 +13,7 @@ A summary of the steps is as follows:
2. Create or update the `tls-ca` Kubernetes secret resource with the root CA certificate (only required when using a private CA).
3. Update the Rancher installation using the Helm CLI.
4. Reconfigure the Rancher agents to trust the new CA certificate.
5. Select Force Update of Fleet clusters to connect fleet-agent to Rancher.
The details of these instructions are below.
@@ -145,3 +146,12 @@ First, generate the agent definitions as described here: https://gist.github.com
Then, connect to a controlplane node of the downstream cluster via SSH, create a Kubeconfig and apply the definitions:
https://gist.github.com/superseb/b14ed3b5535f621ad3d2aa6a4cd6443b
# 5. Select Force Update of Fleet clusters to connect fleet-agent to Rancher
Select 'Force Update' for the clusters within the [Continuous Delivery]({{<baseurl>}}/rancher/v2.5/en/deploy-across-clusters/fleet/#accessing-fleet-in-the-rancher-ui) view under Cluster Explorer in the Rancher UI to allow the fleet-agent in downstream clusters to successfully connect to Rancher.
### Why is this step required?
Fleet agents in Rancher-managed clusters store a kubeconfig, used to connect to the Rancher-proxied kube-api, in the `fleet-agent` secret of the `fleet-system` namespace. The kubeconfig contains a `certificate-authority-data` block containing the Rancher CA. When the Rancher CA is changed, this block needs to be updated for the fleet-agent to successfully connect to Rancher.
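As a sketch (assuming the secret stores the kubeconfig under a `kubeconfig` data key), the stored CA can be inspected with:
```bash
# Sketch: decode the kubeconfig held in the fleet-agent secret and show
# the embedded certificate-authority-data entry.
kubectl get secret fleet-agent -n fleet-system \
  -o jsonpath='{.data.kubeconfig}' | base64 -d | grep certificate-authority-data
```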
@@ -1281,7 +1281,7 @@ on the master node and ensure the correct value for the `--bind-address` paramet
**Expected result**:
```
'--bind-address' is present OR '--bind-address' is not present
'--bind-address' argument is set to 127.0.0.1
```
### 1.4 Scheduler
@@ -1327,7 +1327,7 @@ on the master node and ensure the correct value for the `--bind-address` paramet
**Expected result**:
```
'--bind-address' is present OR '--bind-address' is not present
'--bind-address' argument is set to 127.0.0.1
```
## 2 Etcd Node Configuration
@@ -667,6 +667,7 @@ rancher_kubernetes_engine_config:
service_node_port_range: 30000-32767
kube_controller:
extra_args:
bind-address: 127.0.0.1
address: 127.0.0.1
feature-gates: RotateKubeletServerCertificate=true
profiling: 'false'
@@ -685,6 +686,7 @@ rancher_kubernetes_engine_config:
generate_serving_certificate: true
scheduler:
extra_args:
bind-address: 127.0.0.1
address: 127.0.0.1
profiling: 'false'
ssh_agent_auth: false
@@ -1803,13 +1803,13 @@ on the master node and ensure the correct value for the --bind-address parameter
**Expected Result**:
```console
'--bind-address' is not present OR '--bind-address' is not present
'--bind-address' argument is set to 127.0.0.1
```
**Returned Value**:
```console
root 4788 4773 4 16:16 ? 00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=0.0.0.0 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
root 4788 4773 4 16:16 ? 00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --bind-address=127.0.0.1 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=127.0.0.1 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
```
## 1.4 Scheduler
@@ -1859,13 +1859,13 @@ on the master node and ensure the correct value for the --bind-address parameter
**Expected Result**:
```console
'--bind-address' is not present OR '--bind-address' is not present
'--bind-address' argument is set to 127.0.0.1
```
**Returned Value**:
```console
root 4947 4930 1 16:16 ? 00:00:02 kube-scheduler --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --address=0.0.0.0
root 4947 4930 1 16:16 ? 00:00:02 kube-scheduler --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --address=127.0.0.1 --bind-address=127.0.0.1
```
## 2 Etcd Node Configuration Files
@@ -511,6 +511,8 @@ rancher_kubernetes_engine_config:
kube_controller:
extra_args:
feature-gates: RotateKubeletServerCertificate=true
bind-address: 127.0.0.1
address: 127.0.0.1
kubelet:
extra_args:
feature-gates: RotateKubeletServerCertificate=true
@@ -519,6 +521,10 @@ rancher_kubernetes_engine_config:
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
fail_swap_on: false
generate_serving_certificate: true
scheduler:
extra_args:
bind-address: 127.0.0.1
address: 127.0.0.1
ssh_agent_auth: false
upgrade_strategy:
max_unavailable_controlplane: '1'
@@ -23,36 +23,87 @@ If your organization uses Keycloak Identity Provider (IdP) for user authenticati
><sup>1</sup>: Optionally, you can enable either one or both of these settings.
><sup>2</sup>: Rancher SAML metadata won't be generated until a SAML provider is configured and saved.
{{< img "/img/rancher/keycloak/keycloak-saml-client-configuration.png" "">}}
- In the new SAML client, create Mappers to expose the users' fields
- Add all "Builtin Protocol Mappers"
{{< img "/img/rancher/keycloak/keycloak-saml-client-builtin-mappers.png" "">}}
- Create a new "Group list" mapper to map the member attribute to a user's groups
{{< img "/img/rancher/keycloak/keycloak-saml-client-group-mapper.png" "">}}
- Export a `metadata.xml` file from your Keycloak client:
From the `Installation` tab, choose the `SAML Metadata IDPSSODescriptor` format option and download your file.
>**Note**
> Keycloak versions 6.0.0 and up no longer provide the IDP metadata under the `Installation` tab.
> You can still get the XML from the following url:
>
> `https://{KEYCLOAK-URL}/auth/realms/{REALM-NAME}/protocol/saml/descriptor`
>
> The XML obtained from this URL contains `EntitiesDescriptor` as the root element. Rancher expects the root element to be `EntityDescriptor` rather than `EntitiesDescriptor`. Before passing this XML to Rancher, follow these steps to adjust it:
>
> * Copy to `EntityDescriptor` any attributes from `EntitiesDescriptor` that are not already present.
> * Remove the `<EntitiesDescriptor>` tag from the beginning.
> * Remove the `</EntitiesDescriptor>` tag from the end of the XML.
>
> You are left with something similar to the example below:
>
> ```
> <EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" entityID="https://{KEYCLOAK-URL}/auth/realms/{REALM-NAME}">
> ....
> </EntityDescriptor>
> ```
{{< img "/img/rancher/keycloak/keycloak-saml-client-group-mapper.png" "">}}
## Getting the IDP Metadata
{{% tabs %}}
{{% tab "Keycloak 5 and earlier" %}}
To get the IDP metadata, export a `metadata.xml` file from your Keycloak client.
From the **Installation** tab, choose the **SAML Metadata IDPSSODescriptor** format option and download your file.
{{% /tab %}}
{{% tab "Keycloak 6-13" %}}
1. From the **Configure** section, click the **Realm Settings** tab.
1. Click the **General** tab.
1. From the **Endpoints** field, click **SAML 2.0 Identity Provider Metadata**.
Verify the IDP metadata contains the following attributes:
```
xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
```
Some browsers, such as Firefox, may render/process the document such that the contents appear to have been modified, and some attributes appear to be missing. In this situation, use the raw response data that can be found using your browser.
The following is an example process for Firefox, but will vary slightly for other browsers:
1. Press **F12** to access the developer console.
1. Click the **Network** tab.
1. From the table, click the row containing `descriptor`.
1. From the details pane, click the **Response** tab.
1. Copy the raw response data.
The XML obtained contains `EntitiesDescriptor` as the root element. Rancher expects the root element to be `EntityDescriptor` rather than `EntitiesDescriptor`. Before passing this XML to Rancher, follow these steps to adjust it:
1. Copy to `EntityDescriptor` any attributes from `EntitiesDescriptor` that are not already present.
1. Remove the `<EntitiesDescriptor>` tag from the beginning.
1. Remove the `</EntitiesDescriptor>` tag from the end of the XML.
You are left with something similar to the example below:
```
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" entityID="https://{KEYCLOAK-URL}/auth/realms/{REALM-NAME}">
....
</EntityDescriptor>
```
{{% /tab %}}
{{% tab "Keycloak 14+" %}}
1. From the **Configure** section, click the **Realm Settings** tab.
1. Click the **General** tab.
1. From the **Endpoints** field, click **SAML 2.0 Identity Provider Metadata**.
Verify the IDP metadata contains the following attributes:
```
xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
```
Some browsers, such as Firefox, may render/process the document such that the contents appear to have been modified, and some attributes appear to be missing. In this situation, use the raw response data that can be found using your browser.
The following is an example process for Firefox, but will vary slightly for other browsers:
1. Press **F12** to access the developer console.
1. Click the **Network** tab.
1. From the table, click the row containing `descriptor`.
1. From the details pane, click the **Response** tab.
1. Copy the raw response data.
{{% /tab %}}
{{% /tabs %}}
## Configuring Keycloak in Rancher
@@ -40,7 +40,21 @@ You can override the primary color used throughout the UI with a custom color of
### Fixed Banners
{{% tabs %}}
{{% tab "Rancher before v2.6.4" %}}
Display a custom fixed banner in the header, footer, or both.
{{% /tab %}}
{{% tab "Rancher v2.6.4+" %}}
Display a custom fixed banner in the header, footer, or both.
As of Rancher v2.6.4, configuration of fixed banners has moved from the **Branding** tab to the **Banners** tab.
To configure banner settings,
1. Click **☰ > Global settings**.
2. Click **Banners**.
{{% /tab %}}
{{% /tabs %}}
# Custom Navigation Links
@@ -85,16 +85,30 @@ spec:
>**Important:** The field `encryptionConfigSecretName` must be set only if your backup was created with encryption enabled. Provide the name of the Secret containing the encryption config file. If you only have the encryption config file, but don't have a secret created with it in this cluster, use the following steps to create the secret:
1. The encryption configuration file must be named `encryption-provider-config.yaml`, and the `--from-file` flag must be used to create this secret. So save your `EncryptionConfiguration` in a file called `encryption-provider-config.yaml` and run this command:
```
kubectl create secret generic encryptionconfig \
--from-file=./encryption-provider-config.yaml \
-n cattle-resources-system
```
1. Then apply the resource:
```
kubectl apply -f migrationResource.yaml
```
```
kubectl create secret generic encryptionconfig \
--from-file=./encryption-provider-config.yaml \
-n cattle-resources-system
```
1. Apply the manifest, and watch the Restore resource's status:
Apply the resource:
```
kubectl apply -f migrationResource.yaml
```
Watch the Restore status:
```
kubectl get restore
```
Watch the restoration logs:
```
kubectl logs -n cattle-resources-system --tail 100 -f rancher-backup-xxx-xxx
```
Once the Restore resource has the status `Completed`, you can continue the Rancher installation.
### 3. Install cert-manager
@@ -19,3 +19,22 @@ Certificates can be rotated for the following services:
> **Note:** If you didn't rotate your webhook certificates and they have expired after one year, please see this [page]({{<baseurl>}}/rancher/v2.6/en/troubleshooting/expired-webhook-certificates/) for help.
### Certificate Rotation
Rancher-launched Kubernetes clusters have the ability to rotate the auto-generated certificates through the UI.
1. In the **Global** view, navigate to the cluster for which you want to rotate certificates.
2. Select **⋮ > Rotate Certificates**.
3. Select the certificates that you want to rotate.
* Rotate all Service certificates (keep the same CA)
* Rotate an individual service and choose one of the services from the drop-down menu
4. Click **Save**.
**Results:** The selected certificates will be rotated and the related services will be restarted to start using the new certificate.
> **Note:** Even though the RKE CLI can use custom certificates for the Kubernetes cluster components, Rancher currently doesn't provide the ability to upload these in Rancher-launched Kubernetes clusters.
@@ -341,7 +341,10 @@ Example:
local_cluster_auth_endpoint:
enabled: true
fqdn: "FQDN"
ca_certs: "BASE64_CACERT"
ca_certs: |-
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
```
### Custom Network Plug-in
@@ -14,7 +14,6 @@ You can use Rancher to create a cluster hosted in Microsoft Azure Kubernetes Ser
- [Role-based Access Control](#role-based-access-control)
- [AKS Cluster Configuration Reference](#aks-cluster-configuration-reference)
- [Private Clusters](#private-clusters)
- [Minimum AKS Permissions](#minimum-aks-permissions)
- [Syncing](#syncing)
- [Programmatically Creating AKS Clusters](#programmatically-creating-aks-clusters)
@@ -168,7 +168,7 @@ Also in the K3s documentation, nodes with the worker role are called agent nodes
# Debug Logging and Troubleshooting for Registered K3s Clusters
Nodes are upgraded by the system upgrade controller running in the downstream cluster. Based on the cluster configuration, Rancher deploys two [plans](https://github.com/rancher/system-upgrade-controller#example-upgrade-plan) to upgrade K3s nodes: one for controlplane nodes and one for workers. The system upgrade controller follows the plans and upgrades the nodes.
Nodes are upgraded by the system upgrade controller running in the downstream cluster. Based on the cluster configuration, Rancher deploys two [plans](https://github.com/rancher/system-upgrade-controller#example-upgrade-plan) to upgrade K3s nodes: one for controlplane nodes and one for workers. The system upgrade controller follows the plans and upgrades the nodes.
To enable debug logging on the system upgrade controller deployment, edit the [configmap](https://github.com/rancher/system-upgrade-controller/blob/50a4c8975543d75f1d76a8290001d87dc298bdb4/manifests/system-upgrade-controller.yaml#L32) to set the debug environment variable to true. Then restart the `system-upgrade-controller` pod.
@@ -196,7 +196,7 @@ Authorized Cluster Endpoint (ACE) support has been added for registered RKE2 and
> **Note:**
>
> - These steps only need to be performed on the control plane nodes of the downstream cluster. You must configure each control plane node individually.
> - These steps only need to be performed on the control plane nodes of the downstream cluster. You must configure each control plane node individually.
>
> - The following steps will work on both RKE2 and K3s clusters registered in v2.6.x as well as those registered (or imported) from a previous version of Rancher with an upgrade to v2.6.x.
>
@@ -223,19 +223,19 @@ Authorized Cluster Endpoint (ACE) support has been added for registered RKE2 and
context:
user: Default
cluster: Default
1. Add the following to the config file (or create one if it doesn't exist); note that the default location is `/etc/rancher/{rke2,k3s}/config.yaml`:
kube-apiserver-arg:
- authentication-token-webhook-config-file=/var/lib/rancher/{rke2,k3s}/kube-api-authn-webhook.yaml
1. Run the following commands:
sudo systemctl stop {rke2,k3s}-server
sudo systemctl start {rke2,k3s}-server
1. Finally, you **must** go back to the Rancher UI and edit the imported cluster there to complete the ACE enablement. Click on **⋮ > Edit Config**, then click the **Networking** tab under Cluster Configuration, and click the **Enabled** button for **Authorized Endpoint**. Once the ACE is enabled, you then have the option of entering a fully qualified domain name (FQDN) and certificate information.
>**Note:** The <b>FQDN</b> field is optional, and if one is entered, it should point to the downstream cluster. Certificate information is only needed if there is a load balancer in front of the downstream cluster that is using an untrusted certificate. If you have a valid certificate, then nothing needs to be added to the <b>CA Certificates</b> field.
# Annotating Registered Clusters
@@ -286,4 +286,3 @@ To annotate a registered cluster,
1. Click **Save**.
**Result:** The annotation does not give the capabilities to the cluster, but it does indicate to Rancher that the cluster has those capabilities.
@@ -0,0 +1,34 @@
---
title: Behavior Differences Between RKE1 and RKE2
weight: 2450
---
RKE2, also known as RKE Government, is a Kubernetes distribution that focuses on security and compliance for U.S. Federal Government entities. It is considered the next iteration of the Rancher Kubernetes Engine, now known as RKE1.
RKE1 and RKE2 have several slight behavioral differences to note, and this page will highlight some of these at a high level.
### Control Plane Components
RKE1 uses Docker for deploying and managing control plane components, and it also uses Docker as the container runtime for Kubernetes. By contrast, RKE2 launches control plane components as static pods that are managed by the kubelet. RKE2's container runtime is containerd, which supports features such as container registry mirroring, something RKE1 with Docker does not.
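As a brief illustration of the registry mirroring difference, RKE2 configures containerd mirrors through a `registries.yaml` file on each node; the following is a minimal sketch in which the mirror endpoint is a placeholder:
```
# Sketch: point pulls of docker.io images at a private mirror (placeholder URL).
# The file must exist before the rke2 service starts on the node.
cat <<'EOF' | sudo tee /etc/rancher/rke2/registries.yaml
mirrors:
  docker.io:
    endpoint:
      - "https://registry-mirror.example.com"
EOF
```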
### Cluster API
RKE2/K3s provisioning is built on top of the upstream Cluster API (CAPI) framework, which often makes RKE2-provisioned clusters behave differently from RKE1-provisioned clusters.
When you make changes to your cluster configuration in RKE2, this **may** result in nodes being reprovisioned. This is controlled by CAPI controllers and not by Rancher itself. Note that the same behavior does not apply to etcd nodes.
The following are some specific example configuration changes that may cause the described behavior:
- When editing the cluster and enabling `drain before delete`, the existing control plane and worker nodes are deleted and new nodes are created.
- When nodes are being provisioned and a scale-down operation is performed, rather than scaling down to the desired number of nodes, the currently provisioning nodes may be deleted and new nodes provisioned to reach the desired node count. Please note that this is a bug in Cluster API, and it will be fixed in an upcoming release. Once fixed, Rancher will update the documentation.
Users who are used to RKE1 provisioning should take note of this new RKE2 behavior, which may be unexpected.
### Terminology
You will notice that some terms have changed or gone away in the move from RKE1 to RKE2. For example, in RKE1 provisioning, you use **node templates**; in RKE2 provisioning, you configure your cluster node pools when creating or editing the cluster. Another example is that the term **node pool** in RKE1 is now known as **machine pool** in RKE2.
@@ -10,7 +10,10 @@ This page covers how to install the Cloud Provider Interface (CPI) and Cloud Sto
# Prerequisites
The following vSphere versions are supported:
* 6.7u3
* 7.0u1 or higher
The Kubernetes version must be 1.19 or higher.
@@ -34,6 +34,11 @@ Choose the default security group or configure a security group.
Please refer to [Amazon EC2 security group when using Node Driver]({{<baseurl>}}/rancher/v2.6/en/installation/requirements/ports/#rancher-aws-ec2-security-group) to see what rules are created in the `rancher-nodes` Security Group.
---
**_New in v2.6.4_**
If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you are responsible for ensuring that your security group allows the [necessary ports for Rancher to provision the instance]({{<baseurl>}}/rancher/v2.6/en/installation/requirements/ports/#ports-for-rancher-server-nodes-on-rke). For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer to the [AWS documentation](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups).
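For example, opening one of the required ports on a user-managed security group with the AWS CLI could look like the following sketch, where the group ID and source CIDR are placeholders:
```
# Sketch: allow inbound SSH, which Rancher uses to provision the instance
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr 203.0.113.0/24
```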
### Instance Options
Configure the instances that will be created. Make sure you configure the correct **SSH User** for the configured AMI. A selected region may not support the default instance type; in this scenario, you must select an instance type that does exist, otherwise an error will occur stating that the requested configuration is not supported.
@@ -74,11 +74,14 @@ To add a private CA for Helm Chart repositories:
[...]
```
- **Git-based chart repositories**: You must add a base64 encoded copy of the CA certificate in DER format to the `spec.caBundle` field of the chart repo. You can generate this value with a command such as `openssl x509 -outform der -in ca.pem | base64 -w0`. Click **Edit YAML** for the chart repo and set it as in the following example:
```
[...]
spec:
  caBundle:
    MIIFXzCCA0egAwIBAgIUWNy8WrvSkgNzV0zdWRP79j9cVcEwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMRQwEgYDVQQKDAtNeU9yZywgSW5jLjENMAsGA1UEAwwEcm9vdDAeFw0yMTEyMTQwODMyMTdaFw0yNDEwMDMwODMyMT
    ...
    nDxZ/tNXt/WPJr/PgEB3hQdInDWYMg7vGO0Oz00G5kWg0sJ0ZTSoA10ZwdjIdGEeKlj1NlPyAqpQ+uDnmx6DW+zqfYtLnc/g6GuLLVPamraqN+gyU8CHwAWPNjZonFN9Vpg0PIk1I2zuOc4EHifoTAXSpnjfzfyAxCaZsnTptimlPFJJqAMj+FfDArGmr4=
[...]
```
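A sketch of generating the value and applying it from the command line instead of the UI; the `ClusterRepo` name `my-git-repo` is hypothetical:
```
# Encode the CA certificate in DER format as base64 (assumes the CA is in ca.pem)
CA_BUNDLE=$(openssl x509 -outform der -in ca.pem | base64 -w0)
# Patch the chart repo resource with the encoded bundle
kubectl patch clusterrepo my-git-repo --type merge \
  -p "{\"spec\":{\"caBundle\":\"${CA_BUNDLE}\"}}"
```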
@@ -17,7 +17,7 @@ In this section, you'll learn how to deploy Rancher on a Kubernetes cluster usin
### Kubernetes Cluster
Set up the Rancher server's local Kubernetes cluster.
Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS.
@@ -104,7 +104,7 @@ There are three recommended options for the source of the certificate used for T
### 4. Install cert-manager
> You should skip this step if you are bringing your own certificate files (option `ingress.tls.source=secret`), or if you use [TLS termination on an external load balancer]({{<baseurl>}}/rancher/v2.6/en/installation/install-rancher-on-k8s/chart-options/#external-tls-termination).
This step is only required to use certificates issued by Rancher's generated CA (`ingress.tls.source=rancher`) or to request Let's Encrypt issued certificates (`ingress.tls.source=letsEncrypt`).
@@ -148,6 +148,8 @@ cert-manager-webhook-787858fcdb-nlzsq 1/1 Running 0 2m
The exact command to install Rancher differs depending on the certificate configuration.
However, irrespective of the certificate configuration, the name of the Rancher installation in the `cattle-system` namespace should always be `rancher`.
> **Tip for testing and development:** This final command to install Rancher requires a domain name that forwards traffic to Rancher. If you are using the Helm CLI to set up a proof-of-concept, you can use a fake domain name when passing the `hostname` option. An example of a fake domain name would be `<IP_OF_LINUX_NODE>.sslip.io`, which would expose Rancher on an IP where it is running. Production installs would require a real domain name.
{{% tabs %}}
@@ -160,7 +162,7 @@ Because `rancher` is the default option for `ingress.tls.source`, we are not spe
- Set the `hostname` to the DNS name you pointed at your load balancer.
- Set the `bootstrapPassword` to something unique for the `admin` user.
- If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
- To install a specific Rancher version, use the `--version` flag, for example: `--version 2.3.6`
```
@@ -192,7 +194,7 @@ In the following command,
- `ingress.tls.source` is set to `letsEncrypt`
- `letsEncrypt.email` is set to the email address used for communication about your certificate (for example, expiry notices)
- Set `letsEncrypt.ingress.class` to whatever your ingress controller is, e.g., `traefik`, `nginx`, `haproxy`, etc.
- If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
```
helm install rancher rancher-<CHART_REPO>/rancher \
@@ -225,7 +227,7 @@ Although an entry in the `Subject Alternative Names` is technically required, ha
- Set the `hostname`.
- Set the `bootstrapPassword` to something unique for the `admin` user.
- Set `ingress.tls.source` to `secret`.
- If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
```
helm install rancher rancher-<CHART_REPO>/rancher \
@@ -65,7 +65,7 @@ helm upgrade --install rancher rancher-latest/rancher \
--namespace cattle-system \
--set hostname=rancher.example.com \
--set proxy=http://${proxy_host} \
--set noProxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local
```
After waiting for the deployment to finish:
@@ -9,7 +9,7 @@ Once the infrastructure is ready, you can continue with setting up an RKE cluste
First, you have to install Docker and set up the HTTP proxy on all three Linux nodes. To do so, perform the following steps on all three nodes.
For convenience, export the IP address and port of your proxy into an environment variable and set up the HTTP_PROXY variables for your current shell:
```
export proxy_host="10.0.0.5:8888"
@@ -58,6 +58,24 @@ sudo systemctl daemon-reload
sudo systemctl restart docker
```
#### Air-gapped proxy
_New in v2.6.4_
You can now provision node driver clusters from an air-gapped cluster configured to use a proxy for outbound connections.
In addition to setting the default rules for a proxy server, you will need to add the additional rules shown below to provision node driver clusters from a proxied Rancher environment.
The rules below use Squid ACL syntax, so configure the filepath according to your proxy setup, e.g., `/etc/squid/squid.conf`:
```
acl SSL_ports port 22
acl SSL_ports port 2376
acl Safe_ports port 22 # ssh
acl Safe_ports port 2376 # docker port
```
### Creating the RKE Cluster
You need several command line tools on the host where you have SSH access to the Linux nodes to create and interact with the cluster:
@@ -40,3 +40,21 @@ docker run -d --restart=unless-stopped \
```
Privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)
### Air-gapped proxy configuration
_New in v2.6.4_
You can now provision node driver clusters from an air-gapped cluster configured to use a proxy for outbound connections.
In addition to setting the default rules for a proxy server as shown above, you will need to add the additional rules shown below to provision node driver clusters from a proxied Rancher environment.
The rules below use Squid ACL syntax, so configure the filepath according to your proxy setup, e.g., `/etc/squid/squid.conf`:
```
acl SSL_ports port 22
acl SSL_ports port 2376
acl Safe_ports port 22 # ssh
acl Safe_ports port 2376 # docker port
```
@@ -3,7 +3,7 @@ title: Opening Ports with firewalld
weight: 1
---
> We recommend disabling firewalld. For Kubernetes 1.19.x and higher, firewalld must be turned off.
Some distributions of Linux [derived from RHEL,](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Rebuilds) including Oracle Linux, may have default firewall rules that block communication with Helm.
@@ -11,6 +11,7 @@ A summary of the steps is as follows:
2. Create or update the `tls-ca` Kubernetes secret resource with the root CA certificate (only required when using a private CA).
3. Update the Rancher installation using the Helm CLI.
4. Reconfigure the Rancher agents to trust the new CA certificate.
5. Select Force Update of Fleet clusters to connect fleet-agent to Rancher.
The details of these instructions are below.
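For reference, step 2 amounts to recreating the `tls-ca` secret from the new root CA certificate; a minimal sketch, assuming the certificate is in `cacerts.pem`:
```
# Replace the tls-ca secret with the new root CA certificate
kubectl -n cattle-system delete secret tls-ca
kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem=./cacerts.pem
```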
@@ -143,3 +144,11 @@ First, generate the agent definitions as described here: https://gist.github.com
Then, connect to a controlplane node of the downstream cluster via SSH, create a Kubeconfig and apply the definitions:
https://gist.github.com/superseb/b14ed3b5535f621ad3d2aa6a4cd6443b
# 5. Select Force Update of Fleet clusters to connect fleet-agent to Rancher
Select 'Force Update' for the clusters within the [Continuous Delivery]({{<baseurl>}}/rancher/v2.6/en/deploy-across-clusters/fleet/#accessing-fleet-in-the-rancher-ui) view of the Rancher UI to allow the fleet-agent in downstream clusters to successfully connect to Rancher.
### Why is this step required?
Fleet agents in Rancher-managed clusters store a kubeconfig that is used to connect to the Rancher-proxied kube-api in the `fleet-agent` secret of the `fleet-system` namespace. The kubeconfig contains a `certificate-authority-data` block containing the Rancher CA. When the Rancher CA changes, this block must be updated for the fleet-agent to connect to Rancher successfully.
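You can confirm the stale CA by inspecting that secret; in the following sketch, the `kubeconfig` data key is an assumption about the secret's layout, and newer releases may use the `cattle-fleet-system` namespace instead:
```
# Show the CA block embedded in the fleet-agent kubeconfig
kubectl -n fleet-system get secret fleet-agent \
  -o jsonpath='{.data.kubeconfig}' | base64 -d | grep certificate-authority-data
```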
@@ -19,18 +19,38 @@ The resource quota includes two limits, which you set while creating or editing
- **Project Limits:**
This set of values configures a total limit for each specified resource shared among all namespaces in the project.
- **Namespace Default Limits:**
This set of values configures the default quota limit available for each namespace for each specified resource.
When a namespace is created in the project without overrides, this limit is automatically bound to the namespace and enforced.
In the following diagram, a Rancher administrator wants to apply a resource quota that sets the same CPU and memory limit for every namespace in their project (`Namespace 1-4`). However, in Rancher, the administrator can set a resource quota for the project (`Project Resource Quota`) rather than individual namespaces. This quota includes resource limits for both the entire project (`Project Limit`) and individual namespaces (`Namespace Default Limit`). Rancher then propagates the `Namespace Default Limit` quotas to each namespace (`Namespace Resource Quota`) when created.
<sup>Rancher: Resource Quotas Propagating to Each Namespace</sup>
![Rancher Resource Quota Implementation]({{<baseurl>}}/img/rancher/rancher-resource-quota.png)
Let's highlight some more nuanced functionality for namespaces created **_within_** the Rancher UI. If a quota is deleted at the project level, it will also be removed from all namespaces contained within that project, despite any overrides that may exist. Further, updating an existing namespace default limit for a quota at the project level will not result in that value being propagated to existing namespaces in the project; the updated value will only be applied to newly created namespaces in that project. To update a namespace default limit for existing namespaces you can delete and subsequently recreate the quota at the project level with the new default value. This will result in the new default value being applied to all existing namespaces in the project.
Before creating a namespace in a project, Rancher compares the amounts of the project's available resources and requested resources, regardless of whether they come from the default or overridden limits.
If the requested resources exceed the remaining capacity in the project for those resources, Rancher will assign the namespace the remaining capacity for that resource.
However, this is not the case with namespaces created **_outside_** of Rancher's UI. For namespaces created via `kubectl`, Rancher
will assign a resource quota that has a **zero** amount for any resource that requested more capacity than what remains in the project.
To create a namespace in an existing project via `kubectl`, use the `field.cattle.io/projectId` annotation. To override the default
requested quota limit, use the `field.cattle.io/resourceQuota` annotation.
```
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    field.cattle.io/projectId: [your-cluster-ID]:[your-project-ID]
    field.cattle.io/resourceQuota: '{"limit":{"limitsCpu":"100m", "limitsMemory":"100Mi", "configMaps": "50"}}'
  name: my-ns
```
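After creating the namespace, you can verify the quota that Rancher bound to it:
```
kubectl -n my-ns describe resourcequota
```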
The following table explains the key differences between the two quota types.
@@ -6,7 +6,9 @@ weight: 230
To deploy Kubernetes, RKE deploys several core components or services in Docker containers on the nodes. Based on the roles of the node, the containers deployed may be different.
>**Note:** All services support <b>additional custom arguments, Docker mount binds, and extra environment variables.</b>
>
>To configure advanced options for Kubernetes services such as `kubelet`, `kube-controller`, and `kube-apiserver` that are not documented below, see the [`extra_args` documentation]({{<baseurl>}}/rke/latest/en/config-options/services/services-extras/) for more details.
| Component | Services key name in cluster.yml |
|-------------------------|----------------------------------|
@@ -0,0 +1,36 @@
---
title: CIS v1.6 Benchmark - Self-Assessment Guide - Rancher v2.6
weight: 101
---
### CIS v1.6 Kubernetes Benchmark - Rancher v2.6 with Kubernetes v1.18 to v1.21
[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.6/Rancher_v2-6_CIS_v1-6_Benchmark_Assessment.pdf).
#### Overview
This document is a companion to the [Rancher v2.6 security hardening guide]({{<baseurl>}}/rancher/v2.6/en/security/hardening-guides/). The hardening guide provides prescriptive guidance for hardening a production installation of Rancher, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the benchmark.
This guide corresponds to specific versions of the hardening guide, Rancher, CIS Benchmark and Kubernetes:
| Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version |
| ----------------------- | --------------- | --------------------- | ------------------- |
| Hardening Guide CIS v1.6 Benchmark | Rancher v2.6.3 | CIS v1.6 | Kubernetes v1.18, v1.19, v1.20 and v1.21 |
Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark do not apply and will have a result of `Not Applicable`. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher-created clusters.
This document is to be used by Rancher operators, security teams, auditors and decision makers.
For more detail about each audit, including rationales and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.6. You can download the benchmark, after creating a free account, from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
#### Testing controls methodology
Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files.
Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher are provided for testing. When performing the tests, you will need access to the Docker command line on the hosts of all three RKE roles. The commands also make use of the [kubectl](https://kubernetes.io/docs/tasks/tools/) (with a valid configuration file) and [jq](https://stedolan.github.io/jq/) tools, which are required in the testing and evaluation of test results.
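For instance, an audit of a control plane flag on an RKE node could look like the following sketch; the flag shown is illustrative:
```
# Check whether the kube-apiserver container was started with --anonymous-auth
docker inspect kube-apiserver | jq -r '.[0].Args[]' | grep -- '--anonymous-auth'
```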
> NOTE: Only `automated` tests (previously called `scored`) are covered in this guide.
### Controls
---
@@ -0,0 +1,35 @@
---
title: CIS Self Assessment Guide
weight: 90
---
### CIS Kubernetes Benchmark v1.6 - K3s with Kubernetes v1.17 to v1.21
#### Overview
This document is a companion to the [K3s security hardening guide]({{<baseurl>}}/k3s/latest/en/security/hardening_guide/). The hardening guide provides prescriptive guidance for hardening a production installation of K3s, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the CIS Kubernetes Benchmark. It is to be used by K3s operators, security teams, auditors, and decision-makers.
This guide is specific to the **v1.17**, **v1.18**, **v1.19**, **v1.20** and **v1.21** release lines of K3s and the **v1.6** release of the CIS Kubernetes Benchmark.
For more information about each control, including detailed descriptions and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.6. You can download the benchmark, after creating a free account, from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/kubernetes/).
#### Testing controls methodology
Each control in the CIS Kubernetes Benchmark was evaluated against a K3s cluster that was configured according to the accompanying hardening guide.
Where control audits differ from the original CIS benchmark, the audit commands specific to K3s are provided for testing.
These are the possible results for each control:
- **Pass** - The K3s cluster under test passed the audit outlined in the benchmark.
- **Not Applicable** - The control is not applicable to K3s because of how it is designed to operate. The remediation section will explain why this is so.
- **Warn** - The control is manual in the CIS benchmark and it depends on the cluster's use case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure K3s does not prevent their implementation, but no further configuration or auditing of the cluster under test has been performed.
This guide assumes that K3s is running as a systemd unit. If your installation differs, you will need to adjust the "audit" commands to fit your scenario.
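For example, many of the audits inspect the flags the K3s service was started with; under systemd, a sketch of such a check (the unit name may differ in your installation) is:
```
# Review the ExecStart flags of the K3s systemd unit
systemctl cat k3s | grep -A5 ExecStart
```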
> NOTE: Only `automated` tests (previously called `scored`) are covered in this guide.
### Controls
---
@@ -2,8 +2,9 @@
results=${1:?path to kube-bench json results is a required argument}
test_helpers=${2:?path to kube-bench test_helpers scripts is a required argument}
header=${3:?path to header file is a required argument}
# Abort early if inputs are missing; use a command group so `exit` ends the script
[ -f "${results}" ] || { echo "file: '${results}' does not exist"; exit 1; }
[ -d "${test_helpers}" ] || { echo "dir: '${test_helpers}' is not a valid directory"; exit 1; }
docker run -v "${results}":/source/results.json -v "${test_helpers}":/test_helpers -v "${header}":/headers/header.md -it --rm doc_converters:latest results_to_md
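# Example usage (hypothetical filename for this wrapper script):
#   ./convert.sh ./results.json ./test_helpers ./header.md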
@@ -1,48 +1,11 @@
#!/bin/bash
results_file="${1:-/source/results.json}"
test_helpers="${2:-/test_helpers}"
header_file="${3:-/headers/header.md}"
header() {
cat ${header_file}
}
get_ids() {