Add version 2.9 preview

This commit is contained in:
Billy Tat
2024-05-28 15:47:56 -07:00
parent 2611f98cbb
commit 1da04754f0
1003 changed files with 151079 additions and 0 deletions
@@ -0,0 +1,11 @@
---
title: Advanced User Guides
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides"/>
</head>
Advanced user guides are "problem-oriented" docs in which users learn how to answer questions or solve problems. The major difference between these and the new user guides is that these guides are geared toward more experienced or advanced users who have more technical needs from their documentation. These users already have an understanding of Rancher and its functions. They know what they need to accomplish; they just need additional guidance to complete some more complex task they have encountered while working.
It should be noted that neither new user guides nor advanced user guides provide detailed explanations or discussions (these kinds of docs belong elsewhere). How-to guides focus on the action of guiding users through repeatable, effective steps to learn new skills, master some task, or overcome some problem.
@@ -0,0 +1,17 @@
---
title: CIS Scan Guides
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides"/>
</head>
- [Install rancher-cis-benchmark](install-rancher-cis-benchmark.md)
- [Uninstall rancher-cis-benchmark](uninstall-rancher-cis-benchmark.md)
- [Run a Scan](run-a-scan.md)
- [Run a Scan Periodically on a Schedule](run-a-scan-periodically-on-a-schedule.md)
- [Skip Tests](skip-tests.md)
- [View Reports](view-reports.md)
- [Enable Alerting for rancher-cis-benchmark](enable-alerting-for-rancher-cis-benchmark.md)
- [Configure Alerts for Periodic Scan on a Schedule](configure-alerts-for-periodic-scan-on-a-schedule.md)
- [Create a Custom Benchmark Version to Run](create-a-custom-benchmark-version-to-run.md)
@@ -0,0 +1,44 @@
---
title: Configure Alerts for Periodic Scan on a Schedule
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule"/>
</head>
It is possible to run a ClusterScan on a schedule.
When you schedule a scan, you can also specify whether alerts should be sent when the scan completes.
Alerts are supported only for a scan that runs on a schedule.
The CIS Benchmark application supports two types of alerts:
- Alert on scan completion: This alert is sent out when the scan run finishes. It includes details such as the ClusterScan's name and the ClusterScanProfile name.
- Alert on scan failure: This alert is sent out if any tests fail in the scan run or if the scan is in a `Fail` state.
:::note Prerequisite
Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md)
While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-cis-scan-alerts)
:::
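For reference, in Alertmanager route syntax, the match described above might look like the following sketch. The receiver name is a placeholder; configure the actual route and receiver as described in the linked section:

```yaml
# Sketch of an Alertmanager route that matches rancher-cis-benchmark alerts.
# "cis-receiver" is a placeholder; use the receiver you configured.
route:
  routes:
    - receiver: cis-receiver
      match:
        job: rancher-cis-scan
      group_wait: 30s
      group_interval: 30s
      repeat_interval: 30m
```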
To configure alerts for a scan that runs on a schedule,
1. Enable alerts on the `rancher-cis-benchmark` application. For more information, see [this page](../../../how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark.md).
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
1. Click **CIS Benchmark > Scan**.
1. Click **Create**.
1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
1. Choose the option **Run scan on a schedule**.
1. Enter a valid [cron schedule expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) in the field **Schedule**. For example, `0 0 * * *` runs the scan daily at midnight.
1. Check the boxes next to the Alert types under **Alerting**.
1. Optional: Choose a **Retention Count**, which indicates the number of reports maintained for this recurring scan. By default, this count is 3. When the retention limit is reached, older reports are purged.
1. Click **Create**.
**Result:** The scan runs and reschedules to run according to the cron schedule provided. Alerts are sent out when the scan finishes if routes and receivers are configured in the `rancher-monitoring` application.
A report is generated with the scan results every time the scan runs. To see the latest results, click the name of the scan that appears.
@@ -0,0 +1,13 @@
---
title: Create a Custom Benchmark Version for Running a Cluster Scan
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/create-a-custom-benchmark-version-to-run"/>
</head>
Some Kubernetes cluster setups require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might differ from the standard locations where the upstream CIS Benchmarks look for them.
You can create a custom Benchmark version for running a cluster scan using the `rancher-cis-benchmark` application.
For details, see [this page.](../../../integrations-in-rancher/cis-scans/custom-benchmark.md)
@@ -0,0 +1,24 @@
---
title: Enable Alerting for Rancher CIS Benchmark
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark"/>
</head>
Alerts can be configured to be sent out for a scan that runs on a schedule.
:::note Prerequisite:
Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md)
While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-cis-scan-alerts)
:::
While installing or upgrading the `rancher-cis-benchmark` Helm chart, set the following flag to `true` in the `values.yaml`:
```yaml
alerts:
enabled: true
```
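The chart is typically managed through the Rancher **Apps** UI, but if you manage it with the Helm CLI, an equivalent upgrade might look like the following sketch. The repository alias and release name are assumptions; match them to how the chart was installed:

```bash
# Sketch: enable CIS Benchmark alerts via the Helm CLI.
helm upgrade --install rancher-cis-benchmark rancher-charts/rancher-cis-benchmark \
  --namespace cis-operator-system \
  --set alerts.enabled=true
```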
@@ -0,0 +1,21 @@
---
title: Install Rancher CIS Benchmark
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/install-rancher-cis-benchmark"/>
</head>
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to install CIS Benchmark and click **Explore**.
1. In the left navigation bar, click **Apps > Charts**.
1. Click **CIS Benchmark**.
1. Click **Install**.
**Result:** The CIS scan application is deployed on the Kubernetes cluster.
:::note
If you are running Kubernetes v1.24 or earlier, and have a [Pod Security Policy](../../new-user-guides/authentication-permissions-and-global-configuration/create-pod-security-policies.md) (PSP) hardened cluster, CIS Benchmark 4.0.0 and later disable PSPs by default. To install CIS Benchmark on a PSP-hardened cluster, set `global.psp.enabled` to `true` in the values before installing the chart. [Pod Security Admission](../../new-user-guides/authentication-permissions-and-global-configuration/pod-security-standards.md) (PSA) hardened clusters aren't affected.
:::
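As a sketch, the values override described in the note above looks like this:

```yaml
# Values override for installing CIS Benchmark 4.0.0+ on a PSP-hardened
# cluster running Kubernetes v1.24 or earlier.
global:
  psp:
    enabled: true
```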
@@ -0,0 +1,24 @@
---
title: Run a Scan Periodically on a Schedule
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan-periodically-on-a-schedule"/>
</head>
To run a ClusterScan on a schedule,
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
1. Click **CIS Benchmark > Scan**.
1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
1. Choose the option **Run scan on a schedule**.
1. Enter a valid <a href="https://en.wikipedia.org/wiki/Cron#CRON_expression" target="_blank">cron schedule expression</a> in the field **Schedule**. For example, `0 0 * * *` runs the scan daily at midnight.
1. Choose a **Retention** count, which indicates the number of reports maintained for this recurring scan. By default, this count is 3. When the retention limit is reached, older reports are purged.
1. Click **Create**.
**Result:** The scan runs and reschedules to run according to the cron schedule provided. The **Next Scan** value indicates the next time this scan will run again.
A report is generated with the scan results every time the scan runs. To see the latest results, click the name of the scan that appears.
You can also see the previous reports by choosing the report from the **Reports** dropdown on the scan detail page.
@@ -0,0 +1,26 @@
---
title: Run a Scan
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan"/>
</head>
When a ClusterScan custom resource is created, it launches a new CIS scan on the cluster for the chosen ClusterScanProfile.
:::note
Currently, only one CIS scan can run at a time on a cluster. If you create multiple ClusterScan custom resources, the operator runs them one after the other; until the current scan finishes, the remaining ClusterScan custom resources stay in the "Pending" state.
:::
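For reference, a minimal ClusterScan sketch looks like the following. The scan and profile names are examples; `scanProfileName` must reference a ClusterScanProfile that exists on your cluster:

```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
  name: example-scan
spec:
  scanProfileName: rke-profile-permissive # example; use a profile present on your cluster
```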
To run a scan,
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
1. Click **CIS Benchmark > Scan**.
1. Click **Create**.
1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
1. Click **Create**.
**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears.
@@ -0,0 +1,38 @@
---
title: Skip Tests
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/skip-tests"/>
</head>
CIS scans can be run using test profiles with user-defined skips.
To skip tests, you will create a custom CIS scan profile. A profile contains the configuration for the CIS scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark.
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
1. Click **CIS Benchmark > Profile**.
1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To base a new profile on an existing one, go to the existing profile and click **⋮ > Clone**. If you are filling out the form, add the IDs of the tests to skip, using the relevant CIS Benchmark as a reference. If you are creating the new test profile as YAML, add the IDs of the tests to skip in the `skipTests` directive. You will also give the profile a name:
```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScanProfile
metadata:
annotations:
meta.helm.sh/release-name: clusterscan-operator
meta.helm.sh/release-namespace: cis-operator-system
labels:
app.kubernetes.io/managed-by: Helm
name: "<example-profile>"
spec:
benchmarkVersion: cis-1.5
skipTests:
- "1.1.20"
- "1.1.21"
```
1. Click **Create**.
**Result:** A new CIS scan profile is created.
When you [run a scan](./run-a-scan.md) that uses this profile, the defined tests will be skipped during the scan. The skipped tests will be marked in the generated report as `Skip`.
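If you created the profile as YAML, you can verify it from the command line. This is a sketch that assumes `kubectl` access to the cluster and the example profile name used above:

```bash
# List all scan profiles, then print the tests a given profile skips.
kubectl get clusterscanprofiles
kubectl get clusterscanprofile <example-profile> -o jsonpath='{.spec.skipTests}'
```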
@@ -0,0 +1,13 @@
---
title: Uninstall Rancher CIS Benchmark
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/uninstall-rancher-cis-benchmark"/>
</head>
1. From the **Cluster Dashboard,** go to the left navigation bar and click **Apps > Installed Apps**.
1. Go to the `cis-operator-system` namespace and check the boxes next to `rancher-cis-benchmark-crd` and `rancher-cis-benchmark`.
1. Click **Delete** and confirm **Delete**.
**Result:** The `rancher-cis-benchmark` application is uninstalled.
@@ -0,0 +1,16 @@
---
title: View Reports
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/view-reports"/>
</head>
To view the generated CIS scan reports,
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
1. Click **CIS Benchmark > Scan**.
1. The **Scans** page shows the generated reports. To see a detailed report, go to a scan report and click its name.
You can download the report from the **Scans** list or from the scan detail page.
@@ -0,0 +1,264 @@
---
title: Docker Install with TLS Termination at Layer-7 NGINX Load Balancer
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/configure-layer-7-nginx-load-balancer"/>
</head>
For development and testing environments that have a special requirement to terminate TLS/SSL at a load balancer instead of your Rancher Server container, deploy Rancher and configure a load balancer to work in conjunction with it.
A layer-7 load balancer can be beneficial if you want to centralize TLS termination in your infrastructure. Layer-7 load balancing also lets the load balancer make decisions based on HTTP attributes, such as cookies, that a layer-4 load balancer cannot inspect.
This install procedure walks you through deployment of Rancher using a single container, and then provides a sample configuration for a layer-7 NGINX load balancer.
## Requirements for OS, Docker, Hardware, and Networking
Make sure that your node fulfills the general [installation requirements.](../../getting-started/installation-and-upgrade/installation-requirements/installation-requirements.md)
## Installation Outline

1. [Provision Linux Host](#1-provision-linux-host)
2. [Choose an SSL Option and Install Rancher](#2-choose-an-ssl-option-and-install-rancher)
3. [Configure Load Balancer](#3-configure-load-balancer)
## 1. Provision Linux Host
Provision a single Linux host according to our [Requirements](../../getting-started/installation-and-upgrade/installation-requirements/installation-requirements.md) to launch your Rancher Server.
## 2. Choose an SSL Option and Install Rancher
For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.
:::note Do you want to...
- Complete an Air Gap Installation?
- Record all transactions with the Rancher API?
See [Advanced Options](#advanced-options) below before continuing.
:::
Choose from the following options:
<details id="option-a">
<summary>Option A: Bring Your Own Certificate (Self-Signed)</summary>
If you elect to use a self-signed certificate to encrypt communication, you must install the certificate on your load balancer (which you'll do later) and your Rancher container. Run the Docker command to deploy Rancher, pointing it toward your certificate.
:::note Prerequisites:
Create a self-signed certificate.
- The certificate files must be in PEM format.
:::
**To Install Rancher Using a Self-Signed Cert:**
1. While running the Docker command to deploy Rancher, point Docker toward your CA certificate file.
```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /etc/your_certificate_directory/cacerts.pem:/etc/rancher/ssl/cacerts.pem \
  --privileged \
  rancher/rancher:latest
```
</details>
<details id="option-b">
<summary>Option B: Bring Your Own Certificate (Signed by Recognized CA)</summary>
If your cluster is public facing, it's best to use a certificate signed by a recognized CA.
:::note Prerequisites:
- The certificate files must be in PEM format.
:::
**To Install Rancher Using a Cert Signed by a Recognized CA:**
If you use a certificate signed by a recognized CA, you don't need to install the certificate in the Rancher container. However, you must make sure no default CA certificate is generated and stored; you can do this by passing the `--no-cacerts` parameter to the container.
1. Enter the following command.
```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest --no-cacerts
```
</details>
## 3. Configure Load Balancer
When using a load balancer in front of your Rancher container, there's no need for the container to redirect communication from port 80 to port 443. Passing the `X-Forwarded-Proto: https` header disables this redirect.
The load balancer or proxy has to be configured to support the following:
- **WebSocket** connections
- **SPDY** / **HTTP/2** protocols
- Passing / setting the following headers:
| Header | Value | Description |
|--------|-------|-------------|
| `Host` | Hostname used to reach Rancher. | To identify the server requested by the client.
| `X-Forwarded-Proto` | `https` | To identify the protocol that a client used to connect to the load balancer or proxy.<br /><br/>**Note:** If this header is present, `rancher/rancher` does not redirect HTTP to HTTPS.
| `X-Forwarded-Port` | Port used to reach Rancher. | To identify the port that the client used to connect to the load balancer or proxy.
| `X-Forwarded-For` | IP of the client connection. | To identify the originating IP address of a client.
### Example NGINX configuration
This NGINX configuration is tested on NGINX 1.14.
:::note
This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - HTTP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/).
:::
- Replace `rancher-server` with the IP address or hostname of the node running the Rancher container.
- Replace both occurrences of `FQDN` with the DNS name for Rancher.
- Replace `/certs/fullchain.pem` and `/certs/privkey.pem` with the locations of the server certificate and the server certificate key, respectively.
```
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

http {
    upstream rancher {
        server rancher-server:80;
    }

    map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
    }

    server {
        listen 443 ssl http2;
        server_name FQDN;
        ssl_certificate /certs/fullchain.pem;
        ssl_certificate_key /certs/privkey.pem;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://rancher;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            # Allows the execute shell window to remain open for up to 15 minutes.
            # Without this parameter, the default is 1 minute and the window closes automatically.
            proxy_read_timeout 900s;
            proxy_buffering off;
        }
    }

    server {
        listen 80;
        server_name FQDN;
        return 301 https://$server_name$request_uri;
    }
}
```
<br/>
## What's Next?
- **Recommended:** Review Single Node [Backup](../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-docker-installed-rancher.md) and [Restore](../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-docker-installed-rancher.md). Although you don't have any data you need to back up right now, we recommend creating backups after regular Rancher use.
- Create a Kubernetes cluster: [Provisioning Kubernetes Clusters](../new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md).
<br/>
## FAQ and Troubleshooting
For help troubleshooting certificates, see [this section.](../../getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/certificate-troubleshooting.md)
## Advanced Options
### API Auditing
If you want to record all transactions with the Rancher API, enable the [API Auditing](enable-api-audit-log.md) feature by adding the flags below into your install command.
```
-e AUDIT_LEVEL=1 \
-e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
-e AUDIT_LOG_MAXAGE=20 \
-e AUDIT_LOG_MAXBACKUP=20 \
-e AUDIT_LOG_MAXSIZE=100 \
```
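For reference, the flags slot into the Docker install command like the following sketch. The host directory `/var/log/rancher/auditlog` is an example; mount any host path over the default `AUDIT_LOG_PATH` directory so the log survives container restarts:

```bash
# Sketch: Docker install with API auditing enabled.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /var/log/rancher/auditlog:/var/log/auditlog \
  -e AUDIT_LEVEL=1 \
  -e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
  -e AUDIT_LOG_MAXAGE=20 \
  -e AUDIT_LOG_MAXBACKUP=20 \
  -e AUDIT_LOG_MAXSIZE=100 \
  --privileged \
  rancher/rancher:latest
```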
### Air Gap
If you are visiting this page to complete an [Air Gap Installation](../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md), you must prepend your private registry URL to the server tag when running the installation command in the option that you choose. Prepend `<REGISTRY.DOMAIN.COM:PORT>`, your private registry URL, to `rancher/rancher:latest`.
**Example:**
```
<REGISTRY.DOMAIN.COM:PORT>/rancher/rancher:latest
```
### Persistent Data
Rancher uses etcd as a datastore. When Rancher is installed with Docker, the embedded etcd is used. The persistent data is at the following path in the container: `/var/lib/rancher`.
You can bind mount a host volume to this location to preserve data on the host it is running on:
```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /opt/rancher:/var/lib/rancher \
--privileged \
rancher/rancher:latest
```
This operation requires [privileged access](../../getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher).
This layer 7 NGINX configuration is tested on NGINX version 1.13 (mainline) and 1.14 (stable).
:::note
This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - TCP and UDP Load Balancer](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/).
:::
```
upstream rancher {
    server rancher-server:80;
}

map $http_upgrade $connection_upgrade {
    default Upgrade;
    ''      close;
}

server {
    listen 443 ssl http2;
    server_name rancher.yourdomain.com;
    ssl_certificate /etc/your_certificate_directory/fullchain.pem;
    ssl_certificate_key /etc/your_certificate_directory/privkey.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://rancher;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        # Allows the execute shell window to remain open for up to 15 minutes.
        # Without this parameter, the default is 1 minute and the window closes automatically.
        proxy_read_timeout 900s;
        proxy_buffering off;
    }
}

server {
    listen 80;
    server_name rancher.yourdomain.com;
    return 301 https://$server_name$request_uri;
}
```
<br/>
@@ -0,0 +1,249 @@
---
title: Enabling the API Audit Log in Downstream Clusters
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-api-audit-log-in-downstream-clusters"/>
</head>
Kubernetes auditing provides a security-relevant, chronological set of records about a cluster. The kube-apiserver performs auditing: each request generates an event at each stage of its execution, which is then preprocessed according to a certain policy and written to a backend. The policy determines what's recorded, and the backend persists the records.
You might want to configure the audit log as part of compliance with the Center for Internet Security (CIS) Kubernetes Benchmark controls.
For configuration details, refer to the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/).
<Tabs groupId="k8s-distro">
<TabItem value="RKE2" default>
### Method 1 (Recommended): Set `audit-policy-file` in `machineGlobalConfig`
You can set `audit-policy-file` in the configuration file. Rancher delivers the file to the path `/var/lib/rancher/rke2/etc/config-files/audit-policy-file` in control plane nodes, and sets the proper options in the RKE2 server.
Example:
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
rkeConfig:
machineGlobalConfig:
audit-policy-file: |
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
resources:
- group: ""
resources:
- pods
```
### Method 2: Use the `machineSelectorFiles` and `machineGlobalConfig` Directives
:::note
This feature is available in Rancher v2.7.2 and later.
:::
You can use `machineSelectorFiles` to deliver the audit policy file to the control plane nodes, and `machineGlobalConfig` to set the options on kube-apiserver.
As a prerequisite, you must create a [secret](../new-user-guides/kubernetes-resources-setup/secrets.md) or [configmap](../new-user-guides/kubernetes-resources-setup/configmaps.md) to be the source of the audit policy.
The secret or configmap must meet the following requirements:
1. It must be in the `fleet-default` namespace where the Cluster object exists.
2. It must have the annotation `rke.cattle.io/object-authorized-for-clusters: <cluster-name1>,<cluster-name2>` which permits the target clusters to use it.
:::tip
Rancher Dashboard provides an easy-to-use form for creating the secret or configmap.
:::
Example:
```yaml
apiVersion: v1
data:
audit-policy: >-
IyBMb2cgYWxsIHJlcXVlc3RzIGF0IHRoZSBNZXRhZGF0YSBsZXZlbC4KYXBpVmVyc2lvbjogYXVkaXQuazhzLmlvL3YxCmtpbmQ6IFBvbGljeQpydWxlczoKLSBsZXZlbDogTWV0YWRhdGE=
kind: Secret
metadata:
annotations:
rke.cattle.io/object-authorized-for-clusters: cluster1
name: <name1>
namespace: fleet-default
```
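The `audit-policy` value is base64-encoded; the value above decodes to a policy that logs all requests at the `Metadata` level. To inspect what a secret contains, here is a sketch assuming `kubectl` access to the cluster:

```bash
kubectl -n fleet-default get secret <name1> \
  -o jsonpath='{.data.audit-policy}' | base64 -d
```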
Enable and configure the audit log by editing the cluster in YAML, and utilizing the `machineSelectorFiles` and `machineGlobalConfig` directives.
Example:
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
rkeConfig:
machineGlobalConfig:
kube-apiserver-arg:
- audit-policy-file=<customized-path>/dev-audit-policy.yaml
- audit-log-path=<customized-path>/dev-audit.logs
machineSelectorFiles:
- fileSources:
- configMap:
name: ''
secret:
items:
- key: audit-policy
path: <customized-path>/dev-audit-policy.yaml
name: dev-audit-policy
machineLabelSelector:
matchLabels:
rke.cattle.io/control-plane-role: 'true'
```
:::tip
You can also use the directive `machineSelectorConfig` with proper machineLabelSelectors to achieve the same effect.
:::
For more information about cluster configuration, refer to the [RKE2 cluster configuration reference](../../reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md) pages.
</TabItem>
<TabItem value="K3s">
:::note
This feature is available in Rancher v2.7.2 and later.
:::
You can use `machineSelectorFiles` to deliver the audit policy file to the control plane nodes, and `machineGlobalConfig` to set the options on kube-apiserver.
As a prerequisite, you must create a [secret](../new-user-guides/kubernetes-resources-setup/secrets.md) or [configmap](../new-user-guides/kubernetes-resources-setup/configmaps.md) to be the source of the audit policy.
The secret or configmap must meet the following requirements:
1. It must be in the `fleet-default` namespace where the Cluster object exists.
2. It must have the annotation `rke.cattle.io/object-authorized-for-clusters: <cluster-name1>,<cluster-name2>` which permits the target clusters to use it.
:::tip
Rancher Dashboard provides an easy-to-use form for creating the [secret](../new-user-guides/kubernetes-resources-setup/secrets.md) or [configmap](../new-user-guides/kubernetes-resources-setup/configmaps.md).
:::
Example:
```yaml
apiVersion: v1
data:
audit-policy: >-
IyBMb2cgYWxsIHJlcXVlc3RzIGF0IHRoZSBNZXRhZGF0YSBsZXZlbC4KYXBpVmVyc2lvbjogYXVkaXQuazhzLmlvL3YxCmtpbmQ6IFBvbGljeQpydWxlczoKLSBsZXZlbDogTWV0YWRhdGE=
kind: Secret
metadata:
annotations:
rke.cattle.io/object-authorized-for-clusters: cluster1
name: <name1>
namespace: fleet-default
```
Enable and configure the audit log by editing the cluster in YAML, and utilizing the `machineSelectorFiles` and `machineGlobalConfig` directives.
Example:
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
rkeConfig:
machineGlobalConfig:
kube-apiserver-arg:
- audit-policy-file=<customized-path>/dev-audit-policy.yaml
- audit-log-path=<customized-path>/dev-audit.logs
machineSelectorFiles:
- fileSources:
- configMap:
name: ''
secret:
items:
- key: audit-policy
path: <customized-path>/dev-audit-policy.yaml
name: dev-audit-policy
machineLabelSelector:
matchLabels:
rke.cattle.io/control-plane-role: 'true'
```
:::tip
You can also use the directive `machineSelectorConfig` with proper machineLabelSelectors to achieve the same effect.
:::
For more information about cluster configuration, refer to the [K3s cluster configuration reference](../../reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md) pages.
</TabItem>
<TabItem value="RKE1">
The audit log can be enabled and configured by editing the cluster with YAML.
When the audit log is enabled, RKE1 default values will be applied.
```yaml
#
# Rancher Config
#
rancher_kubernetes_engine_config:
services:
kube-api:
audit_log:
enabled: true
```
You can customize the audit log by using the configuration directive.
```yaml
#
# Rancher Config
#
rancher_kubernetes_engine_config:
services:
kube-api:
audit_log:
enabled: true
configuration:
max_age: 6
max_backup: 6
max_size: 110
path: /var/log/kube-audit/audit-log.json
format: json
policy:
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
omitStages:
- "RequestReceived"
rules:
# Log pod changes at RequestResponse level
- level: RequestResponse
resources:
- group: ""
# Resource "pods" doesn't match requests to any subresource of pods,
# which is consistent with the RBAC policy.
resources: ["pods"]
# Log "pods/log", "pods/status" at Metadata level
- level: Metadata
resources:
- group: ""
resources: ["pods/log", "pods/status"]
```
For configuration details, refer to the official [RKE1 documentation](https://rke.docs.rancher.com/config-options/audit-log).
</TabItem>
</Tabs>
@@ -0,0 +1,562 @@
---
title: Enabling the API Audit Log to Record System Events
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-api-audit-log"/>
</head>
You can enable the API audit log to record the sequence of system events initiated by individual users. You can see what happened, when it happened, who initiated it, and what cluster it affected. When you enable this feature, all requests to the Rancher API and all responses from it are written to a log.
You can enable API Auditing during Rancher installation or upgrade.
## Enabling API Audit Log
The audit log is enabled and configured by passing environment variables to the Rancher server container. See the following to enable it on your installation:
- [Docker Install](../../reference-guides/single-node-rancher-in-docker/advanced-options.md#api-audit-log)
- [Kubernetes Install](../../getting-started/installation-and-upgrade/installation-references/helm-chart-options.md#api-audit-log)
## API Audit Log Options
The parameters below define rules about what the audit log records and what data it includes:
| Parameter | Description |
| ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `AUDIT_LEVEL` | `0` - Disable audit log (default setting).<br/>`1` - Log event metadata.<br/>`2` - Log event metadata and request body.<br/>`3` - Log event metadata, request body, and response body. Each log transaction for a request/response pair uses the same `auditID` value.<br/><br/>See [Audit Level Logging](#audit-log-levels) for a table that displays what each setting logs. |
| `AUDIT_LOG_PATH` | Log path for Rancher Server API. Default path is `/var/log/auditlog/rancher-api-audit.log`. You can mount the log directory to the host. <br/><br/>Usage Example: `AUDIT_LOG_PATH=/my/custom/path/`<br/> |
| `AUDIT_LOG_MAXAGE` | Defines the maximum number of days to retain old audit log files. Default is 10 days. |
| `AUDIT_LOG_MAXBACKUP` | Defines the maximum number of audit log files to retain. Default is 10. |
| `AUDIT_LOG_MAXSIZE` | Defines the maximum size in megabytes of the audit log file before it gets rotated. Default size is 100M. |
<br/>
### Audit Log Levels
The following table displays what parts of API transactions are logged for each [`AUDIT_LEVEL`](#api-audit-log-options) setting.
| `AUDIT_LEVEL` Setting | Request Metadata | Request Body | Response Metadata | Response Body |
| --------------------- | ---------------- | ------------ | ----------------- | ------------- |
| `0` | | | | |
| `1` | ✓ | | | |
| `2` | ✓ | ✓ | | |
| `3` | ✓ | ✓ | ✓ | ✓ |
## Viewing API Audit Logs
### Docker Install
Share the `AUDIT_LOG_PATH` directory (Default: `/var/log/auditlog`) with the host system. The log can be parsed by standard CLI tools or forwarded on to a log collection tool like Fluentd, Filebeat, Logstash, etc.
### Kubernetes Install
Enabling the API Audit Log with the Helm chart install creates a `rancher-audit-log` sidecar container in the Rancher pod. This container streams the log to standard output (stdout). You can view the log as you would any container log.
The `rancher-audit-log` container is part of the `rancher` pod in the `cattle-system` namespace.
#### CLI
```bash
kubectl -n cattle-system logs -f rancher-84d886bdbb-s4s69 rancher-audit-log
```
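The pod name differs per installation. Assuming the standard `app: rancher` pod label set by the Helm chart, you can select the pod by label instead of looking up its generated name:

```bash
kubectl -n cattle-system logs -f -l app=rancher -c rancher-audit-log
```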
#### Shipping the Audit Log
You can enable Rancher's built-in log collection and shipping for the cluster to ship the audit log, along with other services' logs, to a supported collection endpoint. See [Rancher Tools - Logging](../../integrations-in-rancher/logging/logging.md) for details.
## Audit Log Samples
After you enable auditing, Rancher logs each API request or response as JSON. The following code samples show how to identify each API transaction.
### Metadata Level
If you set your `AUDIT_LEVEL` to `1`, Rancher logs the metadata header for every API request, but not the body. The header provides basic information about the API transaction, such as the transaction's ID, who initiated the transaction, the time it occurred, etc.
```json
{
"auditID": "30022177-9e2e-43d1-b0d0-06ef9d3db183",
"requestURI": "/v3/schemas",
"sourceIPs": ["::1"],
"user": {
"name": "user-f4tt2",
"group": ["system:authenticated"]
},
"verb": "GET",
"stage": "RequestReceived",
"stageTimestamp": "2018-07-20 10:22:43 +0800"
}
```
### Metadata and Request Body Level
If you set your `AUDIT_LEVEL` to `2`, Rancher logs the metadata header and body for every API request.
The code sample below depicts an API request, with both its metadata header and body.
```json
{
"auditID": "ef1d249e-bfac-4fd0-a61f-cbdcad53b9bb",
"requestURI": "/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx",
"sourceIPs": ["::1"],
"user": {
"name": "user-f4tt2",
"group": ["system:authenticated"]
},
"verb": "PUT",
"stage": "RequestReceived",
"stageTimestamp": "2018-07-20 10:28:08 +0800",
"requestBody": {
"hostIPC": false,
"hostNetwork": false,
"hostPID": false,
"paused": false,
"annotations": {},
"baseType": "workload",
"containers": [
{
"allowPrivilegeEscalation": false,
"image": "nginx",
"imagePullPolicy": "Always",
"initContainer": false,
"name": "nginx",
"ports": [
{
"containerPort": 80,
"dnsName": "nginx-nodeport",
"kind": "NodePort",
"name": "80tcp01",
"protocol": "TCP",
"sourcePort": 0,
"type": "/v3/project/schemas/containerPort"
}
],
"privileged": false,
"readOnly": false,
"resources": {
"type": "/v3/project/schemas/resourceRequirements",
"requests": {},
"limits": {}
},
"restartCount": 0,
"runAsNonRoot": false,
"stdin": true,
"stdinOnce": false,
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"tty": true,
"type": "/v3/project/schemas/container",
"environmentFrom": [],
"capAdd": [],
"capDrop": [],
"livenessProbe": null,
"volumeMounts": []
}
],
"created": "2018-07-18T07:34:16Z",
"createdTS": 1531899256000,
"creatorId": null,
"deploymentConfig": {
"maxSurge": 1,
"maxUnavailable": 0,
"minReadySeconds": 0,
"progressDeadlineSeconds": 600,
"revisionHistoryLimit": 10,
"strategy": "RollingUpdate"
},
"deploymentStatus": {
"availableReplicas": 1,
"conditions": [
{
"lastTransitionTime": "2018-07-18T07:34:38Z",
"lastTransitionTimeTS": 1531899278000,
"lastUpdateTime": "2018-07-18T07:34:38Z",
"lastUpdateTimeTS": 1531899278000,
"message": "Deployment has minimum availability.",
"reason": "MinimumReplicasAvailable",
"status": "True",
"type": "Available"
},
{
"lastTransitionTime": "2018-07-18T07:34:16Z",
"lastTransitionTimeTS": 1531899256000,
"lastUpdateTime": "2018-07-18T07:34:38Z",
"lastUpdateTimeTS": 1531899278000,
"message": "ReplicaSet \"nginx-64d85666f9\" has successfully progressed.",
"reason": "NewReplicaSetAvailable",
"status": "True",
"type": "Progressing"
}
],
"observedGeneration": 2,
"readyReplicas": 1,
"replicas": 1,
"type": "/v3/project/schemas/deploymentStatus",
"unavailableReplicas": 0,
"updatedReplicas": 1
},
"dnsPolicy": "ClusterFirst",
"id": "deployment:default:nginx",
"labels": {
"workload.user.cattle.io/workloadselector": "deployment-default-nginx"
},
"name": "nginx",
"namespaceId": "default",
"projectId": "c-bcz5t:p-fdr4s",
"publicEndpoints": [
{
"addresses": ["10.64.3.58"],
"allNodes": true,
"ingressId": null,
"nodeId": null,
"podId": null,
"port": 30917,
"protocol": "TCP",
"serviceId": "default:nginx-nodeport",
"type": "publicEndpoint"
}
],
"restartPolicy": "Always",
"scale": 1,
"schedulerName": "default-scheduler",
"selector": {
"matchLabels": {
"workload.user.cattle.io/workloadselector": "deployment-default-nginx"
},
"type": "/v3/project/schemas/labelSelector"
},
"state": "active",
"terminationGracePeriodSeconds": 30,
"transitioning": "no",
"transitioningMessage": "",
"type": "deployment",
"uuid": "f998037d-8a5c-11e8-a4cf-0245a7ebb0fd",
"workloadAnnotations": {
"deployment.kubernetes.io/revision": "1",
"field.cattle.io/creatorId": "user-f4tt2"
},
"workloadLabels": {
"workload.user.cattle.io/workloadselector": "deployment-default-nginx"
},
"scheduling": {
"node": {}
},
"description": "my description",
"volumes": []
}
}
```
### Metadata, Request Body, and Response Body Level
If you set your `AUDIT_LEVEL` to `3`, Rancher logs:
- The metadata header and body for every API request.
- The metadata header and body for every API response.
#### Request
The code sample below depicts an API request, with both its metadata header and body.
```json
{
"auditID": "a886fd9f-5d6b-4ae3-9a10-5bff8f3d68af",
"requestURI": "/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx",
"sourceIPs": ["::1"],
"user": {
"name": "user-f4tt2",
"group": ["system:authenticated"]
},
"verb": "PUT",
"stage": "RequestReceived",
"stageTimestamp": "2018-07-20 10:33:06 +0800",
"requestBody": {
"hostIPC": false,
"hostNetwork": false,
"hostPID": false,
"paused": false,
"annotations": {},
"baseType": "workload",
"containers": [
{
"allowPrivilegeEscalation": false,
"image": "nginx",
"imagePullPolicy": "Always",
"initContainer": false,
"name": "nginx",
"ports": [
{
"containerPort": 80,
"dnsName": "nginx-nodeport",
"kind": "NodePort",
"name": "80tcp01",
"protocol": "TCP",
"sourcePort": 0,
"type": "/v3/project/schemas/containerPort"
}
],
"privileged": false,
"readOnly": false,
"resources": {
"type": "/v3/project/schemas/resourceRequirements",
"requests": {},
"limits": {}
},
"restartCount": 0,
"runAsNonRoot": false,
"stdin": true,
"stdinOnce": false,
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"tty": true,
"type": "/v3/project/schemas/container",
"environmentFrom": [],
"capAdd": [],
"capDrop": [],
"livenessProbe": null,
"volumeMounts": []
}
],
"created": "2018-07-18T07:34:16Z",
"createdTS": 1531899256000,
"creatorId": null,
"deploymentConfig": {
"maxSurge": 1,
"maxUnavailable": 0,
"minReadySeconds": 0,
"progressDeadlineSeconds": 600,
"revisionHistoryLimit": 10,
"strategy": "RollingUpdate"
},
"deploymentStatus": {
"availableReplicas": 1,
"conditions": [
{
"lastTransitionTime": "2018-07-18T07:34:38Z",
"lastTransitionTimeTS": 1531899278000,
"lastUpdateTime": "2018-07-18T07:34:38Z",
"lastUpdateTimeTS": 1531899278000,
"message": "Deployment has minimum availability.",
"reason": "MinimumReplicasAvailable",
"status": "True",
"type": "Available"
},
{
"lastTransitionTime": "2018-07-18T07:34:16Z",
"lastTransitionTimeTS": 1531899256000,
"lastUpdateTime": "2018-07-18T07:34:38Z",
"lastUpdateTimeTS": 1531899278000,
"message": "ReplicaSet \"nginx-64d85666f9\" has successfully progressed.",
"reason": "NewReplicaSetAvailable",
"status": "True",
"type": "Progressing"
}
],
"observedGeneration": 2,
"readyReplicas": 1,
"replicas": 1,
"type": "/v3/project/schemas/deploymentStatus",
"unavailableReplicas": 0,
"updatedReplicas": 1
},
"dnsPolicy": "ClusterFirst",
"id": "deployment:default:nginx",
"labels": {
"workload.user.cattle.io/workloadselector": "deployment-default-nginx"
},
"name": "nginx",
"namespaceId": "default",
"projectId": "c-bcz5t:p-fdr4s",
"publicEndpoints": [
{
"addresses": ["10.64.3.58"],
"allNodes": true,
"ingressId": null,
"nodeId": null,
"podId": null,
"port": 30917,
"protocol": "TCP",
"serviceId": "default:nginx-nodeport",
"type": "publicEndpoint"
}
],
"restartPolicy": "Always",
"scale": 1,
"schedulerName": "default-scheduler",
"selector": {
"matchLabels": {
"workload.user.cattle.io/workloadselector": "deployment-default-nginx"
},
"type": "/v3/project/schemas/labelSelector"
},
"state": "active",
"terminationGracePeriodSeconds": 30,
"transitioning": "no",
"transitioningMessage": "",
"type": "deployment",
"uuid": "f998037d-8a5c-11e8-a4cf-0245a7ebb0fd",
"workloadAnnotations": {
"deployment.kubernetes.io/revision": "1",
"field.cattle.io/creatorId": "user-f4tt2"
},
"workloadLabels": {
"workload.user.cattle.io/workloadselector": "deployment-default-nginx"
},
"scheduling": {
"node": {}
},
"description": "my decript",
"volumes": []
}
}
```
#### Response
The code sample below depicts an API response, with both its metadata header and body.
```json
{
"auditID": "a886fd9f-5d6b-4ae3-9a10-5bff8f3d68af",
"responseStatus": "200",
"stage": "ResponseComplete",
"stageTimestamp": "2018-07-20 10:33:06 +0800",
"responseBody": {
"actionLinks": {
"pause": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx?action=pause",
"resume": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx?action=resume",
"rollback": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx?action=rollback"
},
"annotations": {},
"baseType": "workload",
"containers": [
{
"allowPrivilegeEscalation": false,
"image": "nginx",
"imagePullPolicy": "Always",
"initContainer": false,
"name": "nginx",
"ports": [
{
"containerPort": 80,
"dnsName": "nginx-nodeport",
"kind": "NodePort",
"name": "80tcp01",
"protocol": "TCP",
"sourcePort": 0,
"type": "/v3/project/schemas/containerPort"
}
],
"privileged": false,
"readOnly": false,
"resources": {
"type": "/v3/project/schemas/resourceRequirements"
},
"restartCount": 0,
"runAsNonRoot": false,
"stdin": true,
"stdinOnce": false,
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"tty": true,
"type": "/v3/project/schemas/container"
}
],
"created": "2018-07-18T07:34:16Z",
"createdTS": 1531899256000,
"creatorId": null,
"deploymentConfig": {
"maxSurge": 1,
"maxUnavailable": 0,
"minReadySeconds": 0,
"progressDeadlineSeconds": 600,
"revisionHistoryLimit": 10,
"strategy": "RollingUpdate"
},
"deploymentStatus": {
"availableReplicas": 1,
"conditions": [
{
"lastTransitionTime": "2018-07-18T07:34:38Z",
"lastTransitionTimeTS": 1531899278000,
"lastUpdateTime": "2018-07-18T07:34:38Z",
"lastUpdateTimeTS": 1531899278000,
"message": "Deployment has minimum availability.",
"reason": "MinimumReplicasAvailable",
"status": "True",
"type": "Available"
},
{
"lastTransitionTime": "2018-07-18T07:34:16Z",
"lastTransitionTimeTS": 1531899256000,
"lastUpdateTime": "2018-07-18T07:34:38Z",
"lastUpdateTimeTS": 1531899278000,
"message": "ReplicaSet \"nginx-64d85666f9\" has successfully progressed.",
"reason": "NewReplicaSetAvailable",
"status": "True",
"type": "Progressing"
}
],
"observedGeneration": 2,
"readyReplicas": 1,
"replicas": 1,
"type": "/v3/project/schemas/deploymentStatus",
"unavailableReplicas": 0,
"updatedReplicas": 1
},
"dnsPolicy": "ClusterFirst",
"hostIPC": false,
"hostNetwork": false,
"hostPID": false,
"id": "deployment:default:nginx",
"labels": {
"workload.user.cattle.io/workloadselector": "deployment-default-nginx"
},
"links": {
"remove": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx",
"revisions": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx/revisions",
"self": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx",
"update": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx",
"yaml": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx/yaml"
},
"name": "nginx",
"namespaceId": "default",
"paused": false,
"projectId": "c-bcz5t:p-fdr4s",
"publicEndpoints": [
{
"addresses": ["10.64.3.58"],
"allNodes": true,
"ingressId": null,
"nodeId": null,
"podId": null,
"port": 30917,
"protocol": "TCP",
"serviceId": "default:nginx-nodeport"
}
],
"restartPolicy": "Always",
"scale": 1,
"schedulerName": "default-scheduler",
"selector": {
"matchLabels": {
"workload.user.cattle.io/workloadselector": "deployment-default-nginx"
},
"type": "/v3/project/schemas/labelSelector"
},
"state": "active",
"terminationGracePeriodSeconds": 30,
"transitioning": "no",
"transitioningMessage": "",
"type": "deployment",
"uuid": "f998037d-8a5c-11e8-a4cf-0245a7ebb0fd",
"workloadAnnotations": {
"deployment.kubernetes.io/revision": "1",
"field.cattle.io/creatorId": "user-f4tt2"
},
"workloadLabels": {
"workload.user.cattle.io/workloadselector": "deployment-default-nginx"
}
}
}
```
@@ -0,0 +1,17 @@
---
title: Continuous Delivery
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features/continuous-delivery"/>
</head>
[Continuous Delivery with Fleet](../../../integrations-in-rancher/fleet/fleet.md) comes preinstalled in Rancher and can't be fully disabled. However, the Fleet feature for GitOps continuous delivery may be disabled using the `continuous-delivery` feature flag.
To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.](enable-experimental-features.md)
Environment Variable Key | Default Value | Description
---|---|---
`continuous-delivery` | `true` | Setting this flag to `false` disables the GitOps continuous delivery feature of Fleet.
If Fleet was disabled in Rancher v2.5.x, it will become enabled if Rancher is upgraded to v2.6.x. Only the continuous delivery part of Fleet can be disabled. When `continuous-delivery` is disabled, the `gitjob` deployment is no longer deployed into the Rancher server's local cluster, and `continuous-delivery` is not shown in the Rancher UI.
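For example, following the Helm pattern described on the experimental-features page, you could set the flag at install time. The hostname is a placeholder:

```bash
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set 'extraEnv[0].name=CATTLE_FEATURES' \
  --set 'extraEnv[0].value=continuous-delivery=false'
```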
@@ -0,0 +1,126 @@
---
title: Enabling Experimental Features
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features"/>
</head>
Rancher includes some features that are experimental and disabled by default. You might want to enable these features, for example, if you decide that the benefits of using an [unsupported storage type](unsupported-storage-drivers.md) outweigh the risk of using an untested feature. Feature flags were introduced to let you try these features that are not enabled by default.
The features can be enabled in three ways:
- [Enable features when starting Rancher.](#enabling-features-when-starting-rancher) When installing Rancher with a CLI, you can use a feature flag to enable a feature by default.
- [Enable features from the Rancher UI](#enabling-features-with-the-rancher-ui) by going to the **Settings** page.
- [Enable features with the Rancher API](#enabling-features-with-the-rancher-api) after installing Rancher.
Each feature has two values:
- A default value, which can be configured with a flag or environment variable from the command line
- A set value, which can be configured with the Rancher API or UI
If no value has been set, Rancher uses the default value.
Because the API sets the actual value and the command line sets only the default value, enabling or disabling a feature with the API or UI overrides any value set with the command line.
For example, if you install Rancher and then set a feature flag to true with the Rancher API, and later upgrade Rancher with a command that sets the feature flag to false, the default value becomes false, but the feature stays enabled because it was set with the Rancher API. If you then deleted the set value (true) with the Rancher API, setting it to NULL, the default value (false) would take effect. See the [feature flags page](../../../getting-started/installation-and-upgrade/installation-references/feature-flags.md) for more information.
## Enabling Features when Starting Rancher
When you install Rancher, enable the feature you want with a feature flag. The command is different depending on whether you are installing Rancher on a single node or if you are doing a Kubernetes Installation of Rancher.
### Enabling Features for Kubernetes Installs
:::note
Values set from the Rancher API will override the value passed in through the command line.
:::
When installing Rancher with a Helm chart, use the `--set` option. In the example below, two features are enabled by passing the feature flag names in a comma-separated list:
For Kubernetes v1.25 or later, set `global.cattle.psp.enabled` to `false` when using Rancher v2.7.2-v2.7.4. This is not necessary for Rancher v2.7.5 and above, but you can still manually set the option if you choose.
```
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set 'extraEnv[0].name=CATTLE_FEATURES' \
  --set 'extraEnv[0].value=<FEATURE-FLAG-NAME-1>=true,<FEATURE-FLAG-NAME-2>=true'
```
:::note
If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
:::
### Enabling Features for Air Gap Installs
To perform an [air gap installation of Rancher](../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha.md), add a Helm chart repository and download a Helm chart, then install Rancher with Helm.
When you install the Helm chart, pass the feature flag names in a comma-separated list, as in the following example. The `systemDefaultRegistry` option sets a default private registry to be used in Rancher, and `useBundledSystemChart=true` uses the packaged Rancher system charts:
```
helm install rancher ./rancher-<VERSION>.tgz \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set ingress.tls.source=secret \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true \
  --set 'extraEnv[0].name=CATTLE_FEATURES' \
  --set 'extraEnv[0].value=<FEATURE-FLAG-NAME-1>=true,<FEATURE-FLAG-NAME-2>=true'
```
### Enabling Features for Docker Installs
When installing Rancher with Docker, use the `--features` option. In the example below, two features are enabled by passing the feature flag names in a comma-separated list:
```
docker run -d -p 80:80 -p 443:443 \
  --restart=unless-stopped \
  --privileged \
  rancher/rancher:latest \
  --features=<FEATURE-FLAG-NAME-1>=true,<FEATURE-FLAG-NAME-2>=true
```
## Enabling Features with the Rancher UI
1. In the upper left corner, click **☰ > Global Settings**.
1. Click **Feature Flags**.
1. To enable a feature, go to the disabled feature you want to enable and click **⋮ > Activate**.
**Result:** The feature is enabled.
### Disabling Features with the Rancher UI
1. In the upper left corner, click **☰ > Global Settings**.
1. Click **Feature Flags**. You will see a list of experimental features.
1. To disable a feature, go to the enabled feature you want to disable and click **⋮ > Deactivate**.
**Result:** The feature is disabled.
## Enabling Features with the Rancher API
1. Go to `<RANCHER-SERVER-URL>/v3/features`.
1. In the `data` section, you will see an array containing all of the features that can be turned on with feature flags. The name of the feature is in the `id` field. Click the name of the feature you want to enable.
1. In the upper left corner of the screen, under **Operations,** click **Edit**.
1. In the **Value** drop-down menu, click **True**.
1. Click **Show Request**.
1. Click **Send Request**.
1. Click **Close**.
**Result:** The feature is enabled.
### Disabling Features with the Rancher API
1. Go to `<RANCHER-SERVER-URL>/v3/features`.
1. In the `data` section, you will see an array containing all of the features that can be turned on with feature flags. The name of the feature is in the `id` field. Click the name of the feature you want to disable.
1. In the upper left corner of the screen, under **Operations,** click **Edit**.
1. In the **Value** drop-down menu, click **False**.
1. Click **Show Request**.
1. Click **Send Request**.
1. Click **Close**.
**Result:** The feature is disabled.
@@ -0,0 +1,36 @@
---
title: UI for Istio Virtual Services and Destination Rules
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features/istio-traffic-management-features"/>
</head>
This feature enables a UI that lets you create, read, update and delete virtual services and destination rules, which are traffic management features of Istio.
> **Prerequisite:** Turning on this feature does not enable Istio. A cluster administrator needs to [enable Istio for the cluster](../istio-setup-guide/istio-setup-guide.md) in order to use the feature.
To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.](enable-experimental-features.md)
Environment Variable Key | Default Value | Status | Available as of
---|---|---|---
`istio-virtual-service-ui` |`false` | Experimental | v2.3.0
`istio-virtual-service-ui` | `true` | GA | v2.3.2
## About this Feature
A central advantage of Istio's traffic management features is that they allow dynamic request routing, which is useful for canary deployments, blue/green deployments, or A/B testing.
When enabled, this feature turns on a page that lets you configure some traffic management features of Istio using the Rancher UI. Without this feature, you need to use `kubectl` to manage traffic with Istio.
The feature enables two UI tabs: one tab for **Virtual Services** and another for **Destination Rules**.
- **Virtual services** intercept and direct traffic to your Kubernetes services, allowing you to direct percentages of traffic from a request to different services. You can use them to define a set of routing rules to apply when a host is addressed. For details, refer to the [Istio documentation.](https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/)
- **Destination rules** serve as the single source of truth about which service versions are available to receive traffic from virtual services. You can use these resources to define policies that apply to traffic that is intended for a service after routing has occurred. For details, refer to the [Istio documentation.](https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule)
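For example, a virtual service that splits traffic between two subsets of a service might look like the following sketch. It uses standard Istio `networking.istio.io/v1alpha3` syntax; the service and subset names are examples:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90 # 90% of traffic continues to subset v1
        - destination:
            host: reviews
            subset: v2
          weight: 10 # 10% canary traffic to subset v2
```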
To see these tabs,
1. Click **☰ > Cluster Management**.
1. Go to the cluster where Istio is installed and click **Explore**.
1. In the left navigation bar, click **Istio**.
1. You will see tabs for **Kiali** and **Jaeger**. From the left navigation bar, you can view and configure **Virtual Services** and **Destination Rules**.
@@ -0,0 +1,49 @@
---
title: "Running on ARM64 (Experimental)"
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features/rancher-on-arm64"/>
</head>
:::caution
Running on an ARM64 platform is currently an experimental feature and is not yet officially supported in Rancher. Therefore, we do not recommend using ARM64 based nodes in a production environment.
:::
The following options are available when using an ARM64 platform:
- Running Rancher on ARM64 based node(s)
- Only for Docker Install. Please note that the following installation command replaces the examples found in the [Docker Install link](../../../getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md):
```
# In the last line `rancher/rancher:vX.Y.Z`, be certain to replace "X.Y.Z" with a released version in which ARM64 builds exist. For example, if your matching version is v2.5.8, you would fill in this line with `rancher/rancher:v2.5.8`.
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
--privileged \
rancher/rancher:vX.Y.Z
```
:::note
To check if your specific released version is compatible with the ARM64 architecture, you may navigate to your
version's release notes in the following two ways:
- Manually find your version using https://github.com/rancher/rancher/releases.
- Go directly to your version using the tag and the specific version number. If you plan to use v2.5.8, for example, you may navigate to https://github.com/rancher/rancher/releases/tag/v2.5.8.
:::
- Creating a custom cluster and adding ARM64 based node(s)
- Kubernetes cluster version must be 1.12 or higher
- CNI Network Provider must be [Flannel](../../../faq/container-network-interface-providers.md#flannel)
- Importing clusters that contain ARM64 based nodes
- Kubernetes cluster version must be 1.12 or higher
Please see [Cluster Options](../../../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md) for how to configure the cluster options.
The following features are not tested:
- Monitoring, alerts, notifiers, pipelines and logging
- Launching apps from the catalog
@@ -0,0 +1,43 @@
---
title: Allowing Unsupported Storage Drivers
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features/unsupported-storage-drivers"/>
</head>
This feature allows you to use types for storage providers and provisioners that are not enabled by default.
To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.](enable-experimental-features.md)
Environment Variable Key | Default Value | Description
---|---|---
`unsupported-storage-drivers` | `false` | This feature enables types for storage providers and provisioners that are not enabled by default.
### Types for Persistent Volume Plugins that are Enabled by Default
Below is a list of storage types for persistent volume plugins that are enabled by default. When enabling this feature flag, any persistent volume plugins that are not on this list are considered experimental and unsupported:
Name | Plugin
--------|----------
Amazon EBS Disk | `aws-ebs`
AzureFile | `azure-file`
AzureDisk | `azure-disk`
Google Persistent Disk | `gce-pd`
Longhorn | `flex-volume-longhorn`
VMware vSphere Volume | `vsphere-volume`
Local | `local`
Network File System | `nfs`
hostPath | `host-path`
### Types for StorageClass that are Enabled by Default
Below is a list of storage types for a StorageClass that are enabled by default. When enabling this feature flag, any storage types that are not on this list are considered experimental and unsupported:
Name | Plugin
--------|--------
Amazon EBS Disk | `aws-ebs`
AzureFile | `azure-file`
AzureDisk | `azure-disk`
Google Persistent Disk | `gce-pd`
Longhorn | `flex-volume-longhorn`
VMware vSphere Volume | `vsphere-volume`
Local | `local`
@@ -0,0 +1,33 @@
---
title: Enable Istio in the Cluster
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-cluster"/>
</head>
:::note Prerequisites:
- Only a user with the `cluster-admin` [Kubernetes default role](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) assigned can configure and install Istio in a Kubernetes cluster.
- If you have pod security policies, you will need to install Istio with the CNI enabled. For details, see [this section.](../../../integrations-in-rancher/istio/configuration-options/pod-security-policies.md)
- To install Istio on an RKE2 cluster, additional steps are required. For details, see [this section.](../../../integrations-in-rancher/istio/configuration-options/install-istio-on-rke2-cluster.md)
- To install Istio in a cluster where project network isolation is enabled, additional steps are required. For details, see [this section.](../../../integrations-in-rancher/istio/configuration-options/project-network-isolation.md)
:::
1. Click **☰ > Cluster Management**.
1. Go to the cluster where you want to enable Istio and click **Explore**.
1. Click **Apps**.
1. Click **Charts**.
1. Click **Istio**.
1. If you have not already installed your own monitoring app, you will be prompted to install the rancher-monitoring app. Optional: Set your Selector or Scrape Config options during the rancher-monitoring app install.
1. Optional: Configure member access and [resource limits](../../../integrations-in-rancher/istio/cpu-and-memory-allocations.md) for the Istio components. Ensure you have enough resources on your worker nodes to enable Istio.
1. Optional: Make additional configuration changes to values.yaml if needed.
1. Optional: Add further resources or configuration via the [overlay file](../../../integrations-in-rancher/istio/configuration-options/configuration-options.md#overlay-file).
1. Click **Install**.
**Result:** Istio is installed at the cluster level.
## Additional Config Options
For more information on configuring Istio, refer to the [configuration reference.](../../../integrations-in-rancher/istio/configuration-options/configuration-options.md)
@@ -0,0 +1,56 @@
---
title: Enable Istio in a Namespace
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/istio-setup-guide/enable-istio-in-namespace"/>
</head>
You will need to manually enable Istio in each namespace that you want to be tracked or controlled by Istio. When Istio is enabled in a namespace, the Envoy sidecar proxy will be automatically injected into all new workloads that are deployed in the namespace.
This namespace setting will only affect new workloads in the namespace. Any preexisting workloads will need to be re-deployed to leverage the sidecar auto injection.
:::note Prerequisite:
To enable Istio in a namespace, the cluster must have Istio installed.
:::
1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. Click **Cluster > Projects/Namespaces**.
1. Go to the namespace where you want to enable Istio and click **⋮ > Enable Istio Auto Injection**. Alternately, click the namespace, and then on the namespace detail page, click **⋮ > Enable Istio Auto Injection**.
**Result:** The namespace now has the label `istio-injection=enabled`. All new workloads deployed in this namespace will have the Istio sidecar injected by default.
### Verifying that Automatic Istio Sidecar Injection is Enabled
To verify that Istio is enabled, deploy a hello-world workload in the namespace. Go to the workload and click the pod name. In the **Containers** section, you should see the `istio-proxy` container.
### Excluding Workloads from Being Injected with the Istio Sidecar
If you need to exclude a workload from getting injected with the Istio sidecar, use the following annotation on the workload:
```
sidecar.istio.io/inject: "false"
```
To add the annotation to a workload,
1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. Click **Workload**.
1. Go to the workload that should not have the sidecar and edit it as YAML.
1. Add the key-value pair `sidecar.istio.io/inject: "false"` as an annotation on the workload.
1. Click **Save**.
**Result:** The Istio sidecar will not be injected into the workload.
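For example, in a Deployment, the annotation belongs on the pod template, so that it applies to the pods the workload creates. This is an illustrative sketch; the workload name and image are hypothetical:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-workload               # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-workload
  template:
    metadata:
      labels:
        app: my-workload
      annotations:
        sidecar.istio.io/inject: "false"   # excludes these pods from sidecar injection
    spec:
      containers:
      - name: my-workload
        image: nginx:1.25          # hypothetical image
```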
:::note
If a Job you deployed is not completing, you will need to add this annotation to its pods using the provided steps. Since Istio sidecars run indefinitely, a Job cannot be considered complete even after its task has finished.
:::
### [Next: Add Deployments with the Istio Sidecar ](use-istio-sidecar.md)
@@ -0,0 +1,30 @@
---
title: Generate and View Traffic from Istio
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/istio-setup-guide/generate-and-view-traffic"/>
</head>
This section describes how to view the traffic that is being managed by Istio.
## The Kiali Traffic Graph
The Istio overview page provides a link to the Kiali dashboard. From the Kiali dashboard, you are able to view graphs for each namespace. The Kiali graph provides a powerful way to visualize the topology of your Istio service mesh. It shows you which services communicate with each other.
:::note Prerequisites:
To enable traffic to show up in the graph, ensure you have Prometheus installed in the cluster. The rancher-istio chart installs Kiali configured by default to work with the rancher-monitoring chart. You can use rancher-monitoring or install your own monitoring solution. Optional: You can change how data scraping occurs by setting the [Selectors & Scrape Configs](../../../integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md) options.
:::
To see the traffic graph,
1. In the cluster where Istio is installed, click **Istio** in the left navigation bar.
1. Click the **Kiali** link.
1. Click on **Graph** in the side nav.
1. Change the namespace in the **Namespace** dropdown to view the traffic for each namespace.
If you refresh the URL to the BookInfo app several times, you should be able to see green arrows on the Kiali graph showing traffic to `v1` and `v3` of the `reviews` service. The control panel on the right side of the graph lets you configure details including how many minutes of the most recent traffic should be shown on the graph.
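If you would rather generate a steady stream of requests than refresh manually, a simple loop like the following sketch works; replace the placeholders with your ingress gateway's address:
```
# Sends one request per second to the BookInfo product page.
while true; do
  curl -s -o /dev/null "http://<ingress-host>:<ingress-port>/productpage"
  sleep 1
done
```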
For additional tools and visualizations, you can go to the Grafana and Prometheus dashboards from the **Monitoring Overview** page.
@@ -0,0 +1,34 @@
---
title: Istio Setup Guides
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/istio-setup-guide"/>
</head>
This section describes how to enable Istio and start using it in your projects.
If you use Istio for traffic management, you will need to allow external traffic to the cluster, which requires following all of the steps below.
## Prerequisites
This guide assumes you have already [installed Rancher,](../../../getting-started/installation-and-upgrade/installation-and-upgrade.md) and you have already [provisioned a separate Kubernetes cluster](../../new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md) on which you will install Istio.
The nodes in your cluster must meet the [CPU and memory requirements.](../../../integrations-in-rancher/istio/cpu-and-memory-allocations.md)
The workloads and services that you want to be controlled by Istio must meet [Istio's requirements.](https://istio.io/docs/setup/additional-setup/requirements/)
## Install
:::tip Quick Setup Tip:
If you don't need external traffic to reach Istio, and you just want to set up Istio for monitoring and tracing traffic within the cluster, skip the steps for [setting up the Istio gateway](set-up-istio-gateway.md) and [setting up Istio's components for traffic management.](set-up-traffic-management.md)
:::
1. [Enable Istio in the cluster.](enable-istio-in-cluster.md)
1. [Enable Istio in all the namespaces where you want to use it.](enable-istio-in-namespace.md)
1. [Add deployments and services that have the Istio sidecar injected.](use-istio-sidecar.md)
1. [Set up the Istio gateway. ](set-up-istio-gateway.md)
1. [Set up Istio's components for traffic management.](set-up-traffic-management.md)
1. [Generate traffic and see Istio in action.](generate-and-view-traffic.md)
@@ -0,0 +1,150 @@
---
title: Set up the Istio Gateway
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/istio-setup-guide/set-up-istio-gateway"/>
</head>
The gateway to each cluster can have its own port or load balancer, which is unrelated to a service mesh. By default, each Rancher-provisioned cluster has one NGINX ingress controller allowing traffic into the cluster.
You can use the NGINX ingress controller with or without Istio installed. If this is the only gateway to your cluster, Istio will be able to route traffic from service to service, but Istio will not be able to receive traffic from outside the cluster.
To allow Istio to receive external traffic, you need to enable Istio's gateway, which works as a north-south proxy for external traffic. When you enable the Istio gateway, the result is that your cluster will have two Ingresses.
You will also need to set up a Kubernetes gateway for your services. This Kubernetes resource points to Istio's implementation of the ingress gateway to the cluster.
You can route traffic into the service mesh with a load balancer or use Istio's NodePort gateway. This section describes how to set up the NodePort gateway.
For more information on the Istio gateway, refer to the [Istio documentation.](https://istio.io/docs/reference/config/networking/v1alpha3/gateway/)
![In an Istio-enabled cluster, you can have two Ingresses: the default Nginx Ingress, and the default Istio controller.](/img/istio-ingress.svg)
## Enable an Istio Gateway
The ingress gateway is a Kubernetes service that will be deployed in your cluster. The Istio Gateway allows for more extensive customization and flexibility.
1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. In the left navigation bar, click **Istio > Gateways**.
1. Click **Create from Yaml**.
1. Paste your Istio Gateway yaml, or click **Read from File**.
1. Click **Create**.
**Result:** The gateway is deployed and will now route traffic according to the applied rules.
## Example Istio Gateway
We added the BookInfo app deployments and services when going through the Workloads example. Next, we add an Istio Gateway so that the app is accessible from outside your cluster.
1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. In the left navigation bar, click **Istio > Gateways**.
1. Click **Create from Yaml**.
1. Copy and paste the Gateway yaml provided below.
1. Click **Create**.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
```
Then to deploy the VirtualService that provides the traffic routing for the Gateway:
1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. In the left navigation bar, click **Istio > VirtualServices**.
1. Click **Create from Yaml**.
1. Copy and paste the VirtualService yaml provided below.
1. Click **Create**.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo
spec:
hosts:
- "*"
gateways:
- bookinfo-gateway
http:
- match:
- uri:
exact: /productpage
- uri:
prefix: /static
- uri:
exact: /login
- uri:
exact: /logout
- uri:
prefix: /api/v1/products
route:
- destination:
host: productpage
port:
number: 9080
```
**Result:** You have configured your gateway resource so that Istio can receive traffic from outside the cluster.
Confirm that the resource exists by running:
```
kubectl get gateway -A
```
The result should be something like this:
```
NAME AGE
bookinfo-gateway 64m
```
### Access the ProductPage Service from a Web Browser
To test whether the BookInfo app deployed correctly, you can view the app in a web browser using the Istio controller IP and port, combined with the request path specified in your Kubernetes gateway resource:
`http://<IP of Istio controller>:<Port of istio controller>/productpage`
To get the ingress gateway URL and port,
1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. In the left navigation bar, click **Workload**.
1. Scroll down to the `istio-system` namespace.
1. Within `istio-system`, there is a workload named `istio-ingressgateway`. Under the name of this workload, you should see links, such as `80/tcp`.
1. Click one of those links. This should show you the URL of the ingress gateway in your web browser. Append `/productpage` to the URL.
**Result:** You should see the BookInfo app in the web browser.
For help inspecting the Istio controller URL and ports, try the commands in the [Istio documentation.](https://istio.io/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports)
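For example, on a cluster using the NodePort gateway, commands similar to the following sketch (adapted from the Istio documentation, and assuming the default `istio-ingressgateway` service in the `istio-system` namespace) surface the product page URL:
```
# Uses the first node's InternalIP; substitute an external address if your nodes have one.
export INGRESS_HOST=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
echo "http://$INGRESS_HOST:$INGRESS_PORT/productpage"
```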
## Troubleshooting
The [official Istio documentation](https://istio.io/docs/tasks/traffic-management/ingress/ingress-control/#troubleshooting) suggests `kubectl` commands to inspect the correct ingress host and ingress port for external requests.
### Confirming that the Kubernetes Gateway Matches Istio's Ingress Controller
You can try the steps in this section to make sure the Kubernetes gateway is configured properly.
In the gateway resource, the selector refers to Istio's default ingress controller by its label, in which the key of the label is `istio` and the value is `ingressgateway`. To make sure the label is appropriate for the gateway, do the following:
1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. In the left navigation bar, click **Workload**.
1. Scroll down to the `istio-system` namespace.
1. Within `istio-system`, there is a workload named `istio-ingressgateway`. Click the name of this workload and go to the **Labels and Annotations** section. You should see that it has the key `istio` and the value `ingressgateway`. This confirms that the selector in the Gateway resource matches Istio's default ingress controller.
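You can also confirm the selector match from the command line. The following command, assuming Istio's default `istio-system` namespace, should list the ingress gateway pods:
```
kubectl -n istio-system get pods -l istio=ingressgateway
```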
### [Next: Set up Istio's Components for Traffic Management](set-up-traffic-management.md)
@@ -0,0 +1,79 @@
---
title: Set up Istio's Components for Traffic Management
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/istio-setup-guide/set-up-traffic-management"/>
</head>
A central advantage of traffic management in Istio is that it allows dynamic request routing. Some common applications for dynamic request routing include canary deployments and blue/green deployments. The two key resources in Istio traffic management are *virtual services* and *destination rules*.
- [Virtual services](https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/) intercept and direct traffic to your Kubernetes services, allowing you to divide percentages of traffic from a request to different services. You can use them to define a set of routing rules to apply when a host is addressed.
- [Destination rules](https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule/) serve as the single source of truth about which service versions are available to receive traffic from virtual services. You can use these resources to define policies that apply to traffic that is intended for a service after routing has occurred.
This section describes how to add an example virtual service that corresponds to the `reviews` microservice in the sample BookInfo app. The purpose of this service is to divide traffic between two versions of the `reviews` service.
In this example, we take the traffic to the `reviews` service and intercept it so that 50 percent of it goes to `v1` of the service and 50 percent goes to `v3`.
After this virtual service is deployed, we will generate traffic and see from the Kiali visualization that traffic is being routed evenly between the two versions of the service.
To deploy the virtual service and destination rules for the `reviews` service,
1. Click **☰ > Cluster Management**.
1. Go to the cluster where Istio is installed and click **Explore**.
1. In the cluster where Istio is installed, click **Istio > DestinationRules** in the left navigation bar.
1. Click **Create from Yaml**.
1. Copy and paste the DestinationRule yaml provided below.
1. Click **Create**.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: reviews
spec:
host: reviews
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
- name: v3
labels:
version: v3
```
Then to deploy the VirtualService that provides the traffic routing based on the DestinationRule:
1. In the left navigation bar, click **Istio > VirtualServices**.
1. Click **Create from Yaml**.
1. Copy and paste the VirtualService yaml provided below.
1. Click **Create**.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1
weight: 50
- destination:
host: reviews
subset: v3
weight: 50
---
```
**Result:** When you generate traffic to this service (for example, by refreshing the ingress gateway URL), the Kiali traffic graph will reflect that traffic to the `reviews` service is divided evenly between `v1` and `v3`.
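As with the gateway, you can confirm that both resources exist by running:
```
kubectl get destinationrule,virtualservice -A
```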
### [Next: Generate and View Traffic](generate-and-view-traffic.md)
@@ -0,0 +1,363 @@
---
title: Add Deployments and Services with the Istio Sidecar
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/istio-setup-guide/use-istio-sidecar"/>
</head>
:::note Prerequisite:
To enable Istio for a workload, the cluster and namespace must have the Istio app installed.
:::
Enabling Istio in a namespace only enables automatic sidecar injection for new workloads. To enable the Envoy sidecar for existing workloads, you need to enable it manually for each workload.
To inject the Istio sidecar on an existing workload in the namespace,
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where the workload is deployed and click **Explore**.
1. Click **Workload**.
1. Go to the workload where you want to inject the Istio sidecar and click **⋮ > Redeploy**. When the workload is redeployed, it will have the Envoy sidecar automatically injected.
Wait a few minutes for the workload to be upgraded with the Istio sidecar. Click the workload and go to the **Containers** section. You should see `istio-proxy` alongside your original container, which means the Istio sidecar is enabled for the workload. Istio handles all of the wiring for the Envoy sidecar, and you can now enable Istio's features for the workload through its yaml.
### Add Deployments and Services
There are a few ways to add new **Deployments** in your namespace:
1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. Click **Workload**.
1. Click **Create**.
1. Click **Deployment**.
1. Fill out the form, or **Edit as Yaml**.
1. Click **Create**.
To add a **Service** to your namespace:
1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. Click **Service Discovery > Services**.
1. Click **Create**.
1. Select the type of service that you want.
1. Fill out the form, or **Edit as Yaml**.
1. Click **Create**.
You can also create deployments and services using the kubectl **shell**:
1. Run `kubectl create -f <name of service/deployment file>.yaml` if your file is stored locally in the cluster.
1. Or run `cat << EOF | kubectl apply -f -`, paste the file contents into the terminal, then type `EOF` to complete the command.
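For example, to create a simple service inline with the heredoc approach (the service name, selector, and ports here are illustrative):
```
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-service   # hypothetical name for illustration
spec:
  selector:
    app: my-app      # hypothetical label
  ports:
  - port: 80
    targetPort: 9080
EOF
```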
### Example Deployments and Services
Next we add the Kubernetes resources for the sample deployments and services for the BookInfo app in Istio's documentation.
1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. In the top navigation bar, open the kubectl shell.
1. Run `cat << EOF | kubectl apply -f -`
1. Copy the resources below into the shell.
1. Type `EOF` to complete the command.
This will set up the following sample resources from Istio's example BookInfo app:
Details service and deployment:
- A `details` Service
- A ServiceAccount for `bookinfo-details`
- A `details-v1` Deployment
Ratings service and deployment:
- A `ratings` Service
- A ServiceAccount for `bookinfo-ratings`
- A `ratings-v1` Deployment
Reviews service and deployments (three versions):
- A `reviews` Service
- A ServiceAccount for `bookinfo-reviews`
- A `reviews-v1` Deployment
- A `reviews-v2` Deployment
- A `reviews-v3` Deployment
Productpage service and deployment:
This is the main page of the app, which will be visible from a web browser. The other services will be called from this page.
- A `productpage` Service
- A ServiceAccount for `bookinfo-productpage`
- A `productpage-v1` Deployment
### Resource YAML
```yaml
# Copyright 2017 Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##################################################################################################
# Details service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: details
labels:
app: details
service: details
spec:
ports:
- port: 9080
name: http
selector:
app: details
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: bookinfo-details
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: details-v1
labels:
app: details
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: details
version: v1
template:
metadata:
labels:
app: details
version: v1
spec:
serviceAccountName: bookinfo-details
containers:
- name: details
image: docker.io/istio/examples-bookinfo-details-v1:1.15.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: ratings
labels:
app: ratings
service: ratings
spec:
ports:
- port: 9080
name: http
selector:
app: ratings
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: bookinfo-ratings
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ratings-v1
labels:
app: ratings
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: ratings
version: v1
template:
metadata:
labels:
app: ratings
version: v1
spec:
serviceAccountName: bookinfo-ratings
containers:
- name: ratings
image: docker.io/istio/examples-bookinfo-ratings-v1:1.15.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: reviews
labels:
app: reviews
service: reviews
spec:
ports:
- port: 9080
name: http
selector:
app: reviews
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: bookinfo-reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: reviews-v1
labels:
app: reviews
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: reviews
version: v1
template:
metadata:
labels:
app: reviews
version: v1
spec:
serviceAccountName: bookinfo-reviews
containers:
- name: reviews
image: docker.io/istio/examples-bookinfo-reviews-v1:1.15.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: reviews-v2
labels:
app: reviews
version: v2
spec:
replicas: 1
selector:
matchLabels:
app: reviews
version: v2
template:
metadata:
labels:
app: reviews
version: v2
spec:
serviceAccountName: bookinfo-reviews
containers:
- name: reviews
image: docker.io/istio/examples-bookinfo-reviews-v2:1.15.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: reviews-v3
labels:
app: reviews
version: v3
spec:
replicas: 1
selector:
matchLabels:
app: reviews
version: v3
template:
metadata:
labels:
app: reviews
version: v3
spec:
serviceAccountName: bookinfo-reviews
containers:
- name: reviews
image: docker.io/istio/examples-bookinfo-reviews-v3:1.15.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: productpage
labels:
app: productpage
service: productpage
spec:
ports:
- port: 9080
name: http
selector:
app: productpage
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: bookinfo-productpage
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: productpage-v1
labels:
app: productpage
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: productpage
version: v1
template:
metadata:
labels:
app: productpage
version: v1
spec:
serviceAccountName: bookinfo-productpage
containers:
- name: productpage
image: docker.io/istio/examples-bookinfo-productpage-v1:1.15.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
```
### [Next: Set up the Istio Gateway](set-up-istio-gateway.md)
@@ -0,0 +1,43 @@
---
title: Applying Pod Security Policies to Projects
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/manage-projects/manage-pod-security-policies"/>
</head>
:::note
These cluster options are only available for [clusters in which Rancher has launched Kubernetes](../../new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md).
:::
You can always assign a pod security policy (PSP) to an existing project if you didn't assign one during creation.
### Prerequisites
- Create a Pod Security Policy within Rancher. Before you can assign a default PSP to an existing project, you must have a PSP available for assignment. For instructions, see [Creating Pod Security Policies](../../new-user-guides/authentication-permissions-and-global-configuration/create-pod-security-policies.md).
- Assign a default Pod Security Policy to the project's cluster. You can't assign a PSP to a project until one is already applied to the cluster. For more information, see [the documentation about adding a pod security policy to a cluster](../../new-user-guides/manage-clusters/add-a-pod-security-policy.md).
### Applying a Pod Security Policy
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to apply a pod security policy and click **Explore**.
1. Click **Cluster > Projects/Namespaces**.
1. Find the project that you want to add a PSP to. From that project, select **⋮ > Edit Config**.
1. From the **Pod Security Policy** drop-down, select the PSP you want to apply to the project.
Assigning a PSP to a project will:
- Override the cluster's default PSP.
- Apply the PSP to the project.
- Apply the PSP to any namespaces you add to the project later.
1. Click **Save**.
**Result:** The PSP is applied to the project and any namespaces added to the project.
:::note
Any workloads that are already running in a cluster or project before a PSP is assigned will not be checked to determine if they comply with the PSP. Workloads would need to be cloned or upgraded to see if they pass the PSP.
:::
@@ -0,0 +1,74 @@
---
title: How Resource Quotas Work in Rancher Projects
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/manage-projects/manage-project-resource-quotas/about-project-resource-quotas"/>
</head>
Resource quotas in Rancher include the same functionality as the [native version of Kubernetes](https://kubernetes.io/docs/concepts/policy/resource-quotas/). However, in Rancher, resource quotas have been extended so that you can apply them to projects.
In a standard Kubernetes deployment, resource quotas are applied to individual namespaces. However, you cannot apply a quota to multiple namespaces with a single action. Instead, the resource quota must be applied to each namespace individually.
In the following diagram, a Kubernetes administrator is trying to enforce a resource quota without Rancher. The administrator wants to apply a resource quota that sets the same CPU and memory limit to every namespace in the cluster (`Namespace 1-4`). However, in the base version of Kubernetes, each namespace requires a unique resource quota. The administrator has to create four different resource quotas that have the same specs configured (`Resource Quota 1-4`) and apply them individually.
<sup>Base Kubernetes: Unique Resource Quotas Being Applied to Each Namespace</sup>
![Native Kubernetes Resource Quota Implementation](/img/kubernetes-resource-quota.svg)
Resource quotas are a little different in Rancher. In Rancher, you apply a resource quota to the project, and then the quota propagates to each namespace, after which Kubernetes enforces your limits using the native version of resource quotas. If you want to change the quota for a specific namespace, you can override it.
The resource quota includes two limits, which you set while creating or editing a project:
<a id="project-limits"></a>
- **Project Limits:**
This set of values configures a total limit for each specified resource shared among all namespaces in the project.
- **Namespace Default Limits:**
This set of values configures the default quota limit available for each namespace for each specified resource.
When a namespace is created in the project without overrides, this limit is automatically bound to the namespace and enforced.
In the following diagram, a Rancher administrator wants to apply a resource quota that sets the same CPU and memory limit for every namespace in their project (`Namespace 1-4`). However, in Rancher, the administrator can set a resource quota for the project (`Project Resource Quota`) rather than individual namespaces. This quota includes resource limits for both the entire project (`Project Limit`) and individual namespaces (`Namespace Default Limit`). Rancher then propagates the `Namespace Default Limit` quotas to each namespace (`Namespace Resource Quota`) when created.
<sup>Rancher: Resource Quotas Propagating to Each Namespace</sup>
![Rancher Resource Quota Implementation](/img/rancher-resource-quota.png)
Let's highlight some more nuanced functionality for namespaces created **_within_** the Rancher UI. If a quota is deleted at the project level, it will also be removed from all namespaces contained within that project, despite any overrides that may exist. Further, updating an existing namespace default limit for a quota at the project level will not result in that value being propagated to existing namespaces in the project; the updated value will only be applied to newly created namespaces in that project. To update a namespace default limit for existing namespaces you can delete and subsequently recreate the quota at the project level with the new default value. This will result in the new default value being applied to all existing namespaces in the project.
Before creating a namespace in a project, Rancher compares the amounts of the project's available resources and requested resources, regardless of whether they come from the default or overridden limits.
If the requested resources exceed the remaining capacity in the project for those resources, Rancher will assign the namespace the remaining capacity for that resource.
However, this is not the case with namespaces created **_outside_** of Rancher's UI. For namespaces created via `kubectl`, Rancher
will assign a resource quota that has a **zero** amount for any resource that requested more capacity than what remains in the project.
To create a namespace in an existing project via `kubectl`, use the `field.cattle.io/projectId` annotation. To override the default
requested quota limit, use the `field.cattle.io/resourceQuota` annotation.
Note that Rancher will only override limits for resources that are defined on the project quota.
```yaml
apiVersion: v1
kind: Namespace
metadata:
annotations:
field.cattle.io/projectId: [your-cluster-ID]:[your-project-ID]
field.cattle.io/resourceQuota: '{"limit":{"limitsCpu":"100m", "configMaps": "50"}}'
name: my-ns
```
In this example, if the project's quota does not include configMaps in its list of resources, then Rancher will ignore `configMaps` in this override.
Users are advised to create dedicated `ResourceQuota` objects in namespaces to configure additional custom limits for resources not defined on the project.
Resource quotas are native Kubernetes objects, and Rancher will ignore user-defined quotas in namespaces belonging to a project with a quota,
thus giving users more control.
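For example, a dedicated quota for a resource that the project quota does not cover might look like the following sketch; the name, namespace, and limit are illustrative:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: extra-limits     # hypothetical name
  namespace: my-ns
spec:
  hard:
    requests.ephemeral-storage: "10Gi"   # a resource not defined on the project quota
```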
The following table explains the key differences between the two quota types.
| Rancher Resource Quotas | Kubernetes Resource Quotas |
| ---------------------------------------------------------- | -------------------------------------------------------- |
| Applies to projects and namespace. | Applies to namespaces only. |
| Creates resource pool for all namespaces in project. | Applies static resource limits to individual namespaces. |
| Applies resource quotas to namespaces through propagation. | Applies only to the assigned namespace. |
@@ -0,0 +1,50 @@
---
title: Project Resource Quotas
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/manage-projects/manage-project-resource-quotas"/>
</head>
In situations where several teams share a cluster, one team may overconsume the resources available: CPU, memory, storage, services, Kubernetes objects like pods or secrets, and so on. To prevent this overconsumption, you can apply a _resource quota_, which is a Rancher feature that limits the resources available to a project or namespace.
This page is a how-to guide for creating resource quotas in existing projects.
Resource quotas can also be set when a new project is created. For details, refer to the section on [creating new projects.](../../../new-user-guides/manage-clusters/projects-and-namespaces.md#creating-projects)
Resource quotas in Rancher include the same functionality as the [native version of Kubernetes](https://kubernetes.io/docs/concepts/policy/resource-quotas/). In Rancher, resource quotas have been extended so that you can apply them to projects. For details on how resource quotas work with projects in Rancher, refer to [this page.](about-project-resource-quotas.md)
### Applying Resource Quotas to Existing Projects
Edit resource quotas when:
- You want to limit the resources that a project and its namespaces can use.
- You want to scale the resources available to a project up or down when a resource quota is already in effect.
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to apply a resource quota and click **Explore**.
1. Click **Cluster > Projects/Namespaces**.
1. Make sure that the **Projects/Namespaces** page is in **Group by Project** view mode.
![Screenshot highlighting the "Group by Project" icon, above the list of projects. It resembles a folder.](/img/edit-project-config-for-resource-quotas-group-by-project.png)
1. Find the project that you want to add a resource quota to, and select the **⋮** that's on the same row as the project's name.
![Screenshot highlighting triple dots icon at the end of the same row as the project name.](/img/edit-project-config-for-resource-quotas-dots.png)
1. Select **Edit Config**.
1. Expand **Resource Quotas** and click **Add Resource**. Alternatively, you can edit existing quotas.
1. Select a Resource Type. For more information on types, see the [quota type reference.](resource-quota-types.md)
1. Enter values for the **Project Limit** and the **Namespace Default Limit**.
| Field | Description |
| ----------------------- | -------------------------------------------------------------------------------------------------------- |
| Project Limit | The overall resource limit for the project. |
| Namespace Default Limit | The default resource limit available for each namespace. This limit is propagated to each namespace in the project. The combined limit of all project namespaces shouldn't exceed the project limit. |
1. **Optional:** Add more quotas.
1. Click **Create**.
**Result:** The resource quota is applied to your project and namespaces. When you add more namespaces in the future, Rancher validates that the project can accommodate the namespace. If the project can't allocate the resources, you may still create namespaces, but they will be given a resource quota of 0. Subsequently, Rancher will not allow you to create any resources restricted by this quota.
@@ -0,0 +1,38 @@
---
title: Overriding the Default Limit for a Namespace
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/manage-projects/manage-project-resource-quotas/override-default-limit-in-namespaces"/>
</head>
Although the **Namespace Default Limit** propagates from the project to each namespace when the namespace is created, in some cases, you may need to increase (or decrease) the quotas for a specific namespace. In this situation, you can override the default limits by editing the namespace.
In the diagram below, the Rancher administrator has a resource quota in effect for their project. However, the administrator wants to override the namespace limits for `Namespace 3` so that it has more resources available. Therefore, the administrator [raises the namespace limits](../../../new-user-guides/manage-clusters/projects-and-namespaces.md) for `Namespace 3` so that the namespace can access more resources.
<sup>Namespace Default Limit Override</sup>
![Namespace Default Limit Override](/img/rancher-resource-quota-override.svg)
### Editing Namespace Resource Quotas
If there is a resource quota configured for a project, you can override the namespace default limit to provide a specific namespace with access to more (or less) project resources.
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to edit a namespace resource quota and click **Explore**.
1. Click **Cluster > Projects/Namespaces**.
1. Find the namespace for which you want to edit the resource quota. Click **⋮ > Edit Config**.
1. Edit the resource limits. These limits determine the resources available to the namespace. The limits must be set within the configured project limits.
For more information about each **Resource Type**, see [the type reference](resource-quota-types.md).
:::note
- If a resource quota is not configured for the project, these options will not be available.
- If you enter limits that exceed the configured project limits, Rancher will not let you save your edits.
:::
**Result:** Your override is applied to the namespace's resource quota.
@@ -0,0 +1,31 @@
---
title: Resource Quota Type Reference
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/manage-projects/manage-project-resource-quotas/resource-quota-types"/>
</head>
When you create a resource quota, you are configuring the pool of resources available to the project. You can set limits for the following resource types.
| Resource Type | Description |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| CPU Limit* | The maximum amount of CPU (in [millicores](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu)) allocated to the project/namespace.<sup>1</sup> |
| CPU Reservation* | The minimum amount of CPU (in millicores) guaranteed to the project/namespace.<sup>1</sup> |
| Memory Limit* | The maximum amount of memory (in bytes) allocated to the project/namespace.<sup>1</sup> |
| Memory Reservation* | The minimum amount of memory (in bytes) guaranteed to the project/namespace.<sup>1</sup> |
| Storage Reservation | The minimum amount of storage (in gigabytes) guaranteed to the project/namespace. |
| Services Load Balancers  | The maximum number of load balancer services that can exist in the project/namespace. |
| Services Node Ports | The maximum number of node port services that can exist in the project/namespace. |
| Pods                     | The maximum number of pods that can exist in the project/namespace in a non-terminal state (i.e., pods where `.status.phase not in (Failed, Succeeded)`). |
| Services | The maximum number of services that can exist in the project/namespace. |
| ConfigMaps | The maximum number of ConfigMaps that can exist in the project/namespace. |
| Persistent Volume Claims | The maximum number of persistent volume claims that can exist in the project/namespace. |
| Replication Controllers  | The maximum number of replication controllers that can exist in the project/namespace. |
| Secrets | The maximum number of secrets that can exist in the project/namespace. |
:::note **<sup>*</sup>**
When setting resource quotas, if you set anything related to CPU or Memory (i.e. limits or reservations) on a project / namespace, all containers will require a respective CPU or Memory field set during creation. A container default resource limit can be set at the same time to avoid the need to explicitly set these limits for every workload. See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/policy/resource-quotas/#requests-vs-limits) for more details on why this is required.
:::
@@ -0,0 +1,44 @@
---
title: Setting Container Default Resource Limits
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/manage-projects/manage-project-resource-quotas/set-container-default-resource-limits"/>
</head>
When setting resource quotas, if you set anything related to CPU or Memory (i.e. limits or reservations) on a project / namespace, all containers will require a respective CPU or Memory field set during creation. See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/policy/resource-quotas/#requests-vs-limits) for more details on why this is required.
To avoid setting these limits on each and every container during workload creation, a default container resource limit can be specified on the namespace.
### Editing the Container Default Resource Limit
Edit the container default resource limit when:
- You have a CPU or Memory resource quota set on a project, and want to supply the corresponding default values for a container.
- You want to edit the default container resource limit.
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to edit the default resource limit and click **Explore**.
1. Click **Cluster > Projects/Namespaces**.
1. Find the project whose container default resource limit you want to edit. From that project, select **⋮ > Edit Config**.
1. Expand **Container Default Resource Limit** and edit the values.
### Resource Limit Propagation
When the default container resource limit is set at a project level, the parameter will be propagated to any namespace created in the project after the limit has been set. For any existing namespace in a project, this limit will not be automatically propagated. You will need to manually set the default container resource limit for any existing namespaces in the project in order for it to be used when creating any containers.
You can set a default container resource limit on a project and launch any catalog applications.
Once a container default resource limit is configured on a namespace, the default will be pre-populated for any containers created in that namespace. These limits/reservations can always be overridden during workload creation.
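For reference, the native Kubernetes mechanism for namespace-level container defaults is a `LimitRange` object. The following is a minimal sketch of such an object; the name, namespace, and values are illustrative:
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults   # hypothetical name
  namespace: my-ns
spec:
  limits:
  - type: Container
    default:                  # used as the container's limits when none are set
      cpu: 500m
      memory: 256Mi
    defaultRequest:           # used as the container's requests (reservations) when none are set
      cpu: 100m
      memory: 128Mi
```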
### Container Resource Quota Types
The following resource limits can be configured:
| Resource Type | Description |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| CPU Limit | The maximum amount of CPU (in [millicores](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu)) allocated to the container.|
| CPU Reservation | The minimum amount of CPU (in millicores) guaranteed to the container. |
| Memory Limit | The maximum amount of memory (in bytes) allocated to the container. |
| Memory Reservation        | The minimum amount of memory (in bytes) guaranteed to the container. |
| NVIDIA GPU Limit/Reservation | The amount of GPUs allocated to the container. The limit and reservation are always the same for GPUs. |
@@ -0,0 +1,41 @@
---
title: Project Administration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/manage-projects"/>
</head>
_Projects_ are objects introduced in Rancher that help organize namespaces in your Kubernetes cluster. You can use projects to create multi-tenant clusters, which allows a group of users to share the same underlying resources without interacting with each other's applications.
In terms of hierarchy:
- Clusters contain projects
- Projects contain namespaces
Within Rancher, projects allow you to manage multiple namespaces as a single entity. In native Kubernetes, which does not include projects, features like role-based access rights or cluster resources are assigned to individual namespaces. In clusters where multiple namespaces require the same set of access rights, assigning these rights to each individual namespace can become tedious. Even though all namespaces require the same rights, there's no way to apply those rights to all of your namespaces in a single action. You'd have to repetitively assign these rights to each namespace!
Rancher projects resolve this issue by allowing you to apply resources and access rights at the project level. Each namespace in the project then inherits these resources and policies, so you only have to assign them to the project once, rather than assigning them to each individual namespace.
You can use projects to perform actions like:
- [Assign users access to a group of namespaces](../../new-user-guides/add-users-to-projects.md)
- Assign users [specific roles in a project](../../new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-roles). A role can be owner, member, read-only, or [custom](../../new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/custom-roles.md)
- [Set resource quotas](manage-project-resource-quotas/manage-project-resource-quotas.md)
- [Manage namespaces](../../new-user-guides/manage-namespaces.md)
- [Configure tools](../../../reference-guides/rancher-project-tools.md)
- [Configure pod security policies](manage-pod-security-policies.md)
### Authorization
Non-administrative users are only authorized for project access after an [administrator](../../new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md), [cluster owner or member](../../new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#cluster-roles), or [project owner](../../new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-roles) adds them to the project's **Members** tab.
Whoever creates the project automatically becomes a [project owner](../../new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-roles).
## Switching between Projects
To switch between projects, use the drop-down available in the top navigation bar:
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to switch projects and click **Explore**.
1. In the top navigation bar, select the project that you want to open.
@@ -0,0 +1,159 @@
---
title: Persistent Grafana Dashboards
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-alerting-guides/create-persistent-grafana-dashboard"/>
</head>
To allow the Grafana dashboard to persist after the Grafana instance restarts, add the dashboard configuration JSON into a ConfigMap. ConfigMaps also allow the dashboards to be deployed with a GitOps- or CD-based approach. This allows the dashboard to be put under version control.
- [Creating a Persistent Grafana Dashboard](#creating-a-persistent-grafana-dashboard)
- [Known Issues](#known-issues)
## Creating a Persistent Grafana Dashboard
<Tabs>
<TabItem value="Rancher v2.5.8+">
:::note Prerequisites:
- The monitoring application needs to be installed.
- To create the persistent dashboard, you must have at least the **Manage Config Maps** Rancher RBAC permissions assigned to you in the project or namespace that contains the Grafana Dashboards. This correlates to the `monitoring-dashboard-edit` or `monitoring-dashboard-admin` Kubernetes native RBAC Roles exposed by the Monitoring chart.
- To see the links to the external monitoring UIs, including Grafana dashboards, you will need at least a [project-member role.](../../../integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md#users-with-rancher-based-permissions)
:::
### 1. Get the JSON model of the dashboard that you want to persist
To create a persistent dashboard, you will need to get the JSON model of the dashboard you want to persist. You can use a premade dashboard or build your own.
To use a premade dashboard, go to [https://grafana.com/grafana/dashboards](https://grafana.com/grafana/dashboards), open up its detail page, and click on the **Download JSON** button to get the JSON model for the next step.
To use your own dashboard:
1. On the cluster detail page, click **Monitoring**, then click the Grafana link to open Grafana.
1. Log in to Grafana. Note: The default Admin username and password for the Grafana instance is `admin/prom-operator`. Alternative credentials can also be supplied on deploying or upgrading the chart.
:::note
Regardless of who has the password, in order to access the Grafana instance, you still need at least the **Manage Services** or **View Monitoring** permissions in the project that Rancher Monitoring is deployed into.
:::
1. Create a dashboard using Grafana's UI. Once complete, go to the dashboard's settings by clicking on the gear icon in the top navigation menu. In the left navigation menu, click **JSON Model**.
1. Copy the JSON data structure that appears.
### 2. Create a ConfigMap using the Grafana JSON model
Create a ConfigMap in the namespace that contains your Grafana Dashboards (e.g. `cattle-dashboards` by default).
The ConfigMap should look like this:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
labels:
grafana_dashboard: "1"
name: <dashboard-name>
namespace: cattle-dashboards # Change if using a non-default namespace
data:
<dashboard-name>.json: |-
<copied-json>
```
By default, Grafana is configured to watch all ConfigMaps with the `grafana_dashboard` label within the `cattle-dashboards` namespace.
To specify that you would like Grafana to watch for ConfigMaps across all namespaces, refer to [this section](#configuring-namespaces-for-the-grafana-dashboard-configmap).
To create the ConfigMap through the Rancher UI, first make sure that you are currently logged in to the Grafana UI, to ensure that dashboards import without encountering permissions issues. Then, return to the Rancher UI and perform the following steps:
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to see the visualizations and click **Explore**.
1. Click **More Resources > Core > ConfigMaps**.
1. Click **Create**.
1. On the **Data** tab, set up the key-value pairs similar to the example above. When entering the value for `<dashboard-name>.json`, click **Read from File** to upload the JSON data model as the value.
1. On the **Labels & Annotations** tab, click **Add Label** and enter `grafana_dashboard` as the key, and `1` as the value.
1. Click **Create**.
**Result:** After the ConfigMap is created, it should show up on the Grafana UI and be persisted even if the Grafana pod is restarted.
:::note
The actual key-value pair may differ if you have modified the Helm chart to watch a different dashboard label and value.
:::
Dashboards that are persisted using ConfigMaps cannot be deleted or edited from the Grafana UI.
If you attempt to delete the dashboard in the Grafana UI, you will see the error message "Dashboard cannot be deleted because it was provisioned." To delete the dashboard, you will need to delete the ConfigMap.
### Configuring Namespaces for the Grafana Dashboard ConfigMap
To specify that you would like Grafana to watch for ConfigMaps across all namespaces, set this value in the `rancher-monitoring` Helm chart:
```
grafana.sidecar.dashboards.searchNamespace=ALL
```
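The same setting can be expressed in the chart's `values.yaml`; a minimal sketch:

```yaml
grafana:
  sidecar:
    dashboards:
      searchNamespace: ALL
```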
Note that the RBAC roles exposed by the Monitoring chart to add Grafana Dashboards are still restricted to giving permissions for users to add dashboards in the namespace defined in `grafana.dashboards.namespace`, which defaults to `cattle-dashboards`.
</TabItem>
<TabItem value="Rancher before v2.5.8">
:::note Prerequisites:
- The monitoring application needs to be installed.
- You must have the cluster-admin ClusterRole permission.
:::
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to configure the Grafana namespace and click **Explore**.
1. In the left navigation bar, click **Monitoring**.
1. Click **Grafana**.
1. Log in to Grafana. Note: The default Admin username and password for the Grafana instance is `admin/prom-operator`. Alternative credentials can also be supplied on deploying or upgrading the chart.
:::note
Regardless of who has the password, cluster administrator permission in Rancher is still required to access the Grafana instance.
:::
1. Go to the dashboard that you want to persist. In the top navigation menu, go to the dashboard settings by clicking the gear icon.
1. In the left navigation menu, click **JSON Model**.
1. Copy the JSON data structure that appears.
1. Create a ConfigMap in the `cattle-dashboards` namespace. The ConfigMap needs to have the label `grafana_dashboard: "1"`. Paste the JSON into the ConfigMap in the format shown in the example below:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
labels:
grafana_dashboard: "1"
name: <dashboard-name>
namespace: cattle-dashboards
data:
<dashboard-name>.json: |-
<copied-json>
```
**Result:** After the ConfigMap is created, it should show up on the Grafana UI and be persisted even if the Grafana pod is restarted.
Dashboards that are persisted using ConfigMaps cannot be deleted from the Grafana UI. If you attempt to delete the dashboard in the Grafana UI, you will see the error message "Dashboard cannot be deleted because it was provisioned." To delete the dashboard, you will need to delete the ConfigMap.
To prevent the persistent dashboard from being deleted when Monitoring v2 is uninstalled, add the following annotation to the `cattle-dashboards` namespace:
```
helm.sh/resource-policy: "keep"
```
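Applied to the namespace itself, the result might look like the following sketch (annotate your existing `cattle-dashboards` namespace rather than recreating it):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cattle-dashboards
  annotations:
    helm.sh/resource-policy: "keep"
```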
</TabItem>
</Tabs>
## Known Issues
For users who are using Monitoring V2 v9.4.203 or below, uninstalling the Monitoring chart will delete the `cattle-dashboards` namespace, which will delete all persisted dashboards, unless the namespace is marked with the annotation `helm.sh/resource-policy: "keep"`.
This annotation will be added by default in the new monitoring chart released by Rancher v2.5.8, but it still needs to be manually applied for users of earlier Rancher versions.
@@ -0,0 +1,43 @@
---
title: Customizing Grafana Dashboards
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-alerting-guides/customize-grafana-dashboard"/>
</head>
In this section, you'll learn how to customize the Grafana dashboard to show metrics that apply to a certain container.
### Prerequisites
Before you can customize a Grafana dashboard, the `rancher-monitoring` application must be installed.
To see the links to the external monitoring UIs, including Grafana dashboards, you will need at least a [project-member role.](../../../integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md#users-with-rancher-based-permissions)
### Signing in to Grafana
1. In the Rancher UI, go to the cluster that has the dashboard you want to customize.
1. In the left navigation menu, click **Monitoring.**
1. Click **Grafana.** The Grafana dashboard should open in a new tab.
1. Go to the log in icon in the lower left corner and click **Sign In.**
1. Log in to Grafana. The default Admin username and password for the Grafana instance is `admin/prom-operator`. (Regardless of who has the password, cluster administrator permission in Rancher is still required to access the Grafana instance.) Alternative credentials can also be supplied on deploying or upgrading the chart.
### Getting the PromQL Query Powering a Grafana Panel
For any panel, you can click the title and click **Explore** to get the PromQL queries powering the graphic.
For this example, we would like to get the CPU usage for the Alertmanager container, so we click **CPU Utilization > Inspect.**
The **Data** tab shows the underlying data as a time series, with the time in first column and the PromQL query result in the second column. Copy the PromQL query.
```
(1 - (avg(irate({__name__=~"node_cpu_seconds_total|windows_cpu_time_total",mode="idle"}[5m])))) * 100
```
You can then modify the query in the Grafana panel or create a new Grafana panel using the query.
See also:
- [Grafana docs on editing a panel](https://grafana.com/docs/grafana/latest/panels-visualizations/configure-panel-options/#edit-a-panel)
- [Grafana docs on adding a panel to a dashboard](https://grafana.com/docs/grafana/latest/panels-visualizations/panel-editor-overview)
@@ -0,0 +1,23 @@
---
title: Debugging High Memory Usage
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-alerting-guides/debug-high-memory-usage"/>
</head>
Every time series in Prometheus is uniquely identified by its [metric name](https://prometheus.io/docs/practices/naming/#metric-names) and optional key-value pairs called [labels.](https://prometheus.io/docs/practices/naming/#labels)
The labels make it possible to filter and aggregate the time series data, but they also multiply the amount of data that Prometheus collects.
Each time series has a defined set of labels, and Prometheus generates a new time series for every unique combination of label values. For example, if one of a metric's labels has two possible values, two time series are generated for that metric. Changing any label value, including adding or removing a label, creates a new time series.
Prometheus storage is optimized to index data by series. It is designed for a relatively consistent number of time series and a relatively large number of samples that need to be collected from the exporters over time.
Conversely, Prometheus is not optimized to accommodate a rapidly changing number of time series. For that reason, large bursts of memory usage can occur when monitoring is installed on clusters where many resources are being created and destroyed, especially on multi-tenant clusters.
### Reducing Memory Bursts
To reduce memory consumption, Prometheus can be configured to store fewer time series, by scraping fewer metrics or by attaching fewer labels to the time series. To see which series use the most memory, you can check the TSDB (time series database) status page in the Prometheus UI.
Distributed Prometheus solutions such as [Thanos](https://thanos.io/) and [Cortex](https://cortexmetrics.io/) use an alternate architecture in which multiple small Prometheus instances are deployed. In the case of Thanos, the metrics from each Prometheus are aggregated into the common Thanos deployment, and then those metrics are exported to a persistent store, such as S3. This more robust architecture avoids burdening any single Prometheus instance with too many time series, while also preserving the ability to query metrics on a global level.
@@ -0,0 +1,155 @@
---
title: Enable Monitoring
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-alerting-guides/enable-monitoring"/>
</head>
As an [administrator](../../new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md) or [cluster owner](../../new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#cluster-roles), you can configure Rancher to deploy Prometheus to monitor your Kubernetes cluster.
This page describes how to enable monitoring and alerting within a cluster using the new monitoring application.
You can enable monitoring with or without SSL.
## Requirements
- Allow traffic on port 9796 for each of your nodes. Prometheus scrapes metrics from these ports.
- You may also need to allow traffic on port 10254 for each of your nodes, if [PushProx](../../../integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md#pushprox) is disabled (`ingressNginx.enabled` set to `false`), or you've upgraded from a previous Rancher version that had v1 monitoring already installed.
- Make sure that your cluster fulfills the resource requirements. The cluster should have at least 1950Mi memory available, 2700m CPU, and 50Gi storage. See [Configuring Resource Limits and Requests](../../../reference-guides/monitoring-v2-configuration/helm-chart-options.md#configuring-resource-limits-and-requests) for a breakdown of the resource limits and requests.
- When you install monitoring on an RKE cluster that uses RancherOS or Flatcar Linux nodes, change the etcd node certificate directory to `/opt/rke/etc/kubernetes/ssl`.
- For clusters that have been provisioned with the RKE CLI and that have the address set to a hostname instead of an IP address, set `rkeEtcd.clients.useLocalhost` to `true` when you configure the Values during installation. For example:
```yaml
rkeEtcd:
clients:
useLocalhost: true
```
:::note
If you want to set up Alertmanager, Grafana, or Ingress, configure them through the settings on the Helm chart deployment. Creating an Ingress outside the deployment is problematic.
:::
## Setting Resource Limits and Requests
The resource requests and limits can be configured when installing `rancher-monitoring`. To configure Prometheus resources from the Rancher UI, click **Apps > Monitoring** in the upper left corner.
For more information about the default limits, see [this page.](../../../reference-guides/monitoring-v2-configuration/helm-chart-options.md#configuring-resource-limits-and-requests)
## Install the Monitoring Application
### Enable Monitoring for use without SSL
1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. Click **Cluster Tools** (bottom left corner).
1. Click **Install** by Monitoring.
1. Optional: Customize requests, limits and more for Alerting, Prometheus, and Grafana in the Values step. For help, refer to the [configuration reference.](../../../reference-guides/monitoring-v2-configuration/helm-chart-options.md)
**Result:** The monitoring app is deployed in the `cattle-monitoring-system` namespace.
### Enable Monitoring for use with SSL
1. Follow the steps on [this page](../../new-user-guides/kubernetes-resources-setup/secrets.md) to create a secret in order for SSL to be used for alerts.
    - Create the secret in the `cattle-monitoring-system` namespace. If that namespace doesn't exist, create it first.
- Add the `ca`, `cert`, and `key` files to the secret.
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to enable monitoring for use with SSL and click **Explore**.
1. Click **Apps > Charts**.
1. Click **Monitoring**.
1. Click **Install** or **Update**, depending on whether you have already installed Monitoring.
1. Check the box for **Customize Helm options before install** and click **Next**.
1. Click **Alerting**.
1. In the **Additional Secrets** field, add the secrets created earlier.
**Result:** The monitoring app is deployed in the `cattle-monitoring-system` namespace.
When [creating a receiver,](../../../reference-guides/monitoring-v2-configuration/receivers.md#creating-receivers-in-the-rancher-ui) SSL-enabled receivers such as email or webhook will have an **SSL** section with fields for **CA File Path**, **Cert File Path**, and **Key File Path**. Fill in these fields with the paths to each of `ca`, `cert`, and `key`. The path will be of the form `/etc/alertmanager/secrets/name-of-file-in-secret`.
For example, if you created a secret with these key-value pairs:
```
ca.crt=<base64-content>
cert.pem=<base64-content>
key.pfx=<base64-content>
```
Then **Cert File Path** would be set to `/etc/alertmanager/secrets/cert.pem`.
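Expressed as a Kubernetes Secret manifest, the example above might look like this sketch. The Secret name `alertmanager-tls` is a placeholder, and each `data` value must be the base64-encoded file content:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: alertmanager-tls # placeholder name; reference it in the Additional Secrets field
  namespace: cattle-monitoring-system
type: Opaque
data:
  ca.crt: <base64-encoded CA certificate>
  cert.pem: <base64-encoded certificate>
  key.pfx: <base64-encoded private key>
```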
## Rancher Performance Dashboard
When monitoring is installed on the upstream (local) cluster, you are given basic health metrics about the Rancher pods, such as CPU and memory data. To get advanced metrics for your local Rancher server, you must additionally enable the Rancher Performance Dashboard for Grafana.
This dashboard provides access to the following advanced metrics:
- Handler Average Execution Times Over Last 5 Minutes
- Rancher API Average Request Times Over Last 5 Minutes
- Subscribe Average Request Times Over Last 5 Minutes
- Lasso Controller Work Queue Depth (Top 20)
- Number of Rancher Requests (Top 20)
- Number of Failed Rancher API Requests (Top 20)
- K8s Proxy Store Average Request Times Over Last 5 Minutes (Top 20)
- K8s Proxy Client Average Request Times Over Last 5 Minutes (Top 20)
- Cached Objects by GroupVersionKind (Top 20)
- Lasso Handler Executions (Top 20)
- Handler Executions Over Last 2 Minutes (Top 20)
- Total Handler Executions with Error (Top 20)
- Data Transmitted by Remote Dialer Sessions (Top 20)
- Errors for Remote Dialer Sessions (Top 20)
- Remote Dialer Connections Removed (Top 20)
- Remote Dialer Connections Added by Client (Top 20)
:::note
Profiling data (such as advanced memory or CPU analysis) is not present as it is a very context-dependent technique that's meant for debugging and not intended for normal observation.
:::
### Enabling the Rancher Performance Dashboard
To enable the Rancher Performance Dashboard:
<Tabs groupid="UIorCLI">
<TabItem value="Helm">
Use the following options with the Helm CLI:
```bash
--set extraEnv\[0\].name="CATTLE_PROMETHEUS_METRICS" --set-string extraEnv\[0\].value=true
```
You can also include the following snippet in your Rancher Helm chart's values.yaml file:
```yaml
extraEnv:
- name: "CATTLE_PROMETHEUS_METRICS"
value: "true"
```
</TabItem>
<TabItem value="UI">
1. Click **☰ > Cluster Management**.
1. Go to the row of the `local` cluster and click **Explore**.
1. Click **Workloads > Deployments**.
1. Use the dropdown menu at the top to filter for **All Namespaces**.
1. Under the `cattle-system` namespace, go to the `rancher` row and click **⋮ > Edit Config**.
1. Under **Environment Variables**, click **Add Variable**.
1. For **Type**, select `Key/Value Pair`.
1. For **Variable Name**, enter `CATTLE_PROMETHEUS_METRICS`.
1. For **Value**, enter `true`.
1. Click **Save** to apply the change.
</TabItem>
</Tabs>
### Accessing the Rancher Performance Dashboard
1. Click **☰ > Cluster Management**.
1. Go to the row of the `local` cluster and click **Explore**.
1. Click **Monitoring**.
1. Select the **Grafana** dashboard.
1. From the sidebar, click **Search dashboards**.
1. Enter `Rancher Performance Debugging` and select it.
@@ -0,0 +1,14 @@
---
title: Monitoring/Alerting Guides
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-alerting-guides"/>
</head>
- [Enable monitoring](enable-monitoring.md)
- [Uninstall monitoring](uninstall-monitoring.md)
- [Monitoring workloads](set-up-monitoring-for-workloads.md)
- [Customizing Grafana dashboards](customize-grafana-dashboard.md)
- [Persistent Grafana dashboards](create-persistent-grafana-dashboard.md)
- [Debugging high memory usage](debug-high-memory-usage.md)
@@ -0,0 +1,11 @@
---
title: Customizing Grafana Dashboards
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides/customize-grafana-dashboards"/>
</head>
Grafana dashboards are customized the same way whether it's for rancher-monitoring or for Prometheus Federator.
For instructions, refer to [this page](../customize-grafana-dashboard.md).
@@ -0,0 +1,90 @@
---
title: Enable Prometheus Federator
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides/enable-prometheus-federator"/>
</head>
## Requirements
By default, Prometheus Federator is configured and intended to be deployed alongside [rancher-monitoring](../../../../integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md), which deploys Prometheus Operator alongside a Cluster Prometheus. Each Project Monitoring Stack is configured, by default, to federate namespace-scoped metrics from this Cluster Prometheus.
For instructions on installing rancher-monitoring, refer to [this page](../enable-monitoring.md).
The default configuration should already be compatible with your rancher-monitoring stack. However, to optimize the security and usability of Prometheus Federator in your cluster, we recommend making these additional configurations to rancher-monitoring:
- [Ensure the cattle-monitoring-system namespace is placed into the System Project](#ensure-the-cattle-monitoring-system-namespace-is-placed-into-the-system-project-or-a-similarly-locked-down-project-that-has-access-to-other-projects-in-the-cluster).
- [Configure rancher-monitoring to only watch for resources created by the Helm chart itself](#configure-rancher-monitoring-to-only-watch-for-resources-created-by-the-helm-chart-itself).
- [Increase the CPU / memory limits of the Cluster Prometheus](#increase-the-cpu--memory-limits-of-the-cluster-prometheus).
### Ensure the cattle-monitoring-system namespace is placed into the System Project (or a similarly locked down Project that has access to other Projects in the cluster)
![Select Projects-Namespaces](/img/install-in-system-project.png)
Prometheus Operator's security model expects that the namespace it is deployed into (e.g., `cattle-monitoring-system`) has limited access for anyone except Cluster Admins to avoid privilege escalation via execing into Pods (such as the Jobs executing Helm operations). In addition, deploying Prometheus Federator and all Project Prometheus stacks into the System Project ensures that each Project Prometheus is able to reach out to scrape workloads across all Projects, even if Network Policies are defined via Project Network Isolation. It also provides limited access for Project Owners, Project Members, and other users so that they're unable to access data that they shouldn't have access to (i.e., being allowed to exec into pods, set up the ability to scrape namespaces outside of a given Project, etc.).
1. Open the `System` project to check your namespaces:
Click **Cluster > Projects/Namespaces** in the Rancher UI. This will display all of the namespaces in the `System` project:
![Select Projects-Namespaces](/img/cattle-monitoring-system.png)
1. If you have an existing Monitoring V2 installation within the `cattle-monitoring-system` namespace, but that namespace is not in the `System` project, you may move the `cattle-monitoring-system` namespace into the `System` project or into another project of limited access. To do so, you may either:
- Drag and drop the namespace into the `System` project or
- Select **⋮** to the right of the namespace, click **Move**, then choose `System` from the **Target Project** dropdown
![Move to a New Project](/img/move-to-new-project.png)
### Configure rancher-monitoring to only watch for resources created by the Helm chart itself
Since each Project Monitoring Stack will watch the other namespaces and collect additional custom workload metrics or dashboards already, it's recommended to configure the following settings on all selectors to ensure that the Cluster Prometheus Stack only monitors resources created by the Helm Chart itself:
```
matchLabels:
release: "rancher-monitoring"
```
The following selector fields are recommended to have this value (a `values.yaml` sketch follows the list):
- `.Values.alertmanager.alertmanagerSpec.alertmanagerConfigSelector`
- `.Values.prometheus.prometheusSpec.serviceMonitorSelector`
- `.Values.prometheus.prometheusSpec.podMonitorSelector`
- `.Values.prometheus.prometheusSpec.ruleSelector`
- `.Values.prometheus.prometheusSpec.probeSelector`
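A sketch of those recommendations in `values.yaml` form:

```yaml
alertmanager:
  alertmanagerSpec:
    alertmanagerConfigSelector:
      matchLabels:
        release: "rancher-monitoring"
prometheus:
  prometheusSpec:
    serviceMonitorSelector:
      matchLabels:
        release: "rancher-monitoring"
    podMonitorSelector:
      matchLabels:
        release: "rancher-monitoring"
    ruleSelector:
      matchLabels:
        release: "rancher-monitoring"
    probeSelector:
      matchLabels:
        release: "rancher-monitoring"
```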
Once this setting is turned on, you can always create ServiceMonitors or PodMonitors that are picked up by the Cluster Prometheus by adding the label `release: "rancher-monitoring"` to them. Project Monitoring Stacks will then automatically ignore those ServiceMonitors and PodMonitors by default, even if the namespaces they reside in are not system namespaces.
:::note
If you don't want to allow users to create ServiceMonitors and PodMonitors that aggregate into the Cluster Prometheus in Project namespaces, you can additionally set the namespaceSelectors on the chart to target only system namespaces. These must contain `cattle-monitoring-system` and `cattle-dashboards`, where rancher-monitoring deploys its resources by default. You will also need to monitor the `default` namespace to get apiserver metrics, or create a custom ServiceMonitor to scrape apiserver metrics from the Service residing in the `default` namespace. This limits your Cluster Prometheus from picking up other Prometheus Operator CRs; in that case, it is recommended to set `.Values.prometheus.prometheusSpec.ignoreNamespaceSelectors=true` to allow you to define ServiceMonitors that can monitor non-system namespaces from within a system namespace.
:::
### Increase the CPU / memory limits of the Cluster Prometheus
Depending on a cluster's setup, it's generally recommended to give a large amount of dedicated memory to the Cluster Prometheus to avoid restarts due to out-of-memory errors (OOMKilled). These are usually caused by churn in the cluster that generates a large number of high-cardinality metrics for Prometheus to ingest within one block of time. This is one of the reasons why the default Rancher Monitoring stack expects around 4 GB of RAM to operate in a normal-sized cluster. However, when introducing Project Monitoring Stacks that all send `/federate` requests to the same Cluster Prometheus and rely on the Cluster Prometheus being "up" to federate that system data on their namespaces, it's even more important that the Cluster Prometheus has ample CPU and memory assigned to it to prevent an outage that can cause data gaps across all Project Prometheus instances in the cluster.
:::note
There are no specific recommendations on how much memory the Cluster Prometheus should be configured with since it depends entirely on the user's setup (namely the likelihood of encountering a high churn rate and the scale of metrics that could be generated at that time); it generally varies per setup.
:::
## Install the Prometheus Federator Application
1. Click **☰ > Cluster Management**.
1. Go to the cluster where you want to install Prometheus Federator and click **Explore**.
1. Click **Apps > Charts**.
1. Click the **Prometheus Federator** chart.
1. Click **Install**.
1. On the **Metadata** page, click **Next**.
1. In the **Namespaces** > **Project Release Namespace Project ID** field, the `System Project` is used as the default but can be overridden with another project with similarly [limited access](#ensure-the-cattle-monitoring-system-namespace-is-placed-into-the-system-project-or-a-similarly-locked-down-project-that-has-access-to-other-projects-in-the-cluster). Project IDs can be found with the following command run in the local upstream cluster:
```plain
kubectl get projects -A -o custom-columns="NAMESPACE":.metadata.namespace,"ID":.metadata.name,"NAME":.spec.displayName
```
1. Click **Install**.
**Result:** The Prometheus Federator app is deployed in the `cattle-monitoring-system` namespace.
@@ -0,0 +1,21 @@
---
title: Installing Project Monitors
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides/project-monitors"/>
</head>
Install **Project Monitors** in each project where you want to enable project monitoring.
1. Click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to enable monitoring and click **Explore**.
1. Click **Monitoring > Project Monitors** on the left nav bar. Then click **Create** in the upper right.
![Project Monitors](/img/project-monitors.png)
1. Select your project from the drop-down menu, then click **Create** again.
![Create Project Monitors](/img/create-project-monitors.png)
@@ -0,0 +1,12 @@
---
title: Prometheus Federator Guides
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides"/>
</head>
- [Enable Prometheus Federator](enable-prometheus-federator.md)
- [Uninstall Prometheus Federator](uninstall-prometheus-federator.md)
- [Customize Grafana Dashboards](customize-grafana-dashboards.md)
- [Set Up Workloads](set-up-workloads.md)
@@ -0,0 +1,17 @@
---
title: Setting up Prometheus Federator for a Workload
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides/set-up-workloads"/>
</head>
### Display CPU and Memory Metrics for a Workload
Displaying CPU and memory metrics with Prometheus Federator is done the same way as with rancher-monitoring. For instructions, refer [here](../set-up-monitoring-for-workloads.md#display-cpu-and-memory-metrics-for-a-workload).
### Setting up Metrics Beyond CPU and Memory
Setting up metrics beyond CPU and memory with Prometheus Federator is done the same way as with rancher-monitoring. For instructions, refer [here](../set-up-monitoring-for-workloads.md#setting-up-metrics-beyond-cpu-and-memory).
<!-- ### Custom Metrics -->
@@ -0,0 +1,17 @@
---
title: Uninstall Prometheus Federator
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides/uninstall-prometheus-federator"/>
</head>
1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. In the left navigation bar, click **Apps**.
1. Click **Installed Apps**.
1. Go to the `cattle-monitoring-system` namespace and check the box for `prometheus-federator`.
1. Click **Delete**.
1. Confirm **Delete**.
**Result:** `prometheus-federator` is uninstalled.
@@ -0,0 +1,31 @@
---
title: Setting up Monitoring for a Workload
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-alerting-guides/set-up-monitoring-for-workloads"/>
</head>
The steps for setting up monitoring for workloads depend on whether you want basic metrics such as CPU and memory for the workload, or whether you want to scrape custom metrics from the workload.
If you only need CPU and memory time series for the workload, you don't need to deploy a ServiceMonitor or PodMonitor because the monitoring application already collects metrics data on resource usage by default. The resource usage time series data is in Prometheus's local time series database.
Grafana shows the data in aggregate, but you can see the data for the individual workload by using a PromQL query that extracts the data for that workload. Once you have the PromQL query, you can execute the query individually in the Prometheus UI and see the time series visualized there, or you can use the query to customize a Grafana dashboard to display the workload metrics. For examples of PromQL queries for workload metrics, see [this section](../../../integrations-in-rancher/monitoring-and-alerting/promql-expressions.md#workload-metrics).
To set up custom metrics for your workload, you will need to set up an exporter and create a new ServiceMonitor custom resource to configure Prometheus to scrape metrics from your exporter.
### Display CPU and Memory Metrics for a Workload
By default, the monitoring application already scrapes CPU and memory.
To get some fine-grained detail for a particular workload, you can customize a Grafana dashboard to display the metrics for a particular workload.
### Setting up Metrics Beyond CPU and Memory
For custom metrics, you will need to expose the metrics on your application in a format supported by Prometheus.
We then recommend creating a new ServiceMonitor custom resource. When this resource is created, the Prometheus custom resource is automatically updated so that its scrape configuration includes the new custom metrics endpoint, and Prometheus begins scraping metrics from that endpoint.
You can also create a PodMonitor to expose the custom metrics endpoint, but ServiceMonitors are more appropriate for the majority of use cases.
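As an illustrative sketch (the name, namespace, labels, and port are placeholders), a ServiceMonitor that scrapes a `/metrics` endpoint exposed by a Service might look like:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app            # placeholder
  namespace: my-namespace # placeholder
spec:
  selector:
    matchLabels:
      app: my-app         # must match the labels on the Service
  endpoints:
    - port: metrics       # name of the Service port that exposes metrics
      path: /metrics
```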
@@ -0,0 +1,23 @@
---
title: Uninstall Monitoring
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-alerting-guides/uninstall-monitoring"/>
</head>
1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. In the left navigation bar, click **Apps**.
1. Click **Installed Apps**.
1. Go to the `cattle-monitoring-system` namespace and check the boxes for `rancher-monitoring-crd` and `rancher-monitoring`.
1. Click **Delete**.
1. Confirm **Delete**.
**Result:** `rancher-monitoring` is uninstalled.
:::note Persistent Grafana Dashboards:
For users who are using Monitoring V2 v9.4.203 or below, uninstalling the Monitoring chart will delete the cattle-dashboards namespace, which will delete all persisted dashboards, unless the namespace is marked with the annotation `helm.sh/resource-policy: "keep"`. This annotation is added by default in Monitoring V2 v14.5.100+ but can be manually applied on the cattle-dashboards namespace before an uninstall if an older version of the Monitoring chart is currently installed onto your cluster.
:::
@@ -0,0 +1,19 @@
---
title: Advanced Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides/advanced-configuration"/>
</head>
### Alertmanager
For information on configuring the Alertmanager custom resource, see [this page.](alertmanager.md)
### Prometheus
For information on configuring the Prometheus custom resource, see [this page.](prometheus.md)
### PrometheusRules
For information on configuring the PrometheusRule custom resource, see [this page.](prometheusrules.md)
@@ -0,0 +1,47 @@
---
title: Alertmanager Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides/advanced-configuration/alertmanager"/>
</head>
It is usually not necessary to directly edit the Alertmanager custom resource. For most use cases, you will only need to edit the Receivers and Routes to configure notifications.
When Receivers and Routes are updated, the monitoring application will automatically update the Alertmanager custom resource to be consistent with those changes.
:::note
This section assumes familiarity with how monitoring components work together. For more information about Alertmanager, see [this section.](../../../../integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md#3-how-alertmanager-works)
:::
## About the Alertmanager Custom Resource
By default, Rancher Monitoring deploys a single Alertmanager onto a cluster that uses a default Alertmanager Config Secret.
You may want to edit the Alertmanager custom resource if you would like to take advantage of advanced options that are not exposed in the Rancher UI forms, such as the ability to create a routing tree structure that is more than two levels deep.
It is also possible to create more than one Alertmanager in a cluster, which may be useful if you want to implement namespace-scoped monitoring. In this case, you should manage the Alertmanager custom resources using the same underlying Alertmanager Config Secret.
### Deeply Nested Routes
While the Rancher UI only supports a routing tree that is two levels deep, you can configure more deeply nested routing structures by editing the Alertmanager YAML.
### Multiple Alertmanager Replicas
As part of the chart deployment options, you can opt to increase the number of replicas of the Alertmanager deployed onto your cluster. The replicas can all be managed using the same underlying Alertmanager Config Secret.
This Secret should be updated or modified any time you want to:
- Add in new notifiers or receivers
- Change the alerts that should be sent to specific notifiers or receivers
- Change the group of alerts that are sent out
By default, you can either choose to supply an existing Alertmanager Config Secret (i.e. any Secret in the `cattle-monitoring-system` namespace) or allow Rancher Monitoring to deploy a default Alertmanager Config Secret onto your cluster.
By default, the Alertmanager Config Secret created by Rancher will never be modified or deleted on an upgrade or uninstall of the `rancher-monitoring` chart. This restriction prevents users from losing or overwriting their alerting configuration when executing operations on the chart.
For more information on what fields can be specified in the Alertmanager Config Secret, please look at the [Prometheus Alertmanager docs.](https://prometheus.io/docs/alerting/latest/alertmanager/)
The full spec for the Alertmanager configuration file and what it takes in can be found [here.](https://prometheus.io/docs/alerting/latest/configuration/#configuration-file)
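As a sketch of that file's shape (receiver names, the matcher, and the Slack URL are placeholders), an `alertmanager.yaml` with a simple routing tree might look like:

```yaml
route:
  receiver: default
  group_by: ["namespace"]
  routes:
    - match:
        severity: critical
      receiver: slack-critical
receivers:
  - name: default
  - name: slack-critical
    slack_configs:
      - api_url: https://hooks.slack.com/services/T000/B000/XXXX # placeholder
        channel: "#alerts"
```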
@@ -0,0 +1,23 @@
---
title: Prometheus Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides/advanced-configuration/prometheus"/>
</head>
It is usually not necessary to directly edit the Prometheus custom resource because the monitoring application automatically updates it based on changes to ServiceMonitors and PodMonitors.
:::note
This section assumes familiarity with how monitoring components work together. For more information, see [this section.](../../../../integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md)
:::
## About the Prometheus Custom Resource
The Prometheus CR defines a desired Prometheus deployment. The Prometheus Operator observes the Prometheus CR. When the CR changes, the Prometheus Operator creates `prometheus-rancher-monitoring-prometheus`, a Prometheus deployment based on the CR configuration.
The Prometheus CR specifies details such as rules and what Alertmanagers are connected to Prometheus. Rancher builds this CR for you.
Monitoring V2 only supports one Prometheus per cluster. However, you might want to edit the Prometheus CR if you want to limit monitoring to certain namespaces.
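For example, a fragment of the Prometheus CR that restricts ServiceMonitor discovery to namespaces carrying a particular label might look like the following sketch (the label is a placeholder, and the resource name assumes the default `rancher-monitoring` install):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: rancher-monitoring-prometheus
  namespace: cattle-monitoring-system
spec:
  serviceMonitorNamespaceSelector:
    matchLabels:
      monitoring: enabled # placeholder label
```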
@@ -0,0 +1,84 @@
---
title: Configuring PrometheusRules
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides/advanced-configuration/prometheusrules"/>
</head>
A PrometheusRule defines a group of Prometheus alerting and/or recording rules.
:::note
This section assumes familiarity with how monitoring components work together. For more information, see [this section.](../../../../integrations-in-rancher/monitoring-and-alerting/how-monitoring-works.md)
:::
### Creating PrometheusRules in the Rancher UI
:::note Prerequisite:
The monitoring application needs to be installed.
:::
To create rule groups in the Rancher UI,
1. Go to the cluster where you want to create rule groups. Click **Monitoring > Advanced** and click **Prometheus Rules**.
1. Click **Create**.
1. Enter a **Group Name**.
1. Configure the rules. In Rancher's UI, we expect a rule group to contain either alert rules or recording rules, but not both. For help filling out the forms, refer to the configuration options below.
1. Click **Create**.
**Result:** Alerts can be configured to send notifications to the receiver(s).
### About the PrometheusRule Custom Resource
When you define a Rule (which is declared within a RuleGroup in a PrometheusRule resource), the [spec of the Rule itself](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#rule) contains labels that are used by Alertmanager to figure out which Route should receive this Alert. For example, an Alert with the label `team: front-end` will be sent to all Routes that match on that label.
Prometheus rule files are held in PrometheusRule custom resources. A PrometheusRule allows you to define one or more RuleGroups. Each RuleGroup consists of a set of Rule objects that can each represent either an alerting or a recording rule with the following fields:
- The name of the new alert or record
- A PromQL expression for the new alert or record
- Labels that should be attached to the alert or record that identify it (e.g. cluster name or severity)
- Annotations that encode any additional important pieces of information that need to be displayed on the notification for an alert (e.g. summary, description, message, runbook URL, etc.). This field is not required for recording rules.
For more information on what fields can be specified, please look at the [Prometheus Operator spec.](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#prometheusrulespec)
Use the label selector field `ruleSelector` in the Prometheus object to define the rule files that you want to be mounted into Prometheus.
For examples, refer to the Prometheus documentation on [recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) and [alerting rules.](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/)
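Putting these pieces together, a minimal PrometheusRule with a single alerting rule might look like this sketch (the name, expression, and labels are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-rules # placeholder
  namespace: cattle-monitoring-system
spec:
  groups:
    - name: example.rules
      rules:
        - alert: HighPodRestartRate
          expr: increase(kube_pod_container_status_restarts_total[10m]) > 3
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Pod {{ $labels.pod }} is restarting frequently"
```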
## Configuration
### Rule Group
| Field | Description |
|-------|----------------|
| Group Name | The name of the group. Must be unique within a rules file. |
| Override Group Interval | Duration in seconds for how often rules in the group are evaluated. |
### Alerting Rules
[Alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) allow you to define alert conditions based on PromQL (Prometheus Query Language) expressions and to send notifications about firing alerts to an external service.
| Field | Description |
|-------|----------------|
| Alert Name | The name of the alert. Must be a valid label value. |
| Wait To Fire For | Duration in seconds. Alerts are considered firing once they have been returned for this long. Alerts which have not yet fired for long enough are considered pending. |
| PromQL Expression | The PromQL expression to evaluate. Prometheus will evaluate the current value of this PromQL expression on every evaluation cycle and all resultant time series will become pending/firing alerts. For more information, refer to the [Prometheus documentation](https://prometheus.io/docs/prometheus/latest/querying/basics/) or our [example PromQL expressions.](../../../../integrations-in-rancher/monitoring-and-alerting/promql-expressions.md) |
| Labels | Labels to add or overwrite for each alert. |
| Severity | When enabled, labels are attached to the alert or record that identify it by the severity level. |
| Severity Label Value | Critical, warning, or none |
| Annotations | Annotations are a set of informational labels that can be used to store longer additional information, such as alert descriptions or runbook links. A [runbook](https://en.wikipedia.org/wiki/Runbook) is a set of documentation about how to handle alerts. The annotation values can be [templated.](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/#templating) |
### Recording Rules
[Recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules) allow you to precompute frequently needed or computationally expensive PromQL (Prometheus Query Language) expressions and save their result as a new set of time series.
| Field | Description |
|-------|----------------|
| Time Series Name | The name of the time series to output to. Must be a valid metric name. |
| PromQL Expression | The PromQL expression to evaluate. Prometheus will evaluate the current value of this PromQL expression on every evaluation cycle and the result will be recorded as a new set of time series with the metric name as given by 'record'. For more information about expressions, refer to the [Prometheus documentation](https://prometheus.io/docs/prometheus/latest/querying/basics/) or our [example PromQL expressions.](../../../../integrations-in-rancher/monitoring-and-alerting/promql-expressions.md) |
| Labels | Labels to add or overwrite before storing the result. |
@@ -0,0 +1,55 @@
---
title: Monitoring Configuration Guides
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides"/>
</head>
This page captures some of the most important options for configuring Monitoring V2 in the Rancher UI.
For information on configuring custom scrape targets and rules for Prometheus, please refer to the upstream documentation for the [Prometheus Operator.](https://github.com/prometheus-operator/prometheus-operator) Some of the most important custom resources are explained in the Prometheus Operator [design documentation.](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/design.md) The Prometheus Operator documentation can also help you set up RBAC, Thanos, or custom configuration.
## Setting Resource Limits and Requests
The resource requests and limits for the monitoring application can be configured when installing `rancher-monitoring`. For more information about the default limits, see [this page.](../../../reference-guides/monitoring-v2-configuration/helm-chart-options.md#configuring-resource-limits-and-requests)
:::tip
On an idle cluster, Monitoring may have high CPU usage. To improve performance, turn off the Prometheus adapter.
:::
## Prometheus Configuration
It is usually not necessary to directly edit the Prometheus custom resource.
Instead, to configure Prometheus to scrape custom metrics, you will only need to create a new ServiceMonitor or PodMonitor to configure Prometheus to scrape additional metrics.
### ServiceMonitor and PodMonitor Configuration
For details, see [this page.](../../../reference-guides/monitoring-v2-configuration/servicemonitors-and-podmonitors.md)
### Advanced Prometheus Configuration
For more information about directly editing the Prometheus custom resource, which may be helpful in advanced use cases, see [this page.](advanced-configuration/prometheus.md)
## Alertmanager Configuration
The Alertmanager custom resource usually doesn't need to be edited directly. For most common use cases, you can manage alerts by updating Routes and Receivers.
Routes and Receivers are part of the configuration of the Alertmanager custom resource. In the Rancher UI, Routes and Receivers are not true custom resources, but pseudo-custom resources that the Prometheus Operator uses to synchronize your configuration with the Alertmanager custom resource. When Routes and Receivers are updated, the monitoring application automatically updates Alertmanager to reflect those changes.
For some advanced use cases, you may want to configure Alertmanager directly. For more information, refer to [this page.](advanced-configuration/alertmanager.md)
### Receivers
Receivers are used to set up notifications. For details on how to configure receivers, see [this page.](../../../reference-guides/monitoring-v2-configuration/receivers.md)
### Routes
Routes filter notifications before they reach receivers. Each route needs to refer to a receiver that has already been configured. For details on how to configure routes, see [this page.](../../../reference-guides/monitoring-v2-configuration/routes.md)
### Advanced
For more information about directly editing the Alertmanager custom resource, which may be helpful in advanced use cases, see [this page.](advanced-configuration/alertmanager.md)
@@ -0,0 +1,121 @@
---
title: Opening Ports with firewalld
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/open-ports-with-firewalld"/>
</head>
:::danger
Enabling firewalld can cause serious network communication problems.
For proper network function, firewalld must be disabled on systems running RKE2. [Firewalld conflicts with Canal](https://docs.rke2.io/known_issues#firewalld-conflicts-with-default-networking), RKE2's default networking stack.
Firewalld must also be disabled on systems running Kubernetes 1.19 and later.
If you enable firewalld on systems running Kubernetes 1.18 or earlier, understand that this may cause networking issues. CNIs in Kubernetes dynamically update iptables and networking rules independently of any external firewalls, such as firewalld. This can cause unexpected behavior when the CNI and the external firewall conflict.
:::
Some distributions of Linux [derived from RHEL,](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Rebuilds) including Oracle Linux, may have default firewall rules that block communication with Helm.
For example, one Oracle Linux image in AWS has REJECT rules that stop Helm from communicating with Tiller:
```
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
```
You can check the default firewall rules with this command:
```
sudo iptables --list
```
This section describes how to use `firewalld` to apply the [firewall port rules](../../getting-started/installation-and-upgrade/installation-requirements/installation-requirements.md#port-requirements) for nodes in a high-availability Rancher server cluster.
## Prerequisite
Install v7.x or later of `firewalld`:
```
yum install firewalld
systemctl start firewalld
systemctl enable firewalld
```
## Applying Firewall Port Rules
In the Rancher high-availability installation instructions, the Rancher server is set up on three nodes that have all three Kubernetes roles: etcd, controlplane, and worker. If your Rancher server nodes have all three roles, run the following commands on each node:
```
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=2376/tcp
firewall-cmd --permanent --add-port=2379/tcp
firewall-cmd --permanent --add-port=2380/tcp
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --permanent --add-port=9099/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10254/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=30000-32767/udp
```
If your Rancher server nodes have separate roles, use the following commands based on the role of the node:
```
# For etcd nodes, run the following commands:
firewall-cmd --permanent --add-port=2376/tcp
firewall-cmd --permanent --add-port=2379/tcp
firewall-cmd --permanent --add-port=2380/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --permanent --add-port=9099/tcp
firewall-cmd --permanent --add-port=10250/tcp
# For control plane nodes, run the following commands:
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=2376/tcp
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --permanent --add-port=9099/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10254/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=30000-32767/udp
# For worker nodes, run the following commands:
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=2376/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --permanent --add-port=9099/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10254/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=30000-32767/udp
```
After the `firewall-cmd` commands have been run on a node, use the following command to enable the firewall rules:
```
firewall-cmd --reload
```
**Result:** The firewall is updated so that Helm can communicate with the Rancher server nodes.
@@ -0,0 +1,43 @@
---
title: Tuning etcd for Large Installations
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/tune-etcd-for-large-installs"/>
</head>
When Rancher is used to manage [a large infrastructure](../../getting-started/installation-and-upgrade/installation-requirements/installation-requirements.md), it is recommended to increase the etcd keyspace size from its default of 2 GB. The maximum setting is 8 GB, and the host should have enough RAM to keep the entire dataset in memory. When increasing this value, you should also increase the size of the host. The keyspace size can also be adjusted in smaller installations if you anticipate a high rate of change of pods during the garbage collection interval.
The etcd data set is automatically cleaned up by Kubernetes on a five-minute interval. There are situations, such as deployment thrashing, where enough events are written to etcd and deleted before garbage collection cleans things up, causing the keyspace to fill up. If you see `mvcc: database space exceeded` errors in the etcd logs or Kubernetes API server logs, you should consider increasing the keyspace size. This can be accomplished by setting the [quota-backend-bytes](https://etcd.io/docs/v3.4.0/op-guide/maintenance/#space-quota) setting on the etcd servers.
### Example: This snippet of the RKE cluster.yml file increases the keyspace size to 5 GB
```yaml
# RKE cluster.yml
---
services:
etcd:
extra_args:
quota-backend-bytes: 5368709120
```
## Scaling etcd disk performance
You can follow the recommendations from [the etcd docs](https://etcd.io/docs/v3.4.0/tuning/#disk) on how to tune the disk priority on the host.
Additionally, to reduce IO contention on the disks for etcd, you can use a dedicated device for the data and wal directories. Based on etcd best practices, mirrored RAID configurations are unnecessary because etcd replicates data between the nodes in the cluster. You can use striped RAID configurations to increase available IOPS.
To implement this solution in an RKE cluster, the `/var/lib/etcd/data` and `/var/lib/etcd/wal` directories will need to have disks mounted and formatted on the underlying host. In the `extra_args` directive of the `etcd` service, you must include the `wal-dir` directory. Without specifying `wal-dir`, the etcd process will try to manipulate the underlying `wal` mount with insufficient permissions.
```yaml
# RKE cluster.yml
---
services:
etcd:
extra_args:
data-dir: '/var/lib/rancher/etcd/data/'
wal-dir: '/var/lib/rancher/etcd/wal/wal_dir'
extra_binds:
- '/var/lib/etcd/data:/var/lib/rancher/etcd/data'
- '/var/lib/etcd/wal:/var/lib/rancher/etcd/wal'
```
@@ -0,0 +1,66 @@
---
title: Adding Users to Projects
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/add-users-to-projects"/>
</head>
If you want to provide a user with access and permissions to _specific_ projects and resources within a cluster, assign the user a project membership.
You can add members to a project as it is created, or add them to an existing project.
:::tip
Want to provide a user with access to _all_ projects within a cluster? See [Adding Cluster Members](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md) instead.
:::
### Adding Members to a New Project
You can add members to a project as you create it (recommended if possible). For details on creating a new project, refer to the [cluster administration section.](../../how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces.md)
### Adding Members to an Existing Project
Following project creation, you can add users as project members so that they can access its resources.
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to add members to a project and click **Explore**.
1. Click **Cluster > Projects/Namespaces**.
1. Go to the project where you want to add members. Next to the **Create Namespace** button above the project name, click **☰**. Select **Edit Config**.
1. In the **Members** tab, click **Add**.
1. Search for the user or group that you want to add to the project.
If external authentication is configured:
- Rancher returns users from your external authentication source as you type.
    - A drop-down allows you to add groups instead of individual users. The drop-down only lists groups that include you, the logged-in user.
:::note
If you are logged in as a local user, external users do not display in your search results.
:::
1. Assign the user or group **Project** roles.
[What are Project Roles?](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md)
:::note Notes:
- Users assigned the `Owner` or `Member` role for a project automatically inherit the `namespace creation` role. However, this role is a [Kubernetes ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole), meaning its scope extends to all projects in the cluster. Therefore, users explicitly assigned the `Owner` or `Member` role for a project can create or delete namespaces in other projects they're assigned to, even with only the `Read Only` role assigned.
- By default, the Rancher `project-member` role inherits from the Kubernetes `edit` role, and the `project-owner` role inherits from the Kubernetes `admin` role. As such, both the `project-member` and `project-owner` roles allow for namespace management, including the ability to create and delete namespaces.
- For `Custom` roles, you can modify the list of individual roles available for assignment.
- To add roles to the list, [Add a Custom Role](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/custom-roles.md).
- To remove roles from the list, [Lock/Unlock Roles](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/locked-roles.md).
:::
**Result:** The chosen users are added to the project.
- To revoke project membership, select the user and click **Delete**. This action deletes membership, not the user.
- To modify a user's roles in the project, delete them from the project, and then re-add them with modified roles.
@@ -0,0 +1,51 @@
---
title: About Provisioning Drivers
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers"/>
</head>
Drivers in Rancher allow you to manage which providers can be used to deploy [hosted Kubernetes clusters](../../kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/set-up-clusters-from-hosted-kubernetes-providers.md) or [nodes in an infrastructure provider](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md), so that Rancher can deploy and manage Kubernetes.
### Rancher Drivers
With Rancher drivers, you can enable/disable existing built-in drivers that are packaged in Rancher. Alternatively, you can add your own driver if Rancher has not yet implemented it.
There are two types of drivers within Rancher:
* [Cluster Drivers](#cluster-drivers)
* [Node Drivers](#node-drivers)
### Cluster Drivers
Cluster drivers are used to provision [hosted Kubernetes clusters](../../kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/set-up-clusters-from-hosted-kubernetes-providers.md), such as GKE, EKS, and AKS. A cluster driver's status determines whether the driver appears as an option when you create a cluster: only `active` cluster drivers are displayed. By default, Rancher is packaged with several existing cluster drivers, but you can also create custom cluster drivers to add to Rancher.
By default, Rancher has activated drivers for several hosted Kubernetes providers, including:
* [Amazon EKS](../../kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/eks.md)
* [Google GKE](../../kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/gke.md)
* [Azure AKS](../../kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/aks.md)
There are several other hosted Kubernetes cloud providers that are disabled by default, but are packaged in Rancher:
* [Alibaba ACK](../../kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/alibaba.md)
* [Huawei CCE](../../kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/huawei.md)
* [Tencent](../../kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/tencent.md)
### Node Drivers
Node drivers are used to provision hosts, which Rancher uses to launch and manage Kubernetes clusters. A node driver is the same as a [Docker Machine driver](https://docs.docker.com/machine/drivers/). A node driver's status determines whether it appears as an option when you create node templates: only `active` node drivers are displayed. By default, Rancher is packaged with many existing Docker Machine drivers, but you can also create custom node drivers to add to Rancher.
If there are specific node drivers that you don't want to show to your users, deactivate those node drivers.
Rancher supports several major cloud providers. By default, these node drivers are active and available for deployment:
* [Amazon EC2](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md)
* [Azure](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-azure-cluster.md)
* [Digital Ocean](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-digitalocean-cluster.md)
* [vSphere](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/vsphere.md)
There are several other node drivers that are disabled by default, but are packaged in Rancher:
* [Harvester](../../../../integrations-in-rancher/harvester/overview.md#harvester-node-driver), available as of Rancher v2.6.1
@@ -0,0 +1,46 @@
---
title: Cluster Drivers
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-cluster-drivers"/>
</head>
Cluster drivers are used to create clusters in a [hosted Kubernetes provider](../../kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/set-up-clusters-from-hosted-kubernetes-providers.md), such as Google GKE. A cluster driver's status determines whether the driver appears as an option when you create a cluster: only `active` cluster drivers are displayed. By default, Rancher is packaged with several existing cloud provider cluster drivers, but you can also add custom cluster drivers to Rancher.
If there are specific cluster drivers that you do not want to show your users, you may deactivate those cluster drivers within Rancher and they will not appear as an option for cluster creation.
### Managing Cluster Drivers
:::note Prerequisites:
To create, edit, or delete cluster drivers, you need _one_ of the following permissions:
- [Administrator Global Permissions](../manage-role-based-access-control-rbac/global-permissions.md)
- [Custom Global Permissions](../manage-role-based-access-control-rbac/global-permissions.md#custom-global-permissions) with the [Manage Cluster Drivers](../manage-role-based-access-control-rbac/global-permissions.md) role assigned.
:::
## Activating/Deactivating Cluster Drivers
By default, Rancher only activates drivers for the most popular cloud providers: Google GKE, Amazon EKS, and Azure AKS. If you want to show or hide any cluster driver, you can change its status.
1. In the upper left corner, click **☰ > Cluster Management**.
2. In the left navigation menu, click **Drivers**.
3. On the **Cluster Drivers** tab, select the driver that you wish to activate or deactivate and click **⋮ > Activate** or **⋮ > Deactivate**.
## Adding Custom Cluster Drivers
If you want to use a cluster driver that Rancher doesn't support out of the box, you can add the provider's driver and start using it to create _hosted_ Kubernetes clusters.
1. In the upper left corner, click **☰ > Cluster Management**.
1. In the left navigation menu, click **Drivers**.
1. On the **Cluster Drivers** tab, click **Add Cluster Driver**.
1. Complete the **Add Cluster Driver** form. Then click **Create**.
### Developing your own Cluster Driver
To develop a cluster driver to add to Rancher, refer to our [example](https://github.com/rancher-plugins/kontainer-engine-driver-example).
@@ -0,0 +1,61 @@
---
title: Node Drivers
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/manage-node-drivers"/>
</head>
A node driver is the same as a [Docker Machine driver](https://docs.docker.com/machine/drivers/). Node drivers are used to provision hosts, which Rancher uses to launch and manage Kubernetes clusters. By default, Rancher is packaged with many node drivers, but you can also create and add custom node drivers to Rancher.
Only `Active` node drivers are displayed in the Rancher UI when you create node templates. If there are specific node drivers that you don't want to show your users, you must deactivate these node drivers.
## Managing Node Drivers
:::note Prerequisites:
To create, edit, or delete drivers, you need _one_ of the following permissions:
- [Administrator Global Permissions](../manage-role-based-access-control-rbac/global-permissions.md)
- [Custom Global Permissions](../manage-role-based-access-control-rbac/global-permissions.md#custom-global-permissions) with the [Manage Node Drivers](../manage-role-based-access-control-rbac/global-permissions.md) role assigned.
:::
### Activating/Deactivating Node Drivers
By default, Rancher only activates drivers for the most popular cloud providers, such as Amazon EC2, Azure, DigitalOcean, Linode and vSphere. If you want to show or hide any node driver, you can change its status.
1. In the upper left corner, click **☰ > Cluster Management**.
1. In the left navigation menu, click **Drivers**.
1. On the **Node Drivers** tab, select the driver that you wish to activate or deactivate and click **⋮ > Activate** or **⋮ > Deactivate**.
:::danger
You can lose access to clusters after deactivating a node driver.
Deactivating a node driver doesn't just affect its visibility in the Rancher UI. When you deactivate or delete a node driver, any nodes deployed with that driver become inaccessible.
For example, if you deactivate a vSphere node driver to hide it in the UI, and you have a vSphere cluster that was deployed with that driver, the initial node in the cluster will fail, and the entire cluster will become inaccessible. Attempts to delete the vSphere nodes will fail, with nodes stuck in an extended `Removing` state.
Before you deactivate a node driver, make sure that it has no associated clusters. One way to check is to see if the respective platform for a driver is listed among your clusters:
1. In the upper left corner, click **☰ > Cluster Management**.
1. Select **Clusters**.
1. Check the **Provider** column of the table for instances of the node driver you are deactivating.
:::
### Adding Custom Node Drivers
If you want to use a node driver that Rancher doesn't support out of the box, you can add that provider's driver and start using it to create node templates and, eventually, node pools for your Kubernetes cluster.
1. In the upper left corner, click **☰ > Cluster Management**.
1. In the left navigation menu, click **Drivers**.
1. On the **Node Drivers** tab, click **Add Node Driver**.
1. Complete the **Add Node Driver** form. Then click **Create**.
### Developing Your Own Node Drivers
Node drivers are implemented with [Rancher Machine](https://github.com/rancher/machine), a fork of [Docker Machine](https://github.com/docker/machine). Docker Machine is no longer under active development.
Refer to the original [Docker Machine documentation](https://github.com/docker/docs/blob/vnext-engine/machine/overview.md) for details on how to develop your own node drivers.
@@ -0,0 +1,130 @@
---
title: About RKE1 Templates
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates"/>
</head>
RKE templates are designed to allow DevOps and security teams to standardize and simplify the creation of Kubernetes clusters.
RKE is the [Rancher Kubernetes Engine,](https://rancher.com/docs/rke/latest/en/) which is the tool that Rancher uses to provision Kubernetes clusters.
With Kubernetes increasing in popularity, there is a trend toward managing a larger number of smaller clusters. When you want to create many clusters, it's more important to manage them consistently. Multi-cluster management brings challenges in enforcing security and add-on configurations, which need to be standardized before turning clusters over to end users.
RKE templates help standardize these configurations. Regardless of whether clusters are created with the Rancher UI, the Rancher API, or an automated process, Rancher will guarantee that every cluster it provisions from an RKE template is uniform and consistent in the way it is produced.
Admins control which cluster options can be changed by end users. RKE templates can also be shared with specific users and groups, so that admins can create different RKE templates for different sets of users.
If a cluster was created with an RKE template, you can't change it to a different RKE template. You can only update the cluster to a new revision of the same template.
You can [save the configuration of an existing cluster as an RKE template.](apply-templates.md#converting-an-existing-cluster-to-use-an-rke-template) Then the cluster's settings can only be changed if the template is updated. The new template can also be used to launch new clusters.
The core features of RKE templates allow DevOps and security teams to:
- Standardize cluster configuration and ensure that Rancher-provisioned clusters are created following best practices
- Prevent less technical users from making uninformed choices when provisioning clusters
- Share different templates with different sets of users and groups
- Delegate ownership of templates to users who are trusted to make changes to them
- Control which users can create templates
- Require users to create clusters from a template
## Configurable Settings
RKE templates can be created in the Rancher UI or defined in YAML format. They can define all the same parameters that can be specified when you use Rancher to provision custom nodes or nodes from an infrastructure provider:
- Cloud provider options
- Pod security options
- Network providers
- Ingress controllers
- Network security configuration
- Network plugins
- Private registry URL and credentials
- Add-ons
- Kubernetes options, including configurations for Kubernetes components such as kube-api, kube-controller, kubelet, and services
The [add-on section](#add-ons) of an RKE template is especially powerful because it allows a wide range of customization options.
## Scope of RKE Templates
RKE templates are supported for Rancher-provisioned clusters. The templates can be used to provision custom clusters or clusters that are launched by an infrastructure provider.
RKE templates are for defining Kubernetes and Rancher settings. Node templates are responsible for configuring nodes. For tips on how to use RKE templates in conjunction with hardware, refer to [RKE Templates and Hardware](infrastructure.md).
RKE templates can be created from scratch to pre-define cluster configuration. They can be applied to launch new clusters, or templates can also be exported from existing running clusters.
The settings of an existing cluster can be [saved as an RKE template.](apply-templates.md#converting-an-existing-cluster-to-use-an-rke-template) This creates a new template and binds the cluster settings to the template, so that the cluster can only be upgraded if the [template is updated](manage-rke1-templates.md#updating-a-template), and the cluster is upgraded to [use a newer version of the template.](manage-rke1-templates.md#upgrading-a-cluster-to-use-a-new-template-revision) The new template can also be used to create new clusters.
## Example Scenarios
When an organization has both basic and advanced Rancher users, administrators might want to give the advanced users more options for cluster creation, while restricting the options for basic users.
These [example scenarios](example-use-cases.md) describe how an organization could use templates to standardize cluster creation.
Some of the example scenarios include the following:
- **Enforcing templates:** Administrators might want to [enforce one or more template settings for everyone](example-use-cases.md#enforcing-a-template-setting-for-everyone) if they want all new Rancher-provisioned clusters to have those settings.
- **Sharing different templates with different users:** Administrators might give [different templates to basic and advanced users,](example-use-cases.md#templates-for-basic-and-advanced-users) so that basic users can have more restricted options and advanced users can use more discretion when creating clusters.
- **Updating template settings:** If an organization's security and DevOps teams decide to embed best practices into the required settings for new clusters, those best practices could change over time. If the best practices change, [a template can be updated to a new revision](example-use-cases.md#updating-templates-and-clusters-created-with-them) and clusters created from the template can [upgrade to the new version](manage-rke1-templates.md#upgrading-a-cluster-to-use-a-new-template-revision) of the template.
- **Sharing ownership of a template:** When a template owner no longer wants to maintain a template, or wants to share ownership of the template, this scenario describes how [template ownership can be shared.](example-use-cases.md#allowing-other-users-to-control-and-share-a-template)
## Template Management
When you create an RKE template, it is available in the Rancher UI from the **Cluster Management** view under **RKE Templates**. When you create a template, you become the template owner, which gives you permission to revise and share the template. You can share the RKE templates with specific users or groups, and you can also make it public.
Administrators can turn on template enforcement to require users to always use RKE templates when creating a cluster. This allows administrators to guarantee that Rancher always provisions clusters with specific settings.
RKE template updates are handled through a revision system. If you want to change or update a template, you create a new revision of the template. Then a cluster that was created with the older version of the template can be upgraded to the new template revision.
In an RKE template, settings can be restricted to what the template owner chooses, or they can be open for the end user to select the value. The difference is indicated by the **Allow User Override** toggle over each setting in the Rancher UI when the template is created.
For the settings that cannot be overridden, the end user will not be able to directly edit them. For a user to get different options for these settings, an RKE template owner must create a new revision of the RKE template, which allows the user to upgrade and change that option.
The documents in this section explain the details of RKE template management:
- [Getting permission to create templates](creator-permissions.md)
- [Creating and revising templates](manage-rke1-templates.md)
- [Enforcing template settings](enforce-templates.md#requiring-new-clusters-to-use-an-rke-template)
- [Overriding template settings](override-template-settings.md)
- [Sharing templates with cluster creators](access-or-share-templates.md#sharing-templates-with-specific-users-or-groups)
- [Sharing ownership of a template](access-or-share-templates.md#sharing-ownership-of-templates)
An [example YAML configuration file for a template](../../../../reference-guides/rke1-template-example-yaml.md) is provided for reference.
## Applying Templates
You can [create a cluster from a template](apply-templates.md#creating-a-cluster-from-an-rke-template) that you created, or from a template that has been [shared with you.](access-or-share-templates.md)
If the RKE template owner creates a new revision of the template, you can [upgrade your cluster to that revision.](apply-templates.md#updating-a-cluster-created-with-an-rke-template)
RKE templates can be created from scratch to pre-define cluster configuration. They can be applied to launch new clusters, or templates can also be exported from existing running clusters.
You can [save the configuration of an existing cluster as an RKE template.](apply-templates.md#converting-an-existing-cluster-to-use-an-rke-template) Then the cluster's settings can only be changed if the template is updated.
## Standardizing Hardware
RKE templates are designed to standardize Kubernetes and Rancher settings. If you want to standardize your infrastructure as well, one option is to use RKE templates [in conjunction with other tools](infrastructure.md).
Another option is to use [cluster templates,](../../manage-clusters/manage-cluster-templates.md) which include node pool configuration options, but don't provide configuration enforcement.
## YAML Customization
If you define an RKE template as a YAML file, you can modify this [example RKE template YAML](../../../../reference-guides/rke1-template-example-yaml.md). The YAML in the RKE template uses the same customization that Rancher uses when creating an RKE cluster, but since the YAML is located within the context of a Rancher provisioned cluster, you will need to nest the RKE template customization under the `rancher_kubernetes_engine_config` directive in the YAML.
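As a hedged illustration, the skeleton below shows how that nesting might look; the field values are examples, not required settings:

```yaml
# Sketch of an RKE template YAML with illustrative values.
# Rancher-level options sit at the top level of the template, while
# RKE cluster options are nested under rancher_kubernetes_engine_config.
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
rancher_kubernetes_engine_config:
  kubernetes_version: v1.21.5-rancher1-1 # hypothetical version string
  network:
    plugin: canal
  ingress:
    provider: nginx
  services:
    kube-api:
      service_node_port_range: '30000-32767'
```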
The RKE documentation also has [annotated](https://rancher.com/docs/rke/latest/en/example-yamls/) `cluster.yml` files that you can use for reference.
For guidance on available options, refer to the RKE documentation on [cluster configuration.](https://rancher.com/docs/rke/latest/en/config-options/)
### Add-ons
The add-on section of the RKE template configuration file works the same way as the [add-on section of a cluster configuration file](https://rancher.com/docs/rke/latest/en/config-options/add-ons/).
The user-defined add-ons directive allows you to either reference Kubernetes manifests to pull down or include them inline directly. If you include these manifests as part of your RKE template, Rancher will provision them in the cluster.
Some things you could do with add-ons include:
- Install applications on the Kubernetes cluster after it starts
- Install plugins on nodes that are deployed with a Kubernetes daemonset
- Automatically set up namespaces, service accounts, or role bindings
The RKE template configuration must be nested within the `rancher_kubernetes_engine_config` directive. To set add-ons when creating the template, click **Edit as YAML**, then use the `addons` directive to add a manifest inline, or the `addons_include` directive to set which YAML files are used for the add-ons. For more information on custom add-ons, refer to the [user-defined add-ons documentation.](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/)
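For illustration, a minimal inline add-on in a template might look like the sketch below; the namespace manifest and the included URL are hypothetical:

```yaml
rancher_kubernetes_engine_config:
  # Inline manifests under `addons` are applied after the cluster starts.
  addons: |-
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-tools # hypothetical namespace
  # `addons_include` pulls manifests from URLs or file paths instead.
  addons_include:
    - https://example.com/manifests/service-account.yaml
```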
@@ -0,0 +1,68 @@
---
title: Access and Sharing
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/access-or-share-templates"/>
</head>
If you are an RKE template owner, you can share it with users or groups of users, who can then use the template to create clusters.
Since RKE templates are specifically shared with users and groups, owners can share different RKE templates with different sets of users.
When you share a template, each user can have one of two access levels:
- **Owner:** This user can update, delete, and share the templates that they own. The owner can also share the template with other users.
- **User:** These users can create clusters using the template. They can also upgrade those clusters to new revisions of the same template. When you share a template as **Make Public (read-only),** all users in your Rancher setup have the User access level for the template.
If you create a template, you automatically become an owner of that template.
If you want to delegate responsibility for updating the template, you can share ownership of the template. For details on how owners can modify templates, refer to the [documentation about revising templates.](manage-rke1-templates.md)
There are several ways to share templates:
- Add users to a new RKE template during template creation
- Add users to an existing RKE template
- Make the RKE template public, sharing it with all users in the Rancher setup
- Share template ownership with users who are trusted to modify the template
### Sharing Templates with Specific Users or Groups
To allow users or groups to create clusters using your template, you can give them the basic **User** access level for the template.
1. In the upper left corner, click **☰ > Cluster Management**.
1. Under **RKE1 configuration**, click **RKE Templates**.
1. Go to the template that you want to share and click the **⋮ > Edit**.
1. In the **Share Template** section, click on **Add Member**.
1. Search in the **Name** field for the user or group you want to share the template with.
1. Choose the **User** access type.
1. Click **Save**.
**Result:** The user or group can create clusters using the template.
### Sharing Templates with All Users
1. In the upper left corner, click **☰ > Cluster Management**.
1. In the left navigation menu, click **RKE1 Configuration > RKE Templates**.
1. Go to the template that you want to share and click the **⋮ > Edit**.
1. Under **Share Template,** check the box for **Make Public (read-only)**.
1. Click **Save**.
**Result:** All users in the Rancher setup can create clusters using the template.
### Sharing Ownership of Templates
If you are the creator of a template, you might want to delegate responsibility for maintaining and updating a template to another user or group.
In that case, you can give users the Owner access type, which allows another user to update your template, delete it, or share access to it with other users.
To give Owner access to a user or group,
1. In the upper left corner, click **☰ > Cluster Management**.
1. Under **RKE1 configuration**, click **RKE Templates**.
1. Go to the RKE template that you want to share and click the **⋮ > Edit**.
1. Under **Share Template**, click on **Add Member** and search in the **Name** field for the user or group you want to share the template with.
1. In the **Access Type** field, click **Owner**.
1. Click **Save**.
**Result:** The user or group has the Owner access type, and can modify, share, or delete the template.
@@ -0,0 +1,63 @@
---
title: Applying Templates
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/apply-templates"/>
</head>
You can create a cluster from an RKE template that you created, or from a template that has been [shared with you.](access-or-share-templates.md)
RKE templates can be applied to new clusters.
You can [save the configuration of an existing cluster as an RKE template.](#converting-an-existing-cluster-to-use-an-rke-template) Then the cluster's settings can only be changed if the template is updated.
You can't change a cluster to use a different RKE template. You can only update the cluster to a new revision of the same template.
### Creating a Cluster from an RKE Template
To add a cluster [hosted by an infrastructure provider](../../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) using an RKE template, use these steps:
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, click **Create** and choose the infrastructure provider.
1. Provide the cluster name and node template details as usual.
1. To use an RKE template, under the **Cluster Options**, check the box for **Use an existing RKE template and revision**.
1. Choose an RKE template and revision from the dropdown menu.
1. Optional: You can edit any settings that the RKE template owner marked as **Allow User Override** when the template was created. If there are settings that you want to change, but don't have the option to, you will need to contact the template owner to get a new revision of the template. Then you will need to edit the cluster to upgrade it to the new revision.
1. Click **Create** to launch the cluster.
### Updating a Cluster Created with an RKE Template
When the template owner creates a template, each setting has a switch in the Rancher UI that indicates if users can override the setting.
- If the setting allows a user override, you can update these settings in the cluster by [editing the cluster.](../../../../reference-guides/cluster-configuration/cluster-configuration.md)
- If the switch is turned off, you cannot change these settings unless the cluster owner creates a template revision that lets you override them. If there are settings that you want to change, but don't have the option to, you will need to contact the template owner to get a new revision of the template.
If a cluster was created from an RKE template, you can edit the cluster to update the cluster to a new revision of the template.
An existing cluster's settings can be [saved as an RKE template.](#converting-an-existing-cluster-to-use-an-rke-template) In that situation, you can also edit the cluster to update the cluster to a new revision of the template.
:::note
You can't change the cluster to use a different RKE template. You can only update the cluster to a new revision of the same template.
:::
### Converting an Existing Cluster to Use an RKE Template
This section describes how to create an RKE template from an existing cluster.
RKE templates cannot be applied to existing clusters, except if you save an existing cluster's settings as an RKE template. This exports the cluster's settings as a new RKE template, and also binds the cluster to that template. The result is that the cluster can only be changed if the [template is updated,](manage-rke1-templates.md#updating-a-template) and the cluster is upgraded to [use a newer version of the template.](manage-rke1-templates.md#upgrading-a-cluster-to-use-a-new-template-revision)
To convert an existing cluster to use an RKE template,
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster that will be converted to use an RKE template. Click **⋮ > Save as RKE Template**.
1. Enter a name for the template in the form that appears, and click **Create**.
**Results:**
- A new RKE template is created.
- The cluster is converted to use the new template.
- New clusters can be [created from the new template.](apply-templates.md#creating-a-cluster-from-an-rke-template)
@@ -0,0 +1,61 @@
---
title: Template Creator Permissions
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/creator-permissions"/>
</head>
Administrators have the permission to create RKE templates, and only administrators can give that permission to other users.
For more information on administrator permissions, refer to the [documentation on global permissions](../manage-role-based-access-control-rbac/global-permissions.md).
## Giving Users Permission to Create Templates
Templates can only be created by users who have the global permission **Create RKE Templates**.
Administrators have the global permission to create templates, and only administrators can give that permission to other users.
For information on allowing users to modify existing templates, refer to [Sharing Templates.](access-or-share-templates.md)
Administrators can give users permission to create RKE templates in two ways:
- By editing the permissions of an [individual user](#allowing-a-user-to-create-templates)
- By changing the [default permissions of new users](#allowing-new-users-to-create-templates-by-default)
### Allowing a User to Create Templates
An administrator can individually grant the role **Create RKE Templates** to any existing user by following these steps:
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Users**.
1. Choose the user you want to edit and click **⋮ > Edit Config**.
1. In the **Built-in** section, check the box for the **Create new RKE Cluster Templates** role, along with any other roles the user should have. You may also want to check the box for **Create RKE Template Revisions**.
1. Click **Save**.
**Result:** The user has permission to create RKE templates.
### Allowing New Users to Create Templates by Default
Alternatively, the administrator can give all new users the default permission to create RKE templates by following these steps. This will not affect the permissions of existing users.
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Role Templates**.
1. Select **Create new RKE Cluster Templates** and click **⋮ > Edit Config**.
1. Select the option **Yes: Default role for new users**.
1. Click **Save**.
1. If you would like new users to also be able to create RKE template revisions, enable that role as default as well.
**Result:** Any new user created in this Rancher installation will be able to create RKE templates. Existing users will not get this permission.
### Revoking Permission to Create Templates
Administrators can remove a user's permission to create templates with the following steps. Note: Administrators have full control over all resources regardless of whether fine-grained permissions are selected.
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Users**.
1. Choose the user you want to edit permissions for and click **⋮ > Edit Config**.
1. In the **Built-in** section, uncheck the boxes for **Create RKE Templates** and **Create RKE Template Revisions,** if applicable. In this section, you can change the user back to a standard user, or give the user a different set of permissions.
1. Click **Save**.
**Result:** The user cannot create RKE templates.
@@ -0,0 +1,47 @@
---
title: Enforcing Templates
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/enforce-templates"/>
</head>
This section describes how template administrators can enforce templates in Rancher, restricting the ability of users to create clusters without a template.
By default, any standard user in Rancher can create clusters. But when RKE template enforcement is turned on,
- Only an administrator has the ability to create clusters without a template.
- All standard users must use an RKE template to create a new cluster; they cannot create a cluster without one.
Users can only create new templates if the administrator [gives them permission.](creator-permissions.md#allowing-a-user-to-create-templates)
After a cluster is created with an RKE template, the cluster creator cannot edit settings that are defined in the template. The only way to change those settings after the cluster is created is to [upgrade the cluster to a new revision](apply-templates.md#updating-a-cluster-created-with-an-rke-template) of the same template. If cluster creators want to change template-defined settings, they would need to contact the template owner to get a new revision of the template. For details on how template revisions work, refer to the [documentation on revising templates.](manage-rke1-templates.md#updating-a-template)
## Requiring New Clusters to Use an RKE Template
You might want to require new clusters to use a template to ensure that any cluster launched by a [standard user](../manage-role-based-access-control-rbac/global-permissions.md) will use the Kubernetes and/or Rancher settings that are vetted by administrators.
To require new clusters to use an RKE template, administrators can turn on RKE template enforcement with the following steps:
1. Click **☰ > Global Settings**.
1. Go to the `cluster-template-enforcement` setting. Click **⋮ > Edit Setting**.
1. Set the value to **True** and click **Save**.
:::note Important:
When the admin sets `cluster-template-enforcement` to **True**, they also need to share templates with users so that users can select one of those templates when creating a cluster.
:::
**Result:** All clusters provisioned by Rancher must use a template, unless the creator is an administrator.
## Disabling RKE Template Enforcement
To allow new clusters to be created without an RKE template, administrators can turn off RKE template enforcement with the following steps:
1. Click **☰ > Global Settings**.
1. Go to the `cluster-template-enforcement` setting. Click **⋮ > Edit Setting**.
1. Set the value to **False** and click **Save**.
**Result:** When clusters are provisioned by Rancher, they don't need to use a template.
@@ -0,0 +1,74 @@
---
title: Example Scenarios
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/example-use-cases"/>
</head>
These example scenarios describe how an organization could use templates to standardize cluster creation.
- **Enforcing templates:** Administrators might want to [enforce one or more template settings for everyone](#enforcing-a-template-setting-for-everyone) if they want all new Rancher-provisioned clusters to have those settings.
- **Sharing different templates with different users:** Administrators might give [different templates to basic and advanced users,](#templates-for-basic-and-advanced-users) so that basic users have more restricted options and advanced users have more discretion when creating clusters.
- **Updating template settings:** If an organization's security and DevOps teams decide to embed best practices into the required settings for new clusters, those best practices could change over time. If the best practices change, [a template can be updated to a new revision](#updating-templates-and-clusters-created-with-them) and clusters created from the template can upgrade to the new version of the template.
- **Sharing ownership of a template:** When a template owner no longer wants to maintain a template, or wants to delegate ownership of the template, this scenario describes how [template ownership can be shared.](#allowing-other-users-to-control-and-share-a-template)
## Enforcing a Template Setting for Everyone
Let's say there is an organization in which the administrators decide that all new clusters should be created with Kubernetes version 1.14.
1. First, an administrator creates a template which specifies the Kubernetes version as 1.14 and marks all other settings as **Allow User Override**.
1. The administrator makes the template public.
1. The administrator turns on template enforcement.
**Results:**
- All Rancher users in the organization have access to the template.
- All new clusters created by [standard users](../manage-role-based-access-control-rbac/global-permissions.md) with this template will use Kubernetes 1.14 and they are unable to use a different Kubernetes version. By default, standard users don't have permission to create templates, so this template will be the only template they can use unless more templates are shared with them.
- All standard users must use a cluster template to create a new cluster. They cannot create a cluster without using a template.
In this way, the administrators enforce the Kubernetes version across the organization, while still allowing end users to configure everything else.
## Templates for Basic and Advanced Users
Let's say an organization has both basic and advanced users. Administrators want the basic users to be required to use a template, while the advanced users and administrators create their clusters however they want.
1. First, an administrator turns on [RKE template enforcement.](enforce-templates.md#requiring-new-clusters-to-use-an-rke-template) This means that every [standard user](../manage-role-based-access-control-rbac/global-permissions.md) in Rancher will need to use an RKE template when they create a cluster.
1. The administrator then creates two templates:
- One template for basic users, with almost every option specified except for access keys
- One template for advanced users, with **Allow User Override** turned on for most or all options
1. The administrator shares the advanced template with only the advanced users.
1. The administrator makes the template for basic users public, so the more restrictive template is an option for everyone who creates a Rancher-provisioned cluster.
**Result:** All Rancher users, except for administrators, are required to use a template when creating a cluster. Everyone has access to the restrictive template, but only advanced users have permission to use the more permissive template. The basic users are more restricted, while advanced users have more freedom when configuring their Kubernetes clusters.
## Updating Templates and Clusters Created with Them
Let's say an organization has a template that requires clusters to use Kubernetes v1.14. However, as time goes on, the administrators change their minds. They decide they want users to be able to upgrade their clusters to use newer versions of Kubernetes.
In this organization, many clusters were created with a template that requires Kubernetes v1.14. Because the template does not allow that setting to be overridden, the users who created the cluster cannot directly edit that setting.
The template owner has several options for allowing the cluster creators to upgrade Kubernetes on their clusters:
- **Specify Kubernetes v1.15 on the template:** The template owner can create a new template revision that specifies Kubernetes v1.15. Then the owner of each cluster that uses that template can upgrade their cluster to a new revision of the template. This template upgrade allows the cluster creator to upgrade Kubernetes to v1.15 on their cluster.
- **Allow any Kubernetes version on the template:** When creating a template revision, the template owner can also mark the Kubernetes version as **Allow User Override** using the switch near that setting in the Rancher UI. This will allow clusters that upgrade to this template revision to use any version of Kubernetes.
- **Allow the latest minor Kubernetes version on the template:** The template owner can also create a template revision in which the Kubernetes version is defined as **Latest v1.14 (Allows patch version upgrades)**. This means clusters that use that revision will be able to get patch version upgrades, but major version upgrades will not be allowed.
## Allowing Other Users to Control and Share a Template
Let's say Alice is a Rancher administrator. She owns an RKE template that reflects her organization's agreed-upon best practices for creating a cluster.
Bob is an advanced user who can make informed decisions about cluster configuration. Alice trusts Bob to create new revisions of her template as the best practices get updated over time. Therefore, she decides to make Bob an owner of the template.
To share ownership of the template with Bob, Alice [adds Bob as an owner of her template.](access-or-share-templates.md#sharing-ownership-of-templates)
The result is that as a template owner, Bob is in charge of version control for that template. Bob can now do all of the following:
- [Revise the template](manage-rke1-templates.md#updating-a-template) when the best practices change
- [Disable outdated revisions](manage-rke1-templates.md#disabling-a-template-revision) of the template so that no new clusters can be created with it
- [Delete the whole template](manage-rke1-templates.md#deleting-a-template) if the organization wants to go in a different direction
- [Set a certain revision as default](manage-rke1-templates.md#setting-a-template-revision-as-default) when users create a cluster with it. End users of the template will still be able to choose which revision they want to create the cluster with.
- [Share the template](access-or-share-templates.md) with specific users, make the template available to all Rancher users, or share ownership of the template with another user.
@@ -0,0 +1,70 @@
---
title: RKE Templates and Infrastructure
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/infrastructure"/>
</head>
In Rancher, RKE templates are used to provision Kubernetes and define Rancher settings, while node templates are used to provision nodes.
Therefore, even if RKE template enforcement is turned on, the end user still has flexibility when picking the underlying hardware when creating a Rancher cluster. The end users of an RKE template can still choose an infrastructure provider and the nodes they want to use.
If you want to standardize the hardware in your clusters, use RKE templates in conjunction with node templates or with a server provisioning tool such as Terraform.
### Node Templates
[Node templates](../../../../reference-guides/user-settings/manage-node-templates.md) are responsible for node configuration and node provisioning in Rancher. From your user profile, you can set up node templates to define which templates are used in each of your node pools. With node pools enabled, you can make sure you have the required number of nodes in each node pool, and ensure that all nodes in the pool are the same.
### Terraform
Terraform is a server provisioning tool. It uses an infrastructure-as-code approach that lets you create almost every aspect of your infrastructure with Terraform configuration files. It can automate the process of server provisioning in a way that is self-documenting and easy to track in version control.
This section focuses on how to use Terraform with the [Rancher 2 Terraform provider](https://www.terraform.io/docs/providers/rancher2/), which is a recommended option to standardize the hardware for your Kubernetes clusters. If you use the Rancher Terraform provider to provision hardware, and then use an RKE template to provision a Kubernetes cluster on that hardware, you can quickly create a comprehensive, production-ready cluster.
Terraform allows you to:
- Define almost any kind of infrastructure-as-code, including servers, databases, load balancers, monitoring, firewall settings, and SSL certificates
- Codify infrastructure across many platforms, including Rancher and major cloud providers
- Commit infrastructure-as-code to version control
- Easily repeat configuration and setup of infrastructure
- Incorporate infrastructure changes into standard development practices
- Prevent configuration drift, in which some servers become configured differently than others
## How Does Terraform Work?
Terraform configuration is stored in files with the extension `.tf`. It is written in HashiCorp Configuration Language, a declarative language that lets you define the infrastructure you want in your cluster, the cloud provider you are using, and your credentials for the provider. Terraform then makes API calls to the provider in order to efficiently create that infrastructure.
To create a Rancher-provisioned cluster with Terraform, go to your Terraform configuration file and define the provider as Rancher 2. You can set up your Rancher 2 provider with a Rancher API key. Note: The API key has the same permissions and access level as the user it is associated with.
Then Terraform calls the Rancher API to provision your infrastructure, and Rancher calls the infrastructure provider. As an example, if you wanted to use Rancher to provision infrastructure on AWS, you would provide both your Rancher API key and your AWS credentials in the Terraform configuration file or in environment variables so that they could be used to provision the infrastructure.
When you need to make changes to your infrastructure, instead of manually updating the servers, you can make changes in the Terraform configuration files. Those files can be committed to version control, validated, and reviewed as necessary. When you run `terraform apply`, the changes are deployed.
## Tips for Working with Terraform
- There are examples of how to provide most aspects of a cluster in the [documentation for the Rancher 2 provider.](https://www.terraform.io/docs/providers/rancher2/)
- In the Terraform settings, you can install Docker Machine by using the Docker Machine node driver.
- You can also modify auth in the Terraform provider.
- You can reverse engineer how to define a setting in Terraform by changing the setting in Rancher, then checking your Terraform state file to see how it maps to the current state of your infrastructure.
## Tip for Creating CIS Benchmark Compliant Clusters
This section describes one way that you can make security and compliance-related config files standard in your clusters.
When you create a [CIS benchmark compliant cluster,](../../../../reference-guides/rancher-security/rancher-security.md) you have an encryption config file and an audit log config file.
Your infrastructure provisioning system can write those files to disk. Then, in your RKE template, specify where those files are located, and add your encryption config file and audit log config file as extra mounts for the `kube-api-server`.
Then make sure that the `kube-api-server` flags in your RKE template point to your CIS-compliant config files.
In this way, you can configure `kube-api-server` flags that comply with the CIS benchmark.
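As an illustrative sketch only, the relevant fragment of an RKE template could resemble the following; the file paths are assumptions that depend on where your provisioning tooling writes the config files:

```yaml
rancher_kubernetes_engine_config:
  services:
    kube-api:
      extra_args:
        # Point kube-apiserver at the CIS-related config files
        # (paths are hypothetical).
        encryption-provider-config: /opt/kubernetes/encryption.yaml
        audit-policy-file: /opt/kubernetes/audit-policy.yaml
        audit-log-path: /var/log/kube-audit/audit-log.json
      extra_binds:
        # Mount the host directories into the kube-apiserver container.
        - '/opt/kubernetes:/opt/kubernetes'
        - '/var/log/kube-audit:/var/log/kube-audit'
```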
## Resources
- [Terraform documentation](https://www.terraform.io/docs/)
- [Rancher2 Terraform provider documentation](https://www.terraform.io/docs/providers/rancher2/)
- [The RanchCast - Episode 1: Rancher 2 Terraform Provider](https://youtu.be/YNCq-prI8-8): In this demo, Director of Community Jason van Brackel walks through using the Rancher 2 Terraform Provider to provision nodes and create a custom cluster.
@@ -0,0 +1,165 @@
---
title: Creating and Revising RKE Templates
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/manage-rke1-templates"/>
</head>
This section describes how to manage RKE templates and revisions. You can create, share, update, and delete templates from the **Cluster Management** view under **RKE1 Configuration > RKE Templates**.
Template updates are handled through a revision system. When template owners want to change or update a template, they create a new revision of the template. Individual revisions cannot be edited. However, if you want to prevent a revision from being used to create a new cluster, you can disable it.
Template revisions can be used in two ways: to create a new cluster, or to upgrade a cluster that was created with an earlier version of the template. The template creator can choose a default revision, but when end users create a cluster, they can choose any template and any template revision that is available to them. After the cluster is created from a specific revision, it cannot change to another template, but the cluster can be upgraded to a newer available revision of the same template.
The template owner has full control over template revisions, and can create new revisions to update the template, delete or disable revisions that should not be used to create clusters, and choose which template revision is the default.
### Prerequisites
You can create RKE templates if you have the **Create RKE Templates** permission, which can be [given by an administrator.](creator-permissions.md)
You can revise, share, and delete a template if you are an owner of the template. For details on how to become an owner of a template, refer to [the documentation on sharing template ownership.](access-or-share-templates.md#sharing-ownership-of-templates)
### Creating a Template
1. In the upper left corner, click **☰ > Cluster Management**.
1. Click **RKE1 Configuration > RKE Templates**.
1. Click **Add Template**.
1. Provide a name for the template. An auto-generated name is already provided for the template's first revision, which is created along with the template.
1. Optional: Share the template with other users or groups by [adding them as members.](access-or-share-templates.md#sharing-templates-with-specific-users-or-groups) You can also make the template public to share with everyone in the Rancher setup.
1. Then follow the form on screen to save the cluster configuration parameters as part of the template's revision. The revision can be marked as default for this template.
**Result:** An RKE template with one revision is configured. You can use this RKE template revision later when you [provision a Rancher-launched cluster](../../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md). After a cluster is managed by an RKE template, it cannot be disconnected and the option to uncheck **Use an existing RKE Template and Revision** will be unavailable.
### Updating a Template
When you update an RKE template, you are creating a revision of the existing template. Clusters that were created with an older version of the template can be updated to match the new revision.
You can't edit individual revisions. To prevent a revision from being used to create a new cluster, you can [disable it.](#disabling-a-template-revision)
When new template revisions are created, clusters using an older revision of the template are unaffected.
1. In the upper left corner, click **☰ > Cluster Management**.
1. In the left navigation menu, click **RKE1 Configuration > RKE Templates**.
1. Go to the template that you want to edit and click the **⋮ > Edit**.
1. Edit the required information and click **Save**.
1. Optional: You can change the default revision of this template and also change who it is shared with.
**Result:** The template is updated. To apply it to a cluster using an older version of the template, refer to the section on [upgrading a cluster to use a new revision of a template.](#upgrading-a-cluster-to-use-a-new-template-revision)
### Deleting a Template
When you no longer use an RKE template for any of your clusters, you can delete it.
1. In the upper left corner, click **☰ > Cluster Management**.
1. Click **RKE1 configuration > RKE Templates**.
1. Go to the RKE template that you want to delete and click the **⋮ > Delete**.
1. Confirm the deletion.
**Result:** The template is deleted.
### Creating a Revision Based on the Default Revision
You can clone the default template revision and quickly update its settings rather than creating a new revision from scratch. Cloning templates saves you the hassle of re-entering the access keys and other parameters needed for cluster creation.
1. In the upper left corner, click **☰ > Cluster Management**.
1. In the left navigation menu, click **RKE1 Configuration > RKE Templates**.
1. Go to the RKE template that you want to clone and click the **⋮ > New Revision from Default**.
1. Complete the rest of the form to create a new revision.
**Result:** The RKE template revision is cloned and configured.
### Creating a Revision Based on a Cloned Revision
When creating new RKE template revisions from your user settings, you can clone an existing revision and quickly update its settings rather than creating a new one from scratch. Cloning template revisions saves you the hassle of re-entering the cluster parameters.
1. In the upper left corner, click **☰ > Cluster Management**.
1. Under **RKE1 configuration**, click **RKE Templates**.
1. Go to the template revision you want to clone. Then select **⋮ > Clone Revision**.
1. Complete the rest of the form.
**Result:** The RKE template revision is cloned and configured. You can use the RKE template revision later when you provision a cluster. Any existing cluster using this RKE template can be upgraded to this new revision.
### Disabling a Template Revision
When you no longer want an RKE template revision to be used for creating new clusters, you can disable it. A disabled revision can be re-enabled.
You can disable the revision if it is not being used by any cluster.
1. In the upper left corner, click **☰ > Cluster Management**.
1. In the left navigation menu, click **RKE1 Configuration > RKE Templates**.
1. Go to the template revision you want to disable. Then select **⋮ > Disable**.
**Result:** The RKE template revision cannot be used to create a new cluster.
### Re-enabling a Disabled Template Revision
If you decide that a disabled RKE template revision should be used to create new clusters, you can re-enable it.
1. In the upper left corner, click **☰ > Cluster Management**.
1. Under **RKE1 configuration**, click **RKE Templates**.
1. Go to the template revision you want to re-enable. Then select **⋮ > Enable**.
**Result:** The RKE template revision can be used to create a new cluster.
### Setting a Template Revision as Default
When end users create a cluster using an RKE template, they can choose which revision to create the cluster with. You can configure which revision is used by default.
To set an RKE template revision as default,
1. In the upper left corner, click **☰ > Cluster Management**.
1. In the left navigation menu, click **RKE1 Configuration > RKE Templates**.
1. Go to the RKE template revision that should be default and click the **⋮ > Set as Default**.
**Result:** The RKE template revision will be used as the default option when clusters are created with the template.
### Deleting a Template Revision
You can delete all revisions of a template except for the default revision.
To permanently delete a revision,
1. In the upper left corner, click **☰ > Cluster Management**.
1. In the left navigation menu, click **RKE1 Configuration > RKE Templates**.
1. Go to the RKE template revision that should be deleted and click the **⋮ > Delete**.
**Result:** The RKE template revision is deleted.
### Upgrading a Cluster to Use a New Template Revision
:::note
This section assumes that you already have a cluster that [has an RKE template applied.](apply-templates.md)
This section also assumes that you have [updated the template that the cluster is using](#updating-a-template) so that a new template revision is available.
:::
To upgrade a cluster to use a new template revision,
1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster that you want to upgrade and click **⋮ > Edit Config**.
1. In the **Cluster Options** section, click the dropdown menu for the template revision, then select the new template revision.
1. Click **Save**.
**Result:** The cluster is upgraded to use the settings defined in the new template revision.
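If you manage the Rancher local cluster with `kubectl`, you can also check which template revision a cluster currently references. The following is a minimal sketch, assuming the v3 cluster object exposes the revision under `spec.clusterTemplateRevisionName` and that `c-abc123` is a placeholder cluster ID:

```
# Sketch: print the RKE template revision referenced by a cluster.
# "c-abc123" is a placeholder ID; the spec field is an assumption here.
kubectl get clusters.management.cattle.io c-abc123 \
  -o jsonpath='{.spec.clusterTemplateRevisionName}'
```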
### Exporting a Running Cluster to a New RKE Template and Revision
You can save an existing cluster's settings as an RKE template.
This exports the cluster's settings as a new RKE template, and also binds the cluster to that template. The result is that the cluster can only be changed if the [template is updated,](#updating-a-template) and the cluster is upgraded to [use a newer version of the template](#upgrading-a-cluster-to-use-a-new-template-revision).
To convert an existing cluster to use an RKE template,
1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster that will be converted to use an RKE template and click **⋮ > Save as RKE Template**.
1. Enter a name for the RKE template in the form that appears, and click **Create**.
**Results:**
- A new RKE template is created.
- The cluster is converted to use the new template.
- New clusters can be [created from the new template and revision.](apply-templates.md#creating-a-cluster-from-an-rke-template)
@@ -0,0 +1,18 @@
---
title: Overriding Template Settings
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/override-template-settings"/>
</head>
When a user creates an RKE template, each setting in the template has a switch in the Rancher UI that indicates if users can override the setting. This switch marks those settings as **Allow User Override**.
After a cluster is created with a template, end users can't update any of the settings defined in the template unless the template owner marked them as **Allow User Override**. However, if the template is [updated to a new revision](manage-rke1-templates.md) that changes the settings or allows end users to change them, the cluster can be upgraded to a new revision of the template and the changes in the new revision will be applied to the cluster.
When a parameter is set to **Allow User Override** on the RKE template, end users must fill out that field during cluster creation, and they can edit the setting at any time afterward (a configuration sketch follows the list below).
The **Allow User Override** model of the RKE template is useful for situations such as:
- Administrators know that some settings will need the flexibility to be frequently updated over time
- End users will need to enter their own access keys or secret keys, for example, cloud credentials or credentials for backup snapshots
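For illustration, the overridable settings of a template revision surface in its underlying configuration as `questions` entries. The snippet below is a hedged sketch of such an entry; the exact variable paths depend on the settings your template defines:

```
# Sketch: a "questions" entry marking a template setting as
# overridable (Allow User Override). Paths are illustrative.
questions:
  - variable: rancherKubernetesEngineConfig.kubernetesVersion
    default: v1.24.4-rancher1-1
    required: true
    type: string
```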
@@ -0,0 +1,145 @@
---
title: Configuring Authentication
weight: 10
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config"/>
</head>
One of the key features that Rancher adds to Kubernetes is centralized user authentication. This feature allows your users to use one set of credentials to authenticate with any of your Kubernetes clusters.
This centralized user authentication is accomplished using the Rancher authentication proxy, which is installed along with the rest of Rancher. This proxy authenticates your users and forwards their requests to your Kubernetes clusters using a service account.
:::warning
The account used to enable the external provider will be granted admin permissions. If you use a test account or non-admin account, that account will still be granted admin-level permissions. See [External Authentication Configuration and Principal Users](#external-authentication-configuration-and-principal-users) to understand why.
:::
## External vs. Local Authentication
The Rancher authentication proxy integrates with the following external authentication services.
| Auth Service |
| ------------------------------------------------------------------------------------------------ |
| [Microsoft Active Directory](configure-active-directory.md) |
| [GitHub](configure-github.md) |
| [Microsoft Azure AD](configure-azure-ad.md) |
| [FreeIPA](configure-freeipa.md) |
| [OpenLDAP](../configure-openldap/configure-openldap.md) |
| [Microsoft AD FS](../configure-microsoft-ad-federation-service-saml/configure-microsoft-ad-federation-service-saml.md) |
| [PingIdentity](configure-pingidentity.md) |
| [Keycloak (OIDC)](configure-keycloak-oidc.md) |
| [Keycloak (SAML)](configure-keycloak-saml.md) |
| [Okta](configure-okta-saml.md) |
| [Google OAuth](configure-google-oauth.md) |
| [Shibboleth](../configure-shibboleth-saml/configure-shibboleth-saml.md) |
However, Rancher also provides [local authentication](create-local-users.md).
In most cases, you should use an external authentication service over local authentication, as external authentication allows user management from a central location. However, you may want a few local authentication users for managing Rancher under rare circumstances, such as if your external authentication provider is unavailable or undergoing maintenance.
## Users and Groups
Rancher relies on users and groups to determine who is allowed to log in to Rancher and which resources they can access. When authenticating with an external provider, groups are provided from the external provider based on the user. These users and groups are given specific roles to resources like clusters, projects, and global DNS providers and entries. When you give access to a group, all users who are a member of that group in the authentication provider will be able to access the resource with the permissions that you've specified. For more information on roles and permissions, see [Role Based Access Control](../manage-role-based-access-control-rbac/manage-role-based-access-control-rbac.md).
:::note
Local authentication does not support creating or managing groups.
:::
For more information, see [Users and Groups](manage-users-and-groups.md).
## Scope of Rancher Authorization
After you configure Rancher to allow sign on using an external authentication service, you should configure who should be allowed to log in and use Rancher. The following options are available:
| Access Level | Description |
|----------------------------------------------|-------------|
| Allow any valid Users | _Any_ user in the authorization service can access Rancher. We generally discourage use of this setting! |
| Allow members of Clusters, Projects, plus Authorized Users and Organizations | Any user in the authorization service and any group added as a **Cluster Member** or **Project Member** can log in to Rancher. Additionally, any user in the authentication service or group you add to the **Authorized Users and Organizations** list may log in to Rancher. |
| Restrict access to only Authorized Users and Organizations | Only users in the authentication service or groups added to the Authorized Users and Organizations can log in to Rancher. |
To set the Rancher access level for users in the authorization service, follow these steps:
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Auth Provider**.
1. After setting up the configuration details for an auth provider, use the **Site Access** options to configure the scope of user authorization. The table above explains the access level for each option.
1. Optional: If you choose an option other than **Allow any valid Users,** you can add users to the list of authorized users and organizations by searching for them in the text field that appears.
1. Click **Save**.
**Result:** The Rancher access configuration settings are applied.
:::note SAML Provider Caveats:
- SAML Protocol does not support search or lookup for users or groups. Therefore, there is no validation on users or groups when adding them to Rancher.
- When adding users, the exact user IDs (i.e. `UID Field`) must be entered correctly. As you type the user ID, there will be no search for other user IDs that may match.
- When adding groups, you must select the group from the drop-down that is next to the text box. Rancher assumes that any input from the text box is a user.
- The group drop-down shows only the groups that you are a member of. You will not be able to add groups that you are not a member of.
:::
## External Authentication Configuration and Principal Users
Configuring external authentication requires:
- A local user assigned the administrator role, called hereafter the _local principal_.
- An external user that can authenticate with your external authentication service, called hereafter the _external principal_.
The configuration of external authentication also affects how principal users are managed within Rancher. Specifically, when a user account enables an external provider, it is granted admin-level permissions. This is because the local principal and external principal share the same user ID and access rights.
The following instructions demonstrate these effects:
1. Sign into Rancher as the local principal and complete configuration of external authentication.
![Sign In](/img/sign-in.png)
2. Rancher associates the external principal with the local principal. These two users share the local principal's user ID.
![Principal ID Sharing](/img/principal-ID.png)
3. After you complete configuration, Rancher automatically signs out the local principal.
![Sign Out Local Principal](/img/sign-out-local.png)
4. Then, Rancher automatically signs you back in as the external principal.
![Sign In External Principal](/img/sign-in-external.png)
5. Because the external principal and the local principal share an ID, no unique object for the external principal displays on the Users page.
![Users Page](/img/users-page.png)
6. The external principal and the local principal share the same access rights.
:::note Reconfiguring a previously set up auth provider
If you need to reconfigure or disable then re-enable a provider that had been previously set up, ensure that the user who attempts to do so is logged in to Rancher as an external user, not the local admin.
:::
## Disabling An Auth Provider
When you disable an auth provider, Rancher deletes all resources associated with it, such as:
- Secrets.
- Global role bindings.
- Cluster role template bindings.
- Project role template bindings.
- External users associated with the provider, but who never logged in as local users to Rancher.
As this operation may lead to a loss of many resources, you may want to add a safeguard on the provider. To ensure that this cleanup process doesn't run when the auth provider is disabled, add a special annotation to the corresponding auth config.
For example, to add a safeguard to the Azure AD provider, annotate the `azuread` authconfig object:
```
kubectl annotate --overwrite authconfig azuread management.cattle.io/auth-provider-cleanup='user-locked'
```
Rancher won't perform cleanup until you set the annotation to `unlocked`.
### Running Resource Cleanup Manually
Rancher might retain resources from a previously disabled auth provider configuration in the local cluster, even after you configure another auth provider. For example, if you used Provider A, then disabled it and started using Provider B, resources configured by Provider A may remain. After you upgrade to a new version of Rancher, you can manually trigger cleanup of those resources.
To manually trigger cleanup for a disabled auth provider, add the `management.cattle.io/auth-provider-cleanup` annotation with the `unlocked` value to its auth config.
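As a sketch, reusing the `azuread` example above, you can flip the annotation to `unlocked` to let the cleanup run:

```
# Set the cleanup annotation to "unlocked" so Rancher performs the
# cleanup for the disabled azuread provider.
kubectl annotate --overwrite authconfig azuread management.cattle.io/auth-provider-cleanup='unlocked'
```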
@@ -0,0 +1,223 @@
---
title: Configure Active Directory (AD)
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-active-directory"/>
</head>
If your organization uses Microsoft Active Directory as a central user repository, you can configure Rancher to communicate with an Active Directory server to authenticate users. This allows Rancher admins to control access to clusters and projects based on users and groups managed externally in the Active Directory, while allowing end users to authenticate with their AD credentials when logging in to the Rancher UI.
Rancher uses LDAP to communicate with the Active Directory server. The authentication flow for Active Directory is therefore the same as for the [OpenLDAP authentication](../configure-openldap/configure-openldap.md) integration.
:::note
Before you start, please familiarise yourself with the concepts of [External Authentication Configuration and Principal Users](authentication-config.md#external-authentication-configuration-and-principal-users).
:::
## Prerequisites
You'll need to create a new AD user (or obtain one from your AD administrator) to use as a service account for Rancher. This user must have sufficient permissions to perform LDAP searches and read attributes of users and groups under your AD domain.
Usually a (non-admin) **Domain User** account should be used for this purpose, as by default such a user has read-only privileges for most objects in the domain partition.
Note, however, that in some locked-down Active Directory configurations this default behaviour may not apply. In such cases you will need to ensure that the service account user has at least **Read** and **List Content** permissions granted either on the Base OU (enclosing users and groups) or globally for the domain.
:::note Using TLS?
- If the certificate used by the AD server is self-signed or not from a recognized certificate authority, make sure you have the CA certificate (concatenated with any intermediate certificates) in PEM format at hand. You will have to paste in this certificate during configuration so that Rancher is able to validate the certificate chain.
- Upon an upgrade to v2.6.0, authenticating via Rancher against an Active Directory server using TLS can fail if the certificates on the AD server do not support SAN attributes. This check is enabled by default in Go v1.15.
- The error received is: `Error creating SSL connection: LDAP Result Code 200 "Network Error": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0`.
- To resolve the error, update or replace the certificates on the AD server with new ones that support the SAN attribute. Alternatively, this error can be ignored by setting `GODEBUG=x509ignoreCN=0` as an environment variable on the Rancher server container (see the example after this note).
:::
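As an example of the workaround mentioned in the note above, a Helm-based Rancher install can pass the variable through the chart's `extraEnv` values. This is a minimal sketch, assuming a release named `rancher` in the `cattle-system` namespace installed from the `rancher-stable` repository:

```
# Sketch: set GODEBUG=x509ignoreCN=0 on the Rancher server container
# via the Helm chart's extraEnv values (release name and repo assumed).
helm upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --reuse-values \
  --set 'extraEnv[0].name=GODEBUG' \
  --set 'extraEnv[0].value=x509ignoreCN=0'
```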
## Configuration Steps
### Open Active Directory Configuration
1. Log into the Rancher UI using the initial local `admin` account.
1. In the top left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, click **Auth Provider**.
1. Click **ActiveDirectory**. The **Authentication Provider: ActiveDirectory** form will be displayed.
1. Fill out the form. For help, refer to the details on configuration options below.
1. Click **Enable**.
### Configure Active Directory Server Settings
In the section titled `1. Configure an Active Directory server`, complete the fields with the information specific to your Active Directory server. Please refer to the following table for detailed information on the required values for each parameter.
:::note
If you are unsure about the correct values to enter in the user/group Search Base field, please refer to [Identify Search Base and Schema using ldapsearch](#annex-identify-search-base-and-schema-using-ldapsearch).
:::
**Table 1: AD Server parameters**
| Parameter | Description |
|:--|:--|
| Hostname | Specify the hostname or IP address of the AD server |
| Port | Specify the port at which the Active Directory server is listening for connections. Unencrypted LDAP normally uses the standard port of 389, while LDAPS uses port 636.|
| TLS | Check this box to enable LDAP over SSL/TLS (commonly known as LDAPS).|
| Server Connection Timeout | The duration, in seconds, that Rancher waits before considering the AD server unreachable. |
| Service Account Username | Enter the username of an AD account with read-only access to your domain partition (see [Prerequisites](#prerequisites)). The username can be entered in NetBIOS format (e.g. "DOMAIN\serviceaccount") or UPN format (e.g. "serviceaccount@domain.com"). |
| Service Account Password | The password for the service account. |
| Default Login Domain | When you configure this field with the NetBIOS name of your AD domain, usernames entered without a domain (e.g. "jdoe") will automatically be converted to a slashed, NetBIOS logon (e.g. "LOGIN_DOMAIN\jdoe") when binding to the AD server. If your users authenticate with the UPN (e.g. "jdoe@acme.com") as username then this field **must** be left empty. |
| User Search Base | The Distinguished Name of the node in your directory tree from which to start searching for user objects. All users must be descendants of this base DN. For example: "ou=people,dc=acme,dc=com".|
| Group Search Base | If your groups live under a different node than the one configured under `User Search Base` you will need to provide the Distinguished Name here. Otherwise leave it empty. For example: "ou=groups,dc=acme,dc=com".|
---
### Configure User/Group Schema
In the section titled `2. Customize Schema` you must provide Rancher with a correct mapping of user and group attributes corresponding to the schema used in your directory.
Rancher uses LDAP queries to search for and retrieve information about users and groups within the Active Directory. The attribute mappings configured in this section are used to construct search filters and resolve group membership. It is therefore paramount that the provided settings reflect the reality of your AD domain.
:::note
If you are unfamiliar with the schema used in your Active Directory domain, please refer to [Identify Search Base and Schema using ldapsearch](#annex-identify-search-base-and-schema-using-ldapsearch) to determine the correct configuration values.
:::
#### User Schema
The table below details the parameters for the user schema section configuration.
**Table 2: User schema configuration parameters**
| Parameter | Description |
|:--|:--|
| Object Class | The name of the object class used for user objects in your domain. If defined, only specify the name of the object class - *don't* include it in an LDAP wrapper such as &(objectClass=xxxx) |
| Username Attribute | The user attribute whose value is suitable as a display name. |
| Login Attribute | The attribute whose value matches the username part of credentials entered by your users when logging in to Rancher. If your users authenticate with their UPN (e.g. "jdoe@acme.com") as username then this field must normally be set to `userPrincipalName`. Otherwise for the old, NetBIOS-style logon names (e.g. "jdoe") it's usually `sAMAccountName`. |
| User Member Attribute | The attribute containing the groups that a user is a member of. |
| Search Attribute | When a user enters text to add users or groups in the UI, Rancher queries the AD server and attempts to match users by the attributes provided in this setting. Multiple attributes can be specified by separating them with the pipe ("\|") symbol. To match UPN usernames (e.g. jdoe@acme.com) you should usually set the value of this field to `userPrincipalName`. |
| Search Filter | This filter gets applied to the list of users that is searched when Rancher attempts to add users to a site access list or tries to add members to clusters or projects. For example, a user search filter could be <code>(&#124;(memberOf=CN=group1,CN=Users,DC=testad,DC=rancher,DC=io)(memberOf=CN=group2,CN=Users,DC=testad,DC=rancher,DC=io))</code>. Note: If the search filter does not use [valid AD search syntax,](https://docs.microsoft.com/en-us/windows/win32/adsi/search-filter-syntax) the list of users will be empty. |
| User Enabled Attribute | The attribute containing an integer value representing a bitwise enumeration of user account flags. Rancher uses this to determine if a user account is disabled. You should normally leave this set to the AD standard `userAccountControl`. |
| Disabled Status Bitmask | This is the value of the `User Enabled Attribute` designating a disabled user account. You should normally leave this set to the default value of "2" as specified in the Microsoft Active Directory schema (see [here](https://docs.microsoft.com/en-us/windows/desktop/adschema/a-useraccountcontrol#remarks)). |
---
#### Group Schema
The table below details the parameters for the group schema configuration.
**Table 3: Group schema configuration parameters**
| Parameter | Description |
|:--|:--|
| Object Class | The name of the object class used for group objects in your domain. If defined, only specify the name of the object class - *don't* include it in an LDAP wrapper such as &(objectClass=xxxx) |
| Name Attribute | The group attribute whose value is suitable for a display name. |
| Group Member User Attribute | The name of the **user attribute** whose format matches the group members in the `Group Member Mapping Attribute`. |
| Group Member Mapping Attribute | The name of the group attribute containing the members of a group. |
| Search Attribute | Attribute used to construct search filters when adding groups to clusters or projects. See description of user schema `Search Attribute`. |
| Search Filter | This filter gets applied to the list of groups that is searched when Rancher attempts to add groups to a site access list or tries to add groups to clusters or projects. For example, a group search filter could be <code>(&#124;(cn=group1)(cn=group2))</code>. Note: If the search filter does not use [valid AD search syntax,](https://docs.microsoft.com/en-us/windows/win32/adsi/search-filter-syntax) the list of groups will be empty. |
| Group DN Attribute | The name of the group attribute whose format matches the values in the user attribute describing the user's memberships. See `User Member Attribute`. |
| Nested Group Membership | This setting defines whether Rancher should resolve nested group memberships. Use it only if your organization makes use of nested memberships (i.e., you have groups that contain other groups as members). We advise avoiding nested groups when possible, as resolving a large number of nested memberships can cause performance issues. |
---
### Test Authentication
Once you have completed the configuration, proceed by testing the connection to the AD server **using your AD admin account**. If the test is successful, authentication with the configured Active Directory is implicitly enabled, with the account you test with set as admin.
:::note
The AD user pertaining to the credentials entered in this step will be mapped to the local principal account and assigned administrator privileges in Rancher. You should therefore make a conscious decision on which AD account you use to perform this step.
:::
1. Enter the **username** and **password** for the AD account that should be mapped to the local principal account.
2. Click **Authenticate with Active Directory** to finalise the setup.
**Result:**
- Active Directory authentication has been enabled.
- You have been signed into Rancher as administrator using the provided AD credentials.
:::note
You will still be able to login using the locally configured `admin` account and password in case of a disruption of LDAP services.
:::
## Annex: Identify Search Base and Schema using ldapsearch
In order to successfully configure AD authentication it is crucial that you provide the correct configuration pertaining to the hierarchy and schema of your AD server.
The [`ldapsearch`](https://manpages.ubuntu.com/manpages/kinetic/en/man1/ldapsearch.1.html) tool allows you to query your AD server to learn about the schema used for user and group objects.
For the purpose of the example commands provided below we will assume:
- The Active Directory server has a hostname of `ad.acme.com`
- The server is listening for unencrypted connections on port `389`
- The Active Directory domain is `acme`
- You have a valid AD account with the username `jdoe` and password `secret`
### Identify Search Base
First, we will use `ldapsearch` to identify the Distinguished Name (DN) of the parent node(s) for users and groups:
```
$ ldapsearch -x -D "acme\jdoe" -w "secret" -p 389 \
-h ad.acme.com -b "dc=acme,dc=com" -s sub "sAMAccountName=jdoe"
```
This command performs an LDAP search with the search base set to the domain root (`-b "dc=acme,dc=com"`) and a filter targeting the user account (`sAMAccountName=jdoe`), returning the attributes for said user:
![](/img/ldapsearch-user.png)
Since in this case the user's DN is `CN=John Doe,CN=Users,DC=acme,DC=com` [5], we should configure the **User Search Base** with the parent node DN `CN=Users,DC=acme,DC=com`.
Similarly, based on the DN of the group referenced in the **memberOf** attribute [4], the correct value for the **Group Search Base** would be the parent node of that value, i.e., `OU=Groups,DC=acme,DC=com`.
### Identify User Schema
The output of the above `ldapsearch` query also allows us to determine the correct values to use in the user schema configuration:
- `Object Class`: **person** [1]
- `Username Attribute`: **name** [2]
- `Login Attribute`: **sAMAccountName** [3]
- `User Member Attribute`: **memberOf** [4]
:::note
If the AD users in our organization were to authenticate with their UPN (e.g. jdoe@acme.com) instead of the short logon name, then we would have to set the `Login Attribute` to **userPrincipalName** instead.
:::
We'll also set the `Search Attribute` parameter to **sAMAccountName|name**. That way users can be added to clusters/projects in the Rancher UI either by entering their username or full name.
### Identify Group Schema
Next, we'll query one of the groups associated with this user, in this case `CN=examplegroup,OU=Groups,DC=acme,DC=com`:
```
$ ldapsearch -x -D "acme\jdoe" -w "secret" -p 389 \
-h ad.acme.com -b "ou=groups,dc=acme,dc=com" \
-s sub "CN=examplegroup"
```
This command will show us the attributes used for group objects:
![](/img/ldapsearch-group.png)
Again, this allows us to determine the correct values to enter in the group schema configuration:
- `Object Class`: **group** [1]
- `Name Attribute`: **name** [2]
- `Group Member Mapping Attribute`: **member** [3]
- `Search Attribute`: **sAMAccountName** [4]
Looking at the value of the **member** attribute, we can see that it contains the DN of the referenced user. This corresponds to the **distinguishedName** attribute in our user object. Accordingly, we will have to set the value of the `Group Member User Attribute` parameter to this attribute.
In the same way, we can observe that the value in the **memberOf** attribute in the user object corresponds to the **distinguishedName** [5] of the group. We therefore need to set the value for the `Group DN Attribute` parameter to this attribute.
## Annex: Troubleshooting
If you are experiencing issues while testing the connection to the Active Directory server, first double-check the credentials entered for the service account as well as the search base configuration. You may also inspect the Rancher logs to help pinpoint the cause of the problem. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging](../../../../faq/technical-items.md#how-can-i-enable-debug-logging) in this documentation.
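For reference, a hedged sketch of temporarily enabling debug logging on a Helm-installed Rancher, assuming the pods carry the standard `app=rancher` label:

```
# Sketch: raise the log level to debug on a running Rancher pod.
kubectl -n cattle-system exec \
  $(kubectl -n cattle-system get pods -l app=rancher -o name | head -n 1) \
  -- loglevel --set debug
```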
@@ -0,0 +1,331 @@
---
title: Configure Azure AD
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad"/>
</head>
## Microsoft Graph API
Microsoft Graph API is now the flow through which you will set up Azure AD. The below sections will assist [new users](#new-user-setup) in configuring Azure AD with a new instance as well as assist existing Azure app owners in [migrating to the new flow](#migrating-from-azure-ad-graph-api-to-microsoft-graph-api).
The Microsoft Graph API flow in Rancher is constantly evolving. We recommend that you use the latest patched version of Rancher v2.7, as it is still in active development and will continue to receive new features and improvements.
### New User Setup
If you have an instance of Active Directory (AD) hosted in Azure, you can configure Rancher to allow your users to log in using their AD accounts. Configuration of Azure AD external authentication requires you to make configurations in both Azure and Rancher.
:::note Notes
- Azure AD integration only supports Service Provider initiated logins.
- Most of this procedure takes place from the [Microsoft Azure Portal](https://portal.azure.com/).
:::
#### Azure Active Directory Configuration Outline
Configuring Rancher to allow your users to authenticate with their Azure AD accounts involves multiple procedures. Review the outline below before getting started.
:::tip
Before you start, open two browser tabs: one for Rancher, and one for the Azure portal. This will help with copying and pasting configuration values from the portal to Rancher.
:::
#### 1. Register Rancher with Azure
Before enabling Azure AD within Rancher, you must register Rancher with Azure.
1. Log in to [Microsoft Azure](https://portal.azure.com/) as an administrative user. Configuration in future steps requires administrative access rights.
1. Use search to open the **App registrations** service.
1. Click **New registration** and complete the form.
![New App Registration](/img/new-app-registration.png)
1. Enter a **Name** (something like `Rancher`).
1. From **Supported account types**, select "Accounts in this organizational directory only (AzureADTest only - Single tenant)". This corresponds to the legacy app registration options.
:::note
In the updated Azure portal, Redirect URIs are synonymous with Reply URLs. In order to use Azure AD with Rancher, you must whitelist Rancher with Azure (previously done through Reply URLs). Therefore, you must fill in the Redirect URI with your Rancher server URL, including the verification path listed below.
:::
1. In the [**Redirect URI**](https://docs.microsoft.com/en-us/azure/active-directory/develop/reply-url) section, make sure **Web** is selected from the dropdown and enter the URL of your Rancher Server in the text box next to the dropdown. This Rancher server URL should be appended with the verification path: `<MY_RANCHER_URL>/verify-auth-azure`.
:::tip
You can find your personalized Azure Redirect URI (reply URL) in Rancher on the Azure AD Authentication page (Global View > Authentication > Web).
:::
1. Click **Register**.
:::note
It can take up to five minutes for this change to take effect, so don't be alarmed if you can't authenticate immediately after Azure AD configuration.
:::
#### 2. Create a new client secret
From the Azure portal, create a client secret. Rancher will use this key to authenticate with Azure AD.
1. Use search to open **App registrations** services. Then open the entry for Rancher that you created in the last procedure.
![Open Rancher Registration](/img/open-rancher-app-reg.png)
1. From the navigation pane, click **Certificates & secrets**.
1. Click **New client secret**.
![Create new client secret](/img/new-client-secret.png)
1. Enter a **Description** (something like `Rancher`).
1. Select the duration from the options under **Expires**. This drop-down menu sets the expiration date for the key. Shorter durations are more secure, but require you to create a new key more frequently.
Note that users won't be able to log in to Rancher if it detects that the application secret has expired. To avoid this problem, rotate the secret in Azure and update it in Rancher before it expires.
1. Click **Add** (you don't need to enter a value—it will automatically populate after you save).
<a id="secret"></a>
1. You'll enter this key into the Rancher UI later as your **Application Secret**. Since you won't be able to access the key value again within the Azure UI, keep this window open for the rest of the setup process.
#### 3. Set Required Permissions for Rancher
Next, set API permissions for Rancher within Azure.
:::caution
Ensure that you set Application permissions, and *not* Delegated permissions. Otherwise, you won't be able to login to Azure AD.
:::
1. From the navigation pane, select **API permissions**.
1. Click **Add a permission**.
1. From the Microsoft Graph API, select the following **Application Permissions**: `Directory.Read.All`
![Select API Permissions](/img/api-permissions.png)
1. Return to **API permissions** in the nav bar. From there, click **Grant admin consent**. Then click **Yes**. The app's permissions should look like the following:
![Open Required Permissions](/img/select-req-permissions.png)
:::note
Rancher doesn't validate the permissions you grant to the app in Azure. You're free to try any permissions you want, as long as they allow Rancher to work with AD users and groups.
Specifically, Rancher needs permissions that allow the following actions:
- Get a user.
- List all users.
- List groups of which a given user is a member.
- Get a group.
- List all groups.
Rancher performs these actions either to log in a user or to run a user/group search. Keep in mind that the permissions must be of type `Application`.
Here are a few examples of permission combinations that satisfy Rancher's needs:
- `Directory.Read.All`
- `User.Read.All` and `GroupMember.Read.All`
- `User.Read.All` and `Group.Read.All`
:::
#### 4. Copy Azure Application Data
![Application ID](/img/app-configuration.png)
1. Obtain your Rancher **Tenant ID**.
1. Use search to open **App registrations**.
1. Find the entry you created for Rancher.
1. Copy the **Directory ID** and paste it into Rancher as your **Tenant ID**.
1. Obtain your Rancher **Application (Client) ID**.
1. If you aren't already there, use search to open **App registrations**.
1. In **Overview**, find the entry you created for Rancher.
1. Copy the **Application (Client) ID** and paste it into Rancher as your **Application ID**.
1. In most cases, your endpoint options will either be [Standard](#global) or [China](#china). For either of these options, you only need to enter the **Tenant ID**, **Application ID**, and **Application Secret**.
![Standard Endpoint Options](/img/tenant-application-id-secret.png)
**For Custom Endpoints:**
:::caution
Custom Endpoints are not tested or fully supported by Rancher.
:::
You'll also need to manually enter the Graph, Token, and Auth Endpoints.
- From <b>App registrations</b>, click <b>Endpoints</b>:
![Click Endpoints](/img/endpoints.png)
- The following endpoints will be your Rancher endpoint values. Make sure to use the v1 version of these endpoints:
- **Microsoft Graph API endpoint** (Graph Endpoint)
- **OAuth 2.0 token endpoint (v1)** (Token Endpoint)
- **OAuth 2.0 authorization endpoint (v1)** (Auth Endpoint)
#### 5. Configure Azure AD in Rancher
To complete configuration, enter information about your AD instance in the Rancher UI.
1. Log into Rancher.
1. In the top left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, click **Auth Provider**.
1. Click **AzureAD**.
1. Complete the **Configure Azure AD Account** form using the information you copied while completing [Copy Azure Application Data](#4-copy-azure-application-data).
:::caution
The Azure AD account will be granted administrator privileges, since its details will be mapped to the Rancher local principal account. Make sure that this level of privilege is appropriate before you continue.
:::
**For Standard or China Endpoints:**
The following table maps the values you copied in the Azure portal to the fields in Rancher:
| Rancher Field | Azure Value |
| ------------------ | ------------------------------------- |
| Tenant ID | Directory ID |
| Application ID | Application ID |
| Application Secret | Key Value |
| Endpoint | https://login.microsoftonline.com/ |
**For Custom Endpoints:**
The following table maps your custom config values to Rancher fields:
| Rancher Field | Azure Value |
| ------------------ | ------------------------------------- |
| Graph Endpoint | Microsoft Graph API Endpoint |
| Token Endpoint | OAuth 2.0 Token Endpoint |
| Auth Endpoint | OAuth 2.0 Authorization Endpoint |
**Important:** When entering the Graph Endpoint in a custom config, remove the tenant ID from the URL:
<code>http<span>s://g</span>raph.microsoft.com<del>/abb5adde-bee8-4821-8b03-e63efdc7701c</del></code>
1. Click **Enable**.
**Result:** Azure Active Directory authentication is configured.
#### (Optional) Configure Authentication with Multiple Rancher Domains
If you have multiple Rancher domains, it's not possible to configure multiple redirect URIs through the Rancher UI. The Azure AD configuration file, `azuread`, only allows one redirect URI by default. You must manually edit `azuread` to set the redirect URI as needed for any other domains. If you don't manually edit `azuread`, then upon a successful login attempt to any domain, Rancher automatically redirects the user to the **Redirect URI** value you set when you registered the app in [Step 1. Register Rancher with Azure](#1-register-rancher-with-azure).
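A minimal sketch of opening the config for manual editing, assuming `kubectl` access to the Rancher local cluster (the exact field to change depends on your setup):

```
# Sketch: open the azuread auth config to edit its redirect URI.
kubectl edit authconfig azuread
```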
### Migrating from Azure AD Graph API to Microsoft Graph API
Since the [Azure AD Graph API](https://docs.microsoft.com/en-us/graph/migrate-azure-ad-graph-overview) is deprecated and slated to retire in June 2023, admins should update their Azure AD App to use the [Microsoft Graph API](https://docs.microsoft.com/en-us/graph/use-the-api) in Rancher.
This needs to be done well in advance of the endpoint being retired.
If Rancher is still configured to use the Azure AD Graph API when it is retired, users may not be able to log into Rancher using Azure AD.
#### Updating Endpoints in the Rancher UI
:::caution
Admins should create a [Rancher backup](../../../new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher.md) before they commit to the endpoint migration described below.
:::
1. [Update](#3-set-required-permissions-for-rancher) the permissions of your Azure AD app registration. This is critical.
1. Log into Rancher.
1. In the Rancher UI homepage, make note of the banner at the top of the screen that advises users to update their Azure AD authentication. Click on the link provided to do so.
![Rancher UI Banner](/img/rancher-ui-azure-update.png)
1. To complete the move to the new Microsoft Graph API, click **Update Endpoint**.
**Note:** Ensure that your Azure app has a [new set of permissions](#3-set-required-permissions-for-rancher) before starting the update.
![Update Endpoint](/img/rancher-button-to-update.png)
1. When you receive the pop-up warning message, click **Update**.
![Azure Update Pop-up](/img/azure-update-popup.png)
1. Refer to the [tables](#global) below for the full list of endpoint changes that Rancher performs. Admins do not need to do this manually.
#### Air-Gapped Environments
In air-gapped environments, admins should ensure that their endpoints are whitelisted (see note on [Step 3.2 of Register Rancher with Azure](#1-register-rancher-with-azure)) since the Graph Endpoint URL is changing.
#### Rolling Back the Migration
If you need to roll back your migration, please note the following:
1. Admins are encouraged to use the proper restore process if they want to go back. Please see [backup docs](../../../new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher.md), [restore docs](../../../new-user-guides/backup-restore-and-disaster-recovery/restore-rancher.md), and [examples](../../../../reference-guides/backup-restore-configuration/examples.md) for reference.
1. Azure app owners who want to rotate the Application Secret will also need to rotate it in Rancher, as Rancher does not automatically update the Application Secret when it is changed in Azure. Note that Rancher stores it in a Kubernetes secret called `azureadconfig-applicationsecret` in the `cattle-global-data` namespace.
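For example, a minimal sketch of inspecting the stored secret, assuming `kubectl` access to the Rancher local cluster:

```
# Sketch: view the Kubernetes secret where Rancher stores the Azure
# Application Secret (the value is base64-encoded).
kubectl get secret azureadconfig-applicationsecret \
  -n cattle-global-data -o yaml
```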
:::caution
If you upgrade to Rancher v2.7.0+ with an existing Azure AD setup, and choose to disable the auth provider, you won't be able to restore the previous setup. You also won't be able to set up Azure AD using the old flow. You'll need to re-register with the new auth flow. Since Rancher now uses the Graph API, users need to set up the [proper permissions in the Azure portal](#3-set-required-permissions-for-rancher).
:::
#### Global:
Rancher Field | Deprecated Endpoints
---------------- | -------------------------------------------------------------
Auth Endpoint | https://login.microsoftonline.com/{tenantID}/oauth2/authorize
Endpoint | https://login.microsoftonline.com/
Graph Endpoint | https://graph.windows.net/
Token Endpoint | https://login.microsoftonline.com/{tenantID}/oauth2/token
Rancher Field | New Endpoints
---------------- | ------------------------------------------------------------------
Auth Endpoint | https://login.microsoftonline.com/{tenantID}/oauth2/v2.0/authorize
Endpoint | https://login.microsoftonline.com/
Graph Endpoint | https://graph.microsoft.com
Token Endpoint | https://login.microsoftonline.com/{tenantID}/oauth2/v2.0/token
#### China:
Rancher Field | Deprecated Endpoints
---------------- | ----------------------------------------------------------
Auth Endpoint | https://login.chinacloudapi.cn/{tenantID}/oauth2/authorize
Endpoint | https://login.chinacloudapi.cn/
Graph Endpoint | https://graph.chinacloudapi.cn/
Token Endpoint | https://login.chinacloudapi.cn/{tenantID}/oauth2/token
Rancher Field | New Endpoints
---------------- | -------------------------------------------------------------------------
Auth Endpoint | https://login.partner.microsoftonline.cn/{tenantID}/oauth2/v2.0/authorize
Endpoint | https://login.partner.microsoftonline.cn/
Graph Endpoint | https://microsoftgraph.chinacloudapi.cn
Token Endpoint | https://login.partner.microsoftonline.cn/{tenantID}/oauth2/v2.0/token
## Deprecated Azure AD Graph API
>**Important:**
>
>- The [Azure AD Graph API](https://docs.microsoft.com/en-us/graph/migrate-azure-ad-graph-overview) is deprecated and will be retired by Microsoft at any time after June 30, 2023, without advance notice. We will update our docs to advise the community when it is retired. Rancher now uses the [Microsoft Graph API](https://docs.microsoft.com/en-us/graph/use-the-api) as the new flow to set up Azure AD as the external auth provider.
>
>
>- If you're a new user, or wish to migrate, refer to the new flow instructions for <a href="#microsoft-graph-api" target="_blank">Rancher v2.7.0+</a>.
>
>
>- If you don't wish to upgrade to v2.7.0+ after the Azure AD Graph API is retired, you'll need to either:
>  - Use the built-in Rancher auth, or
>  - Use another third-party auth system and set that up in Rancher. Please see the [authentication docs](authentication-config.md) to learn how to configure other open authentication providers.
@@ -0,0 +1,64 @@
---
title: Configure FreeIPA
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-freeipa"/>
</head>
If your organization uses FreeIPA for user authentication, you can configure Rancher to allow your users to log in using their FreeIPA credentials.
:::note Prerequisites:
- You must have a [FreeIPA Server](https://www.freeipa.org/) configured.
- Create a service account in FreeIPA with `read-only` access. Rancher uses this account to verify group membership when a user makes a request using an API key.
- Read [External Authentication Configuration and Principal Users](authentication-config.md#external-authentication-configuration-and-principal-users).
:::
1. Sign into Rancher using a local user assigned the `administrator` role (i.e., the _local principal_).
1. In the top left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, click **Auth Provider**.
1. Click **FreeIPA**.
1. Complete the **Configure a FreeIPA server** form.
You may need to log in to your domain controller to find the information requested in the form.
:::note Using TLS?
If the certificate is self-signed or not from a recognized certificate authority, make sure you provide the complete chain. That chain is needed to verify the server's certificate.
:::
:::note User Search Base vs. Group Search Base
The search base allows Rancher to search for users and groups in your FreeIPA directory. These fields are only for search bases and not for search filters.
* If your users and groups are in the same search base, complete only the User Search Base.
* If your groups are in a different search base, you can optionally complete the Group Search Base. This field is dedicated to searching groups, but is not required.
:::
1. If your FreeIPA deviates from the standard AD schema, complete the **Customize Schema** form to match it. Otherwise, skip this step.
:::note Search Attribute
The Search Attribute field defaults to three specific values: `uid|sn|givenName`. After FreeIPA is configured, when a user enters text to add users or groups, Rancher automatically queries the FreeIPA server and attempts to match fields by user ID, last name, or first name. Rancher specifically searches for users/groups that begin with the text entered in the search field.
The default field value is `uid|sn|givenName`, but you can configure this field to use a subset of these fields. The pipe (`|`) separates the fields.
* `uid`: User ID
* `sn`: Last Name
* `givenName`: First Name
With this search attribute, Rancher creates search filters for users and groups, but you *cannot* add your own search filters in this field.
:::
1. Enter your FreeIPA username and password in **Authenticate with FreeIPA** to confirm that Rancher is configured to use FreeIPA authentication.
1. Click **Enable**.
**Result:**
- FreeIPA authentication is configured.
- You are signed into Rancher with your FreeIPA account (i.e., the _external principal_).
@@ -0,0 +1,60 @@
---
title: Configure GitHub
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-github"/>
</head>
In environments using GitHub, you can configure Rancher to allow sign on using GitHub credentials.
:::note Prerequisites:
Read [External Authentication Configuration and Principal Users](authentication-config.md#external-authentication-configuration-and-principal-users).
:::
1. Sign into Rancher using a local user assigned the `administrator` role (i.e., the _local principal_).
1. In the top left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, click **Auth Provider**.
1. Click **GitHub**.
1. Follow the directions displayed to set up a GitHub Application. Rancher redirects you to GitHub to complete registration.
:::note What's an Authorization Callback URL?
The Authorization Callback URL is the URL where users go to begin using your application (i.e. the splash screen).
When you use external authentication, authentication does not actually take place in your application. Instead, authentication takes place externally (in this case, GitHub). After this external authentication completes successfully, the Authorization Callback URL is the location where the user re-enters your application.
:::
1. From GitHub, copy the **Client ID** and **Client Secret**. Paste them into Rancher.
:::note Where do I find the Client ID and Client Secret?
From GitHub, select Settings > Developer Settings > OAuth Apps. The Client ID and Client Secret are displayed prominently.
:::
1. Click **Authenticate with GitHub**.
1. Use the **Site Access** options to configure the scope of user authorization.
- **Allow any valid Users**
_Any_ GitHub user can access Rancher. We generally discourage use of this setting!
- **Allow members of Clusters, Projects, plus Authorized Users and Organizations**
Any GitHub user or group added as a **Cluster Member** or **Project Member** can log in to Rancher. Additionally, any GitHub user or group you add to the **Authorized Users and Organizations** list may log in to Rancher.
- **Restrict access to only Authorized Users and Organizations**
Only GitHub users or groups added to the Authorized Users and Organizations can log in to Rancher.
<br/>
1. Click **Enable**.
**Result:**
- GitHub authentication is configured.
- You are signed into Rancher with your GitHub account (i.e., the _external principal_).
@@ -0,0 +1,115 @@
---
title: Configure Google OAuth
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-google-oauth"/>
</head>
If your organization uses G Suite for user authentication, you can configure Rancher to allow your users to log in using their G Suite credentials.
Only admins of the G Suite domain have access to the Admin SDK. Therefore, only G Suite admins can configure Google OAuth for Rancher.
Within Rancher, only administrators or users with the **Manage Authentication** [global role](../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md) can configure authentication.
## Prerequisites
- You must have a [G Suite admin account](https://admin.google.com) configured.
- G Suite requires a [top private domain FQDN](https://github.com/google/guava/wiki/InternetDomainNameExplained#public-suffixes-and-private-domains) as an authorized domain. One way to get an FQDN is by creating an A-record in Route53 for your Rancher server. You do not need to update your Rancher Server URL setting with that record, because there could be clusters using that URL.
- You must have the Admin SDK API enabled for your G Suite domain. You can enable it using the steps on [this page.](https://support.google.com/a/answer/60757?hl=en)
After the Admin SDK API is enabled, your G Suite domain's API screen should look like this:
![Enable Admin APIs](/img/Google-Enable-APIs-Screen.png)
## Setting up G Suite for OAuth with Rancher
Before you can set up Google OAuth in Rancher, you need to log in to your G Suite account and do the following:
1. [Add Rancher as an authorized domain in G Suite](#1-adding-rancher-as-an-authorized-domain)
1. [Generate OAuth2 credentials for the Rancher server](#2-creating-oauth2-credentials-for-the-rancher-server)
1. [Create service account credentials for the Rancher server](#3-creating-service-account-credentials)
1. [Register the service account key as an OAuth Client](#4-register-the-service-account-key-as-an-oauth-client)
### 1. Adding Rancher as an Authorized Domain
1. Click [here](https://console.developers.google.com/apis/credentials) to go to credentials page of your Google domain.
1. Select your project and click **OAuth consent screen**.
![OAuth Consent Screen](/img/Google-OAuth-consent-screen-tab.png)
1. Go to **Authorized Domains** and enter the top private domain of your Rancher server URL in the list. The top private domain is the rightmost superdomain. So, for example, www.foo.co.uk has a top private domain of foo.co.uk. For more information on top-level domains, refer to [this article.](https://github.com/google/guava/wiki/InternetDomainNameExplained#public-suffixes-and-private-domains)
1. Go to **Scopes for Google APIs** and make sure **email,** **profile** and **openid** are enabled.
**Result:** Rancher has been added as an authorized domain for the Admin SDK API.
### 2. Creating OAuth2 Credentials for the Rancher Server
1. Go to the Google API console, select your project, and go to the [credentials page.](https://console.developers.google.com/apis/credentials)
![Credentials](/img/Google-Credentials-tab.png)
1. On the **Create Credentials** dropdown, select **OAuth client ID**.
1. Click **Web application**.
1. Provide a name.
1. Fill out the **Authorized JavaScript origins** and **Authorized redirect URIs**. Note: The Rancher UI page for setting up Google OAuth (available from the Global view under **Security > Authentication > Google**) provides you the exact links to enter for this step.
- Under **Authorized JavaScript origins,** enter your Rancher server URL.
- Under **Authorized redirect URIs,** enter your Rancher server URL appended with the path `verify-auth`. For example, if your URI is `https://rancherServer`, you will enter `https://rancherServer/verify-auth`.
1. Click on **Create**.
1. After the credential is created, you will see a screen with a list of your credentials. Choose the credential you just created, and on the rightmost side of that row, click **Download JSON**. Save the file so that you can provide these credentials to Rancher.
**Result:** Your OAuth credentials have been successfully created.
### 3. Creating Service Account Credentials
Since the Google Admin SDK is available only to admins, regular users cannot use it to retrieve profiles of other users or their groups. Regular users cannot even retrieve their own groups.
Since Rancher provides group-based membership access, we require the users to be able to get their own groups, and look up other users and groups when needed.
As a workaround to get this capability, G Suite recommends creating a service account and delegating authority of your G Suite domain to that service account.
This section describes how to:
- Create a service account
- Create a key for the service account and download the credentials as JSON
1. Click [here](https://console.developers.google.com/iam-admin/serviceaccounts) and select your project for which you generated OAuth credentials.
1. Click on **Create Service Account**.
1. Enter a name and click **Create**.
![Service account creation Step 1](/img/Google-svc-acc-step1.png)
1. Don't provide any roles on the **Service account permissions** page and click **Continue**.
![Service account creation Step 2](/img/Google-svc-acc-step2.png)
1. Click on **Create Key** and select the JSON option. Download the JSON file and save it so that you can provide it as the service account credentials to Rancher.
![Service account creation Step 3](/img/Google-svc-acc-step3-key-creation.png)
**Result:** Your service account is created.
### 4. Register the Service Account Key as an OAuth Client
You will need to grant some permissions to the service account you created in the last step. Rancher requires you to grant only read-only permissions for users and groups.
Using the Unique ID of the service account key, register it as an OAuth client using the following steps:
1. Get the Unique ID of the key you just created. If it's not displayed in the list of keys right next to the one you created, you will have to enable it. To enable it, click **Unique ID** and click **OK**. This will add a **Unique ID** column to the list of service account keys. Save the one listed for the service account you created. NOTE: This is a numeric key, not to be confused with the alphanumeric field **Key ID**.
![Service account Unique ID](/img/Google-Select-UniqueID-column.png)
1. Go to the [**Domain-wide Delegation** page.](https://admin.google.com/ac/owl/domainwidedelegation)
1. Add the Unique ID obtained in the previous step in the **Client Name** field.
1. In the **One or More API Scopes** field, add the following scopes:
```
openid,profile,email,https://www.googleapis.com/auth/admin.directory.user.readonly,https://www.googleapis.com/auth/admin.directory.group.readonly
```
1. Click **Authorize**.
**Result:** The service account is registered as an OAuth client in your G Suite account.
## Configuring Google OAuth in Rancher
1. Sign into Rancher using a local user assigned the [administrator](../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md) role. This user is also called the local principal.
1. In the top left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, click **Auth Provider**.
1. Click **Google**. The instructions in the UI cover the steps to set up authentication with Google OAuth.
1. Admin Email: Provide the email of an administrator account from your G Suite setup. To perform user and group lookups, the Google APIs require an administrator's email in conjunction with the service account key.
1. Domain: Provide the domain on which you have configured G Suite. Provide the exact domain, not an alias.
1. Nested Group Membership: Check this box to enable nested group memberships. Rancher admins can disable this at any time after configuring authentication.
- **Step One** is about adding Rancher as an authorized domain, which we already covered in [this section.](#1-adding-rancher-as-an-authorized-domain)
- For **Step Two,** provide the OAuth credentials JSON that you downloaded after completing [this section.](#2-creating-oauth2-credentials-for-the-rancher-server) You can upload the file or paste the contents into the **OAuth Credentials** field.
- For **Step Three,** provide the service account credentials JSON that you downloaded at the end of [this section.](#3-creating-service-account-credentials) The credentials will only work if you successfully [registered the service account key](#4-register-the-service-account-key-as-an-oauth-client) as an OAuth client in your G Suite account.
1. Click **Authenticate with Google**.
1. Click **Enable**.
**Result:** Google authentication is successfully configured.
@@ -0,0 +1,149 @@
---
title: Configure Keycloak (OIDC)
description: Create a Keycloak OpenID Connect (OIDC) client and configure Rancher to work with Keycloak. By the end your users will be able to sign into Rancher using their Keycloak logins
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-keycloak-oidc"/>
</head>
If your organization uses [Keycloak Identity Provider (IdP)](https://www.keycloak.org) for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials. Rancher supports integration with Keycloak using the OpenID Connect (OIDC) protocol and the SAML protocol. Both implementations are functionally equivalent when used with Rancher. This page describes the process to configure Rancher to work with Keycloak using the OIDC protocol.
If you prefer to use Keycloak with the SAML protocol instead, refer to [this page](configure-keycloak-saml.md).
If you have an existing configuration using the SAML protocol and want to switch to the OIDC protocol, refer to [this section](#migrating-from-saml-to-oidc).
## Prerequisites
- In Rancher, Keycloak (SAML) must be disabled.
- You must have a [Keycloak IdP Server](https://www.keycloak.org/guides#getting-started) configured.
- In Keycloak, create a [new OIDC client](https://www.keycloak.org/docs/latest/server_admin/#oidc-clients) with the settings below (a CLI alternative is sketched after this list). See the [Keycloak documentation](https://www.keycloak.org/docs/latest/server_admin/#oidc-clients) for help.
Setting | Value
------------|------------
`Client ID` | &lt;CLIENT_ID> (e.g. `rancher`)
`Name` | &lt;CLIENT_NAME> (e.g. `rancher`)
`Client Protocol` | `openid-connect`
`Access Type` | `confidential`
`Valid Redirect URI` | `https://yourRancherHostURL/verify-auth`
- In the new OIDC client, create [Mappers](https://www.keycloak.org/docs/latest/server_admin/#_protocol-mappers) to expose the user's fields.
- Create a new "Groups Mapper" with the settings below.
Setting | Value
------------|------------
`Name` | `Groups Mapper`
`Mapper Type` | `Group Membership`
`Token Claim Name` | `groups`
`Add to ID token` | `OFF`
`Add to access token` | `OFF`
`Add to user info` | `ON`
- Create a new "Client Audience" with the settings below.
Setting | Value
------------|------------
`Name` | `Client Audience`
`Mapper Type` | `Audience`
`Included Client Audience` | &lt;CLIENT_NAME>
`Add to access token` | `ON`
- Create a new "Group Path" mapper with the settings below.
Setting | Value
------------|------------
`Name` | `Group Path`
`Mapper Type` | `Group Membership`
`Token Claim Name` | `full_group_path`
`Full group path` | `ON`
`Add to user info` | `ON`
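The client described in the prerequisites above can also be created with the Keycloak Admin CLI. The following is a minimal sketch, assuming `kcadm.sh` lives at the default install path, the realm is named `myrealm`, and the host URLs are placeholders; the mappers still need to be added as described above:

```
# Authenticate the Admin CLI against the Keycloak server
/opt/keycloak/bin/kcadm.sh config credentials \
  --server https://yourKeycloakHostURL --realm master --user admin

# Create a confidential openid-connect client with the Rancher redirect URI
/opt/keycloak/bin/kcadm.sh create clients -r myrealm \
  -s clientId=rancher \
  -s name=rancher \
  -s protocol=openid-connect \
  -s publicClient=false \
  -s 'redirectUris=["https://yourRancherHostURL/verify-auth"]'
```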
## Configuring Keycloak in Rancher
1. In the Rancher UI, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Auth Provider**.
1. Select **Keycloak (OIDC)**.
1. Complete the **Configure a Keycloak OIDC account** form. For help with filling the form, see the [configuration reference](#configuration-reference).
1. After you complete the **Configure a Keycloak OIDC account** form, click **Enable**.
Rancher redirects you to the IdP login page. Enter credentials that authenticate with Keycloak IdP to validate your Rancher Keycloak configuration.
:::note
You may need to disable your popup blocker to see the IdP login page.
:::
**Result:** Rancher is configured to work with Keycloak using the OIDC protocol. Your users can now sign into Rancher using their Keycloak logins.
## Configuration Reference
| Field | Description |
| ------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Client ID | The `Client ID` of your Keycloak client. |
| Client Secret | The generated `Secret` of your Keycloak client. In the Keycloak console, select **Clients**, select the client you created, select the **Credentials** tab and copy the value of the `Secret` field. |
| Private Key / Certificate | A key/certificate pair to create a secure channel between Rancher and your IdP. Required if HTTPS/SSL is enabled on your Keycloak server. |
| Endpoints | Choose whether to use the generated values for the `Rancher URL`, `Issuer`, and `Auth Endpoint` fields, or to provide manual overrides if they are incorrect. |
| Keycloak URL | The URL for your Keycloak server. |
| Keycloak Realm | The name of the realm in which the Keycloak client was created. |
| Rancher URL | The URL for your Rancher Server. |
| Issuer | The URL of your IdP. |
| Auth Endpoint | The URL where users are redirected to authenticate. |
## Migrating from SAML to OIDC
This section describes the process to transition from using Rancher with Keycloak (SAML) to Keycloak (OIDC).
### Reconfigure Keycloak
1. Change the existing client to use the OIDC protocol. In the Keycloak console, select **Clients**, select the SAML client to migrate, select the **Settings** tab, change `Client Protocol` from `saml` to `openid-connect`, and click **Save**.
1. Verify the `Valid Redirect URIs` are still valid.
1. Select the **Mappers** tab and create a new Mapper with the settings below.
Setting | Value
------------|------------
`Name` | `Groups Mapper`
`Mapper Type` | `Group Membership`
`Token Claim Name` | `groups`
`Add to ID token` | `ON`
`Add to access token` | `ON`
`Add to user info` | `ON`
### Reconfigure Rancher
Before configuring Rancher to use Keycloak (OIDC), Keycloak (SAML) must first be disabled.
1. In the Rancher UI, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Auth Provider**.
1. Select **Keycloak (SAML)**.
1. Click **Disable**.
Configure Rancher to use Keycloak (OIDC) by following the steps in [this section](#configuring-keycloak-in-rancher).
:::note
After configuration is completed, Rancher user permissions will need to be reapplied as they are not automatically migrated.
:::
## Annex: Troubleshooting
If you are experiencing issues while testing the connection to the Keycloak server, first double-check the configuration options of your OIDC client. You may also inspect the Rancher logs to help pinpoint the cause of the issue. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging](../../../../faq/technical-items.md#how-can-i-enable-debug-logging) in this documentation.
All Keycloak-related log entries are prefixed with either `[generic oidc]` or `[keycloak oidc]`.
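For Kubernetes installs of Rancher, debug logging can be toggled at runtime. The following is a sketch based on the debug-logging FAQ, assuming the Rancher pods run in the `cattle-system` namespace; set the level back to `info` once you are done:

```
# Set the log level to debug on every Rancher pod
kubectl -n cattle-system get pods -l app=rancher --no-headers -o custom-columns=name:.metadata.name \
  | while read rancherpod; do
      kubectl -n cattle-system exec "$rancherpod" -c rancher -- loglevel --set debug
    done
```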
### You are not redirected to Keycloak
When you fill the **Configure a Keycloak OIDC account** form and click on **Enable**, you are not redirected to your IdP.
* Verify your Keycloak client configuration.
### The generated `Issuer` and `Auth Endpoint` are incorrect
* On the **Configure a Keycloak OIDC account** form, change **Endpoints** to `Specify (advanced)` and override the `Issuer` and `Auth Endpoint` values. To find the values, go to the Keycloak console and select **Realm Settings**, select the **General** tab, and click **OpenID Endpoint Configuration**. The JSON output will display values for `issuer` and `authorization_endpoint`.
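As a quick cross-check from the command line, the realm's OIDC discovery document publishes the same two values. The following is a sketch using `curl` and `jq`, with placeholder host and realm; older Keycloak distributions may need an `/auth` prefix before `/realms`:

```
# Print the issuer and authorization endpoint from the discovery document
curl -s "https://{KEYCLOAK-URL}/realms/{REALM-NAME}/.well-known/openid-configuration" \
  | jq -r '.issuer, .authorization_endpoint'
```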
### Keycloak Error: "Invalid grant_type"
* In some cases, this error message may be misleading and is actually caused by setting the `Valid Redirect URI` incorrectly.
@@ -0,0 +1,194 @@
---
title: Configure Keycloak (SAML)
description: Create a Keycloak SAML client and configure Rancher to work with Keycloak. By the end your users will be able to sign into Rancher using their Keycloak logins
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-keycloak-saml"/>
</head>
If your organization uses Keycloak Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials.
## Prerequisites
- You must have a [Keycloak IdP Server](https://www.keycloak.org/guides#getting-started) configured.
- In Keycloak, create a [new SAML client](https://www.keycloak.org/docs/latest/server_admin/#saml-clients), with the settings below. See the [Keycloak documentation](https://www.keycloak.org/docs/latest/server_admin/#saml-clients) for help.
Setting | Value
------------|------------
`Sign Documents` | `ON` <sup>1</sup>
`Sign Assertions` | `ON` <sup>1</sup>
All other `ON/OFF` Settings | `OFF`
`Client ID` | Either `https://yourRancherHostURL/v1-saml/keycloak/saml/metadata` or the value configured in the `Entity ID Field` of the Rancher Keycloak configuration<sup>2</sup>
`Client Name` | &lt;CLIENT_NAME> (e.g. `rancher`)
`Client Protocol` | `SAML`
`Valid Redirect URI` | `https://yourRancherHostURL/v1-saml/keycloak/saml/acs`
><sup>1</sup>: Optionally, you can enable either one or both of these settings.
><sup>2</sup>: Rancher SAML metadata won't be generated until a SAML provider is configured and saved.
![](/img/keycloak/keycloak-saml-client-configuration.png)
- In the new SAML client, create Mappers to expose the user's fields:
- Add all "Builtin Protocol Mappers"
![](/img/keycloak/keycloak-saml-client-builtin-mappers.png)
- Create a new "Group list" mapper to map the member attribute to a user's groups
![](/img/keycloak/keycloak-saml-client-group-mapper.png)
## Getting the IDP Metadata
<Tabs>
<TabItem value="Keycloak 5 and earlier">
To get the IDP metadata, export a `metadata.xml` file from your Keycloak client.
From the **Installation** tab, choose the **SAML Metadata IDPSSODescriptor** format option and download your file.
</TabItem>
<TabItem value="Keycloak 6-13">
1. From the **Configure** section, click the **Realm Settings** tab.
1. Click the **General** tab.
1. From the **Endpoints** field, click **SAML 2.0 Identity Provider Metadata**.
Verify the IDP metadata contains the following attributes:
```
xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
```
Some browsers, such as Firefox, may render/process the document such that the contents appear to have been modified, and some attributes appear to be missing. In this situation, use the raw response data that can be found using your browser.
The following is an example process for Firefox, but will vary slightly for other browsers:
1. Press **F12** to access the developer console.
1. Click the **Network** tab.
1. From the table, click the row containing `descriptor`.
1. From the details pane, click the **Response** tab.
1. Copy the raw response data.
The XML obtained contains `EntitiesDescriptor` as the root element. Rancher expects the root element to be `EntityDescriptor` rather than `EntitiesDescriptor`. Before passing this XML to Rancher, follow these steps to adjust it (a scripted alternative is sketched after the example below):
1. Copy to `EntityDescriptor` any attributes from `EntitiesDescriptor` that it does not already have.
1. Remove the `<EntitiesDescriptor>` tag from the beginning.
1. Remove the `</EntitiesDescriptor>` tag from the end of the XML.
You are left with something similar to the example below:
```
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" entityID="https://{KEYCLOAK-URL}/auth/realms/{REALM-NAME}">
....
</EntityDescriptor>
```
</TabItem>
<TabItem value="Keycloak 14+">
1. From the **Configure** section, click the **Realm Settings** tab.
1. Click the **General** tab.
1. From the **Endpoints** field, click **SAML 2.0 Identity Provider Metadata**.
Verify the IDP metadata contains the following attributes:
```
xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
```
Some browsers, such as Firefox, may render/process the document such that the contents appear to have been modified, and some attributes appear to be missing. In this situation, use the raw response data that can be found using your browser.
The following is an example process for Firefox, but will vary slightly for other browsers:
1. Press **F12** to access the developer console.
1. Click the **Network** tab.
1. From the table, click the row containing `descriptor`.
1. From the details pane, click the **Response** tab.
1. Copy the raw response data.
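To avoid the browser rendering issue entirely, you can also download the descriptor with `curl`. The following is a sketch with placeholder host and realm; distributions that still serve Keycloak under `/auth` need that prefix in the path:

```
# Download the raw SAML IDP metadata without any browser processing
curl -o metadata.xml \
  "https://{KEYCLOAK-URL}/realms/{REALM-NAME}/protocol/saml/descriptor"
```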
</TabItem>
</Tabs>
## Configuring Keycloak in Rancher
1. In the top left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, click **Auth Provider**.
1. Click **Keycloak SAML**.
1. Complete the **Configure Keycloak Account** form. For help with filling the form, see the [configuration reference](#configuration-reference).
1. After you complete the **Configure Keycloak Account** form, click **Enable**.
Rancher redirects you to the IdP login page. Enter credentials that authenticate with Keycloak IdP to validate your Rancher Keycloak configuration.
:::note
You may have to disable your popup blocker to see the IdP login page.
:::
**Result:** Rancher is configured to work with Keycloak. Your users can now sign into Rancher using their Keycloak logins.
:::note SAML Provider Caveats:
- SAML Protocol does not support search or lookup for users or groups. Therefore, there is no validation on users or groups when adding them to Rancher.
- When adding users, the exact user IDs (i.e. `UID Field`) must be entered correctly. As you type the user ID, there will be no search for other user IDs that may match.
- When adding groups, you must select the group from the drop-down that is next to the text box. Rancher assumes that any input from the text box is a user.
- The group drop-down shows only the groups that you are a member of. You will not be able to add groups that you are not a member of.
:::
## Configuration Reference
| Field | Description |
| ------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Display Name Field | The attribute that contains the display name of users. <br/><br/>Example: `givenName` |
| User Name Field | The attribute that contains the user name/given name. <br/><br/>Example: `email` |
| UID Field | An attribute that is unique to every user. <br/><br/>Example: `email` |
| Groups Field | Make entries for managing group memberships. <br/><br/>Example: `member` |
| Entity ID Field | The ID that needs to be configured as a client ID in the Keycloak client. <br/><br/>Default: `https://yourRancherHostURL/v1-saml/keycloak/saml/metadata` |
| Rancher API Host | The URL for your Rancher Server. |
| Private Key / Certificate | A key/certificate pair to create a secure channel between Rancher and your IdP. |
| IDP-metadata | The `metadata.xml` file that you exported from your IdP server. |
:::tip
You can generate a key/certificate pair using an openssl command. For example:
```
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout myservice.key -out myservice.cert
```
:::
## Annex: Troubleshooting
If you are experiencing issues while testing the connection to the Keycloak server, first double-check the configuration options of your SAML client. You may also inspect the Rancher logs to help pinpoint the cause of the problem. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging](../../../../faq/technical-items.md#how-can-i-enable-debug-logging) in this documentation.
### You are not redirected to Keycloak
When you click on **Authenticate with Keycloak**, you are not redirected to your IdP.
* Verify your Keycloak client configuration.
* Make sure `Force Post Binding` is set to `OFF`.
### Forbidden message displayed after IdP login
You are correctly redirected to your IdP login page and you are able to enter your credentials, however you get a `Forbidden` message afterwards.
* Check the Rancher debug log.
* If the log displays `ERROR: either the Response or Assertion must be signed`, make sure either `Sign Documents` or `Sign Assertions` is set to `ON` in your Keycloak client.
### HTTP 502 when trying to access /v1-saml/keycloak/saml/metadata
This is usually because the metadata is not created until a SAML provider is configured.
Try configuring and saving Keycloak as your SAML provider, then access the metadata again.
### Keycloak Error: "We're sorry, failed to process response"
* Check your Keycloak log.
* If the log displays `failed: org.keycloak.common.VerificationException: Client does not have a public key`, set `Encrypt Assertions` to `OFF` in your Keycloak client.
### Keycloak Error: "We're sorry, invalid requester"
* Check your Keycloak log.
* If the log displays `request validation failed: org.keycloak.common.VerificationException: SigAlg was null`, set `Client Signature Required` to `OFF` in your Keycloak client.
@@ -0,0 +1,111 @@
---
title: Configure Okta (SAML)
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-okta-saml"/>
</head>
If your organization uses Okta Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials.
:::note
Okta integration only supports Service Provider initiated logins.
:::
## Prerequisites
In Okta, create a SAML Application with the settings below. See the [Okta documentation](https://developer.okta.com/standards/SAML/setting_up_a_saml_application_in_okta) for help.
Setting | Value
------------|------------
`Single Sign on URL` | `https://yourRancherHostURL/v1-saml/okta/saml/acs`
`Audience URI (SP Entity ID)` | `https://yourRancherHostURL/v1-saml/okta/saml/metadata`
## Configuring Okta in Rancher
You can integrate Okta with Rancher, so that authenticated users can access Rancher resources through their group permissions. Okta returns a SAML assertion that authenticates a user, including which groups a user belongs to.
1. In the top left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, click **Auth Provider**.
1. Click **Okta**.
1. Complete the **Configure Okta Account** form. The examples below describe how you can map Okta attributes from attribute statements to fields within Rancher.
| Field | Description |
| ------------------------- | ----------------------------------------------------------------------------- |
| Display Name Field | The attribute name from an attribute statement that contains the display name of users. |
| User Name Field | The attribute name from an attribute statement that contains the user name/given name. |
| UID Field | The attribute name from an attribute statement that is unique to every user. |
| Groups Field | The attribute name in a group attribute statement that exposes your groups. |
| Rancher API Host | The URL for your Rancher Server. |
| Private Key / Certificate | A key/certificate pair used for Assertion Encryption. |
| Metadata XML | The `Identity Provider metadata` file that you find in the application `Sign On` section. |
:::tip
You can generate a key/certificate pair using an openssl command. For example:
```
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout myservice.key -out myservice.crt
```
:::
1. After you complete the **Configure Okta Account** form, click **Enable**.
Rancher redirects you to the IdP login page. Enter credentials that authenticate with Okta IdP to validate your Rancher Okta configuration.
:::note
If nothing seems to happen, it's likely that your browser blocked the pop-up. Make sure you disable the pop-up blocker for your Rancher domain and allow pop-ups in any other extensions you use.
:::
**Result:** Rancher is configured to work with Okta. Your users can now sign into Rancher using their Okta logins.
:::note SAML Provider Caveats:
If you configure Okta without OpenLDAP, you won't be able to search for or directly look up users or groups. This brings several caveats:
- Users and groups aren't validated when you assign permissions to them in Rancher.
- When adding users, the exact user IDs (i.e. `UID Field`) must be entered correctly. As you type the user ID, there will be no search for other user IDs that may match.
- When adding groups, you must select the group from the drop-down that is next to the text box. Rancher assumes that any input from the text box is a user.
- The group drop-down shows only the groups that you are a member of. You will not be able to add groups that you are not a member of.
:::
## Okta with OpenLDAP search
You can add an OpenLDAP backend to assist with user and group search. Rancher will display additional users and groups from the OpenLDAP service. This allows assigning permissions to groups that the logged-in user is not already a member of.
### OpenLDAP Prerequisites
If you use Okta as your IdP, you can [set up an LDAP interface](https://help.okta.com/en-us/Content/Topics/Directory/LDAP-interface-main.htm) for Rancher to use. You can also configure an external OpenLDAP server.
You must configure Rancher with an LDAP bind account (also known as a service account) so that Rancher can search and retrieve LDAP entries for users and groups that should have access. Don't use an administrator or personal account as the LDAP bind account. Instead, [create](https://help.okta.com/en-us/Content/Topics/users-groups-profiles/usgp-add-users.htm) a dedicated account in OpenLDAP with read-only access to users and groups under the configured search base.
:::warning Security Considerations
The OpenLDAP service account is used for all searches. Rancher users will see users and groups that the OpenLDAP service account can view, regardless of their individual SAML permissions.
:::
> **Using TLS?**
>
> If the certificate used by the OpenLDAP server is self-signed or from an unrecognized certificate authority, Rancher needs the CA certificate (concatenated with any intermediate certificates) in PEM format. Provide this certificate during the configuration so that Rancher can validate the certificate chain.
### Configure OpenLDAP in Rancher
[Configure the settings](../configure-openldap/openldap-config-reference.md) for the OpenLDAP server, groups and users. Note that nested group membership isn't available.
> Before you proceed with the configuration, please familiarize yourself with [external authentication configuration and principal users](authentication-config.md#external-authentication-configuration-and-principal-users).
1. Sign into Rancher using a local user assigned the [administrator](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions) role (i.e., the _local principal_).
1. In the top left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, click **Auth Provider**.
1. Click **Okta** or, if SAML is already configured, **Edit Config**.
1. Under **User and Group Search**, check **Configure an OpenLDAP server**.
If you experience issues when you test the connection to the OpenLDAP server, ensure that you entered the credentials for the service account and configured the search base correctly. Inspecting the Rancher logs can help pinpoint the root cause. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging](../../../../faq/technical-items.md#how-can-i-enable-debug-logging) for more information.
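Before configuring the search in Rancher, it may also help to verify the bind account and search base from a workstation. The following is a sketch using OpenLDAP's `ldapsearch`; the host, DNs, and object class are placeholders to adjust for your directory:

```
# Bind as the read-only service account and list a few user DNs
ldapsearch -H ldaps://ldap.example.com:636 \
  -D "uid=rancher-bind,ou=people,dc=example,dc=com" -W \
  -b "ou=people,dc=example,dc=com" \
  "(objectClass=inetOrgPerson)" dn
```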
@@ -0,0 +1,66 @@
---
title: Configure PingIdentity (SAML)
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-pingidentity"/>
</head>
If your organization uses Ping Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials.
>**Prerequisites:**
>
>- You must have a [Ping IdP Server](https://www.pingidentity.com/) configured.
>- Following are the Rancher Service Provider URLs needed for configuration:
Metadata URL: `https://<rancher-server>/v1-saml/ping/saml/metadata`
Assertion Consumer Service (ACS) URL: `https://<rancher-server>/v1-saml/ping/saml/acs`
Note that these URLs will not return valid data until the authentication configuration is saved in Rancher.
>- Export a `metadata.xml` file from your IdP Server. For more information, see the [PingIdentity documentation](https://documentation.pingidentity.com/pingfederate/pf83/index.shtml#concept_exportingMetadata.html).
1. In the top left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, click **Auth Provider**.
1. Click **Ping Identity**.
1. Complete the **Configure a Ping Account** form. Ping IdP lets you specify which data store you want to use. You can either add a database or use an existing LDAP server. For example, if you select your Active Directory (AD) server, the examples below describe how you can map AD attributes to fields within Rancher.
1. **Display Name Field**: Enter the AD attribute that contains the display name of users (example: `displayName`).
1. **User Name Field**: Enter the AD attribute that contains the user name/given name (example: `givenName`).
1. **UID Field**: Enter an AD attribute that is unique to every user (example: `sAMAccountName`, `distinguishedName`).
1. **Groups Field**: Make entries for managing group memberships (example: `memberOf`).
1. **Entity ID Field** (optional): The published, protocol-dependent, unique identifier of your partner. This ID defines your organization as the entity operating the server for SAML 2.0 transactions. This ID may have been obtained out-of-band or via a SAML metadata file.
1. **Rancher API Host**: Enter the URL for your Rancher Server.
1. **Private Key** and **Certificate**: This is a key-certificate pair to create a secure channel between Rancher and your IdP.
You can generate one using an openssl command. For example:
```
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com"
```
1. **IDP-metadata**: The `metadata.xml` file that you [exported from your IdP server](https://documentation.pingidentity.com/pingfederate/pf83/index.shtml#concept_exportingMetadata.html).
1. After you complete the **Configure Ping Account** form, click **Enable**.
Rancher redirects you to the IdP login page. Enter credentials that authenticate with Ping IdP to validate your Rancher PingIdentity configuration.
:::note
You may have to disable your popup blocker to see the IdP login page.
:::
**Result:** Rancher is configured to work with PingIdentity. Your users can now sign into Rancher using their PingIdentity logins.
:::note SAML Provider Caveats:
- SAML Protocol does not support search or lookup for users or groups. Therefore, there is no validation on users or groups when adding them to Rancher.
- When adding users, the exact user IDs (i.e. `UID Field`) must be entered correctly. As you type the user ID, there will be no search for other user IDs that may match.
- When adding groups, you must select the group from the drop-down that is next to the text box. Rancher assumes that any input from the text box is a user.
- The group drop-down shows only the groups that you are a member of. You will not be able to add groups that you are not a member of.
:::
@@ -0,0 +1,19 @@
---
title: Local Authentication
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/create-local-users"/>
</head>
Local authentication is the default until you configure an external authentication provider. Rancher stores user account information, such as usernames and passwords, locally. By default, the `admin` user that logs in to Rancher for the first time is a local user.
## Adding Local Users
Regardless of whether you use external authentication, you should create a few local authentication users so that you can continue using Rancher if your external authentication service encounters issues.
1. In the top left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, click **Users**.
1. Click **Create**.
1. Complete the **Add User** form.
1. Click **Create**.
@@ -0,0 +1,89 @@
---
title: Users and Groups
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/manage-users-and-groups"/>
</head>
Rancher relies on users and groups to determine who is allowed to log in to Rancher and which resources they can access. When you configure an external authentication provider, users from that provider will be able to log in to your Rancher server. When a user logs in, the authentication provider will supply your Rancher server with a list of groups to which the user belongs.
Access to clusters, projects, and global DNS providers and entries can be controlled by adding either individual users or groups to these resources. When you add a group to a resource, all users who are members of that group in the authentication provider will be able to access the resource with the permissions that you've specified for the group. For more information on roles and permissions, see [Role Based Access Control](../manage-role-based-access-control-rbac/manage-role-based-access-control-rbac.md).
## Managing Members
When adding a user or group to a resource, you can search for users or groups by beginning to type their name. The Rancher server will query the authentication provider to find users and groups that match what you've entered. Searching is limited to the authentication provider that you are currently logged in with. For example, if you've enabled GitHub authentication but are logged in using a [local](create-local-users.md) user account, you will not be able to search for GitHub users or groups.
All users, whether they are local users or from an authentication provider, can be viewed and managed. In the upper left corner, click **☰ > Users & Authentication**. In the left navigation bar, click **Users**.
:::note SAML Provider Caveats:
- SAML Protocol does not support search or lookup for users or groups. Therefore, there is no validation on users or groups when adding them to Rancher.
- When adding users, the exact user IDs (i.e. `UID Field`) must be entered correctly. As you type the user ID, there will be no search for other user IDs that may match.
- When adding groups, you must select the group from the drop-down that is next to the text box. Rancher assumes that any input from the text box is a user.
- The group drop-down shows only the groups that you are a member of. You will not be able to add groups that you are not a member of.
:::
## User Information
Rancher maintains information about each user that logs in through an authentication provider. This information includes whether the user is allowed to access your Rancher server and the list of groups that the user belongs to. Rancher keeps this user information so that the CLI, API, and kubectl can accurately reflect the access that the user has based on their group membership in the authentication provider.
Whenever a user logs in to the UI using an authentication provider, Rancher automatically updates this user information.
### Automatically Refreshing User Information
Rancher will periodically refresh the user information even before a user logs in through the UI. You can control how often Rancher performs this refresh.
Two settings control this behavior:
- **`auth-user-info-max-age-seconds`**
This setting controls how old a user's information can be before Rancher refreshes it. If a user makes an API call (either directly or by using the Rancher CLI or kubectl) and the time since the user's last refresh is greater than this setting, then Rancher will trigger a refresh. This setting defaults to `3600` seconds, i.e. 1 hour.
- **`auth-user-info-resync-cron`**
This setting controls a recurring schedule for resyncing authentication provider information for all users. Regardless of whether a user has logged in or used the API recently, this will cause the user to be refreshed at the specified interval. This setting defaults to `0 0 * * *`, i.e. once a day at midnight. See the [Cron documentation](https://en.wikipedia.org/wiki/Cron) for more information on valid values for this setting.
To change these settings,
1. In the upper left corner, click **☰ > Global Settings**.
1. Go to the setting you want to configure and click **⋮ > Edit Setting**.
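If you manage Rancher from the command line, these settings are also exposed as cluster-scoped `settings.management.cattle.io` objects on the local (Rancher) cluster and can be patched with kubectl. The following is a sketch, assuming your kubeconfig points at that cluster and using example values:

```
# Refresh a user's info when it is older than 30 minutes (1800 seconds)
kubectl patch settings.management.cattle.io auth-user-info-max-age-seconds \
  --type merge -p '{"value":"1800"}'

# Resync all users every 6 hours instead of once a day
kubectl patch settings.management.cattle.io auth-user-info-resync-cron \
  --type merge -p '{"value":"0 */6 * * *"}'
```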
:::note
Since SAML does not support user lookup, SAML-based authentication providers do not support periodically refreshing user information. User information will only be refreshed when the user logs into the Rancher UI.
:::
### Manually Refreshing User Information
If you are not sure the last time Rancher performed an automatic refresh of user information, you can perform a manual refresh of all users.
1. In the upper left corner, click **☰ > Users & Authentication**.
1. On the **Users** page, click on **Refresh Group Memberships**.
**Results:** Rancher refreshes the user information for all users. Requesting this refresh will update which users can access Rancher as well as all the groups that each user belongs to.
:::note
Since SAML does not support user lookup, SAML-based authentication providers do not support the ability to manually refresh user information. User information will only be refreshed when the user logs into the Rancher UI.
:::
## Minimum Password Length
By default, user passwords must be at least 12 characters long. However, you can customize the password length requirement:
1. In the upper left corner, click **☰ > Global Settings**.
1. Go to **`password-min-length`** and click **⋮ > Edit Setting**.
1. Enter an integer value between 2 and 256, and click **Save**.
## Session Length
The default length (TTL) of each user session is adjustable. The default session length is 16 hours.
1. In the upper left corner, click **☰ > Global Settings**.
1. Go to **`auth-user-session-ttl-minutes`** and click **⋮ > Edit Setting**.
1. Enter the number of minutes a session should last, then click **Save**.
**Result:** Users are automatically logged out of Rancher after the set number of minutes.
@@ -0,0 +1,85 @@
---
title: Authentication, Permissions and Global Settings
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration"/>
</head>
After installation, the [system administrator](manage-role-based-access-control-rbac/global-permissions.md) should configure Rancher to set up authentication, authorization, security, default settings, security policies, drivers, and global DNS entries.
## First Log In
After you log into Rancher for the first time, Rancher will prompt you for a **Rancher Server URL**. You should set the URL to the main entry point to the Rancher Server. When a load balancer sits in front of a Rancher Server cluster, the URL should resolve to the load balancer. The system will automatically try to infer the Rancher Server URL from the IP address or host name of the host running the Rancher Server. This is only correct if you are running a single-node Rancher Server installation. In most cases, therefore, you need to set the Rancher Server URL to the correct value yourself.
:::danger
After you set the Rancher Server URL, we do not support updating it. Set the URL with extreme care.
:::
## Authentication
One of the key features that Rancher adds to Kubernetes is centralized user authentication. This feature allows you to set up local users and/or connect to an external authentication provider. By connecting to an external authentication provider, you can leverage that provider's users and groups.
For more information on how authentication works and how to configure each provider, see [Authentication](authentication-config/authentication-config.md).
## Authorization
Within Rancher, each person authenticates as a _user_, which is a login that grants you access to Rancher. Once the user logs in to Rancher, their _authorization_, or their access rights within the system, is determined by the user's role. Rancher provides built-in roles to allow you to easily configure a user's permissions to resources, but Rancher also provides the ability to customize the roles for each Kubernetes resource.
For more information on how authorization works and how to customize roles, see [Role-Based Access Control (RBAC)](manage-role-based-access-control-rbac/manage-role-based-access-control-rbac.md).
## Pod Security Policies
_Pod Security Policies_ (PSPs) are objects that control security-sensitive aspects of the pod specification, such as root privileges. If a pod does not meet the conditions specified in the PSP, Kubernetes will not allow it to start, and Rancher will display an error message.
For more information on how to create and use PSPs, see [Pod Security Policies](create-pod-security-policies.md).
## Provisioning Drivers
Drivers in Rancher allow you to manage which providers can be used to provision [hosted Kubernetes clusters](../kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/set-up-clusters-from-hosted-kubernetes-providers.md) or [nodes in an infrastructure provider](../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md) to allow Rancher to deploy and manage Kubernetes.
For more information, see [Provisioning Drivers](about-provisioning-drivers/about-provisioning-drivers.md).
## Adding Kubernetes Versions into Rancher
With this feature, you can upgrade to the latest version of Kubernetes as soon as it is released, without upgrading Rancher. This feature allows you to easily upgrade Kubernetes patch versions (i.e. `v1.15.X`), but it is not intended for upgrading Kubernetes minor versions (i.e. `v1.X.0`), as Kubernetes tends to deprecate or add APIs between minor versions.
The information that Rancher uses to provision [RKE clusters](../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) is located in the Rancher Kubernetes Metadata. For details on how metadata works, how to configure it, and how to change the Kubernetes version used for provisioning RKE clusters, see [Rancher Kubernetes Metadata](../../../getting-started/installation-and-upgrade/upgrade-kubernetes-without-upgrading-rancher.md).
## Global Settings
Options that control certain global-level Rancher settings are available from the top navigation bar.
Click **☰** in the top left corner, then select **Global Settings**, to view and configure the following settings:
- **Settings**: Various Rancher defaults, such as the minimum length for a user's password (`password-min-length`). You should be cautious when modifying these settings, as invalid values may break your Rancher installation.
- **Feature Flags**: Rancher features that can be toggled on or off. Some of these flags are for [experimental features](#enabling-experimental-features).
- **Banners**: Elements you can add to fixed locations on the portal. For example, you can use these options to [set a custom banner](custom-branding.md#fixed-banners) for users when they log in to Rancher.
- **Branding**: Rancher UI design elements that you can [customize](custom-branding.md). You can add a custom logo or favicon, and modify UI colors.
- **Performance**: Performance settings for the Rancher UI, such as incremental resource loading.
- **Home Links**: Links displayed on the Rancher UI **Home** page. You can modify visibility for the default links or add your own links.
### Enabling Experimental Features
Rancher includes some features that are experimental and/or disabled by default. Feature flags allow you to enable these features. For more information, refer to the section about [feature flags.](../../advanced-user-guides/enable-experimental-features/enable-experimental-features.md)
### Global Configuration
**Global Configuration** options aren't visible unless you activate the **legacy** [feature flag](../../advanced-user-guides/enable-experimental-features/enable-experimental-features.md). The **legacy** flag is disabled by default on fresh Rancher installs of v2.6 and later. If you upgrade from an earlier Rancher version, or activate the **legacy** feature flag on Rancher v2.6 and later, **Global Configuration** is available from the top navigation menu:
1. Click **☰** in the top left corner.
1. Select **Global Configuration** from **Legacy Apps**.
The following features are available under **Global Configuration**:
- **Catalogs**
- **Global DNS Entries**
- **Global DNS Providers**
As these are legacy features, please see the Rancher v2.0–v2.4 docs on [catalogs](/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/helm-charts-in-rancher.md), [global DNS entries](/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/globaldns.md#adding-a-global-dns-entry), and [global DNS providers](/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/globaldns.md#editing-a-global-dns-provider) for more details.
@@ -0,0 +1,40 @@
---
title: Configuring Microsoft Active Directory Federation Service (SAML)
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-microsoft-ad-federation-service-saml"/>
</head>
If your organization uses Microsoft Active Directory Federation Services (AD FS) for user authentication, you can configure Rancher to allow your users to log in using their AD FS credentials.
## Prerequisites
You must have Rancher installed.
- Obtain your Rancher Server URL. During AD FS configuration, substitute this URL for the `<RANCHER_SERVER>` placeholder.
- You must have a global administrator account on your Rancher installation.
You must have a [Microsoft AD FS Server](https://docs.microsoft.com/en-us/windows-server/identity/active-directory-federation-services) configured.
- Obtain your AD FS Server IP/DNS name. During AD FS configuration, substitute this IP/DNS name for the `<AD_SERVER>` placeholder.
- You must have access to add [Relying Party Trusts](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/create-a-relying-party-trust) on your AD FS Server.
## Setup Outline
Setting up Microsoft AD FS with Rancher Server requires configuring AD FS on your Active Directory server, and configuring Rancher to utilize your AD FS server. The following pages serve as guides for setting up Microsoft AD FS authentication on your Rancher installation.
- [1. Configuring Microsoft AD FS for Rancher](configure-ms-adfs-for-rancher.md)
- [2. Configuring Rancher for Microsoft AD FS](configure-rancher-for-ms-adfs.md)
:::note SAML Provider Caveats:
- SAML Protocol does not support search or lookup for users or groups. Therefore, there is no validation on users or groups when adding them to Rancher.
- When adding users, the exact user IDs (i.e. `UID Field`) must be entered correctly. As you type the user ID, there will be no search for other user IDs that may match.
- When adding groups, you must select the group from the drop-down that is next to the text box. Rancher assumes that any input from the text box is a user.
- The group drop-down shows only the groups that you are a member of. You will not be able to add groups that you are not a member of.
:::
### [Next: Configuring Microsoft AD FS for Rancher](configure-ms-adfs-for-rancher.md)
@@ -0,0 +1,86 @@
---
title: 1. Configuring Microsoft AD FS for Rancher
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-microsoft-ad-federation-service-saml/configure-ms-adfs-for-rancher"/>
</head>
Before you configure Rancher to support Active Directory Federation Service (AD FS), you must add Rancher as a [relying party trust](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/technical-reference/understanding-key-ad-fs-concepts) in AD FS.
1. Log into your AD server as an administrative user.
1. Open the **AD FS Management** console. Select **Add Relying Party Trust...** from the **Actions** menu and click **Start**.
![](/img/adfs/adfs-overview.png)
1. Select **Enter data about the relying party manually** as the option for obtaining data about the relying party.
![](/img/adfs/adfs-add-rpt-2.png)
1. Enter your desired **Display name** for your Relying Party Trust. For example, `Rancher`.
![](/img/adfs/adfs-add-rpt-3.png)
1. Select **AD FS profile** as the configuration profile for your relying party trust.
![](/img/adfs/adfs-add-rpt-4.png)
1. Leave the **optional token encryption certificate** empty, as the Rancher AD FS integration does not use one.
![](/img/adfs/adfs-add-rpt-5.png)
1. Select **Enable support for the SAML 2.0 WebSSO protocol**
and enter `https://<rancher-server>/v1-saml/adfs/saml/acs` for the service URL.
![](/img/adfs/adfs-add-rpt-6.png)
1. Add `https://<rancher-server>/v1-saml/adfs/saml/metadata` as the **Relying party trust identifier**.
![](/img/adfs/adfs-add-rpt-7.png)
1. This tutorial will not cover multi-factor authentication; please refer to the [Microsoft documentation](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/configure-additional-authentication-methods-for-ad-fs) if you would like to configure multi-factor authentication.
![](/img/adfs/adfs-add-rpt-8.png)
1. From **Choose Issuance Authorization Rules**, you may select either of the available options, depending on your use case. For the purposes of this guide, select **Permit all users to access this relying party**.
![](/img/adfs/adfs-add-rpt-9.png)
1. After reviewing your settings, select **Next** to add the relying party trust.
![](/img/adfs/adfs-add-rpt-10.png)
1. Select **Open the Edit Claim Rules...** and click **Close**.
![](/img/adfs/adfs-add-rpt-11.png)
1. On the **Issuance Transform Rules** tab, click **Add Rule...**.
![](/img/adfs/adfs-edit-cr.png)
1. Select **Send LDAP Attributes as Claims** as the **Claim rule template**.
![](/img/adfs/adfs-add-tcr-1.png)
1. Set the **Claim rule name** to your desired name (for example, `Rancher Attributes`), select **Active Directory** as the **Attribute store**, and create mappings that reflect the table below:
| LDAP Attribute | Outgoing Claim Type |
| -------------------------------------------- | ------------------- |
| Given-Name | Given Name |
| User-Principal-Name | UPN |
| Token-Groups - Qualified by Long Domain Name | Group |
| SAM-Account-Name | Name |
<br/>
![](/img/adfs/adfs-add-tcr-2.png)
1. Download the `federationmetadata.xml` from your AD server at:
```
https://<AD_SERVER>/federationmetadata/2007-06/federationmetadata.xml
```
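From a workstation that can reach the AD FS server, the download can be scripted with `curl`. The following is a sketch, keeping the `<AD_SERVER>` placeholder from above:

```
# Download the AD FS federation metadata document
curl -o federationmetadata.xml \
  "https://<AD_SERVER>/federationmetadata/2007-06/federationmetadata.xml"
```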
**Result:** You've added Rancher as a relying trust party. Now you can configure Rancher to leverage AD.
### [Next: Configuring Rancher for Microsoft AD FS](configure-rancher-for-ms-adfs.md)
@@ -0,0 +1,53 @@
---
title: 2. Configuring Rancher for Microsoft AD FS
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-microsoft-ad-federation-service-saml/configure-rancher-for-ms-adfs"/>
</head>
After you complete [Configuring Microsoft AD FS for Rancher](configure-ms-adfs-for-rancher.md), enter your Active Directory Federation Service (AD FS) information into Rancher so that AD FS users can authenticate with Rancher.
:::note Important Notes for Configuring Your AD FS Server:
- The SAML 2.0 WebSSO Protocol Service URL is: `https://<RANCHER_SERVER>/v1-saml/adfs/saml/acs`
- The Relying Party Trust identifier URL is: `https://<RANCHER_SERVER>/v1-saml/adfs/saml/metadata`
- You must export the `federationmetadata.xml` file from your AD FS server. This can be found at: `https://<AD_SERVER>/federationmetadata/2007-06/federationmetadata.xml`
:::
1. In the top left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, click **Auth Provider**.
1. Click **ADFS**.
1. Complete the **Configure AD FS Account** form. Microsoft AD FS lets you specify an existing Active Directory (AD) server. The [configuration section below](#configuration) describes how you can map AD attributes to fields within Rancher.
1. After you complete the **Configure AD FS Account** form, click **Enable**.
Rancher redirects you to the AD FS login page. Enter credentials that authenticate with Microsoft AD FS to validate your Rancher AD FS configuration.
:::note
You may have to disable your popup blocker to see the AD FS login page.
:::
**Result:** Rancher is configured to work with Microsoft AD FS. Your users can now sign into Rancher using their AD FS logins.
## Configuration
| Field | Description |
|---------------------------|-----------------|
| Display Name Field | The AD attribute that contains the display name of users. <br/><br/>Example: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name` |
| User Name Field | The AD attribute that contains the user name/given name. <br/><br/>Example: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname` |
| UID Field | An AD attribute that is unique to every user. <br/><br/>Example: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` |
| Groups Field | Make entries for managing group memberships. <br/><br/>Example: `http://schemas.xmlsoap.org/claims/Group` |
| Rancher API Host | The URL for your Rancher Server. |
| Private Key / Certificate | This is a key-certificate pair to create a secure channel between Rancher and your AD FS. Ensure you set the Common Name (CN) to your Rancher Server URL.<br/><br/>[Certificate creation command](#example-certificate-creation-command) |
| Metadata XML | The `federationmetadata.xml` file exported from your AD FS server. <br/><br/>You can find this file at `https://<AD_SERVER>/federationmetadata/2007-06/federationmetadata.xml`. |
### Example Certificate Creation Command
You can generate a certificate using an openssl command. For example:
```
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com"
```
@@ -0,0 +1,56 @@
---
title: Configuring OpenLDAP
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-openldap"/>
</head>
If your organization uses LDAP for user authentication, you can configure Rancher to communicate with an OpenLDAP server to authenticate users. This allows Rancher admins to control access to clusters and projects based on users and groups managed externally in the organization's central user repository, while allowing end-users to authenticate with their LDAP credentials when logging in to the Rancher UI.
## Prerequisites
Rancher must be configured with an LDAP bind account (also known as a service account) to search and retrieve LDAP entries pertaining to users and groups that should have access. It is recommended not to use an administrator or personal account for this purpose; instead, create a dedicated account in OpenLDAP with read-only access to users and groups under the configured search base (see below).
> **Using TLS?**
>
> If the certificate used by the OpenLDAP server is self-signed or not from a recognized certificate authority, make sure you have the CA certificate (concatenated with any intermediate certificates) in PEM format at hand. You will have to paste in this certificate during the configuration so that Rancher can validate the certificate chain.
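If you don't have the CA bundle at hand, you may be able to inspect the chain the server presents with openssl. The following is a sketch with a placeholder host; confirm with your LDAP administrator that the output really contains the full CA chain before pasting it into Rancher:

```
# Show the certificate chain presented by the LDAP server over TLS
openssl s_client -connect ldap.example.com:636 -showcerts </dev/null
```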
## Configure OpenLDAP in Rancher
Configure the settings for the OpenLDAP server, groups and users. For help filling out each field, refer to the [configuration reference.](openldap-config-reference.md)
> Before you proceed with the configuration, please familiarize yourself with the concepts of [External Authentication Configuration and Principal Users](../authentication-config/authentication-config.md#external-authentication-configuration-and-principal-users).
1. In the top left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, click **Auth Provider**.
1. Click **OpenLDAP**. Fill out the **Configure an OpenLDAP server** form.
1. Click **Enable**.
### Test Authentication
Once you have completed the configuration, proceed by testing the connection to the OpenLDAP server. Authentication with OpenLDAP will be enabled implicitly if the test is successful.
:::note
The OpenLDAP user pertaining to the credentials entered in this step will be mapped to the local principal account and assigned administrator privileges in Rancher. You should therefore make a conscious decision about which LDAP account you use to perform this step.
:::
1. Enter the **username** and **password** for the OpenLDAP account that should be mapped to the local principal account.
2. Click **Authenticate With OpenLDAP** to test the OpenLDAP connection and finalize the setup.
**Result:**
- OpenLDAP authentication is configured.
- The LDAP user pertaining to the entered credentials is mapped to the local principal (administrative) account.
:::note
You will still be able to log in using the locally configured `admin` account and password in case of a disruption of LDAP services.
:::
## Annex: Troubleshooting
If you are experiencing issues while testing the connection to the OpenLDAP server, first double-check the credentials entered for the service account as well as the search base configuration. You may also inspect the Rancher logs to help pinpoint the cause of the problem. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging](../../../../faq/technical-items.md#how-can-i-enable-debug-logging) in this documentation.
@@ -0,0 +1,82 @@
---
title: OpenLDAP Configuration Reference
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-openldap/openldap-config-reference"/>
</head>
For further details on configuring OpenLDAP authentication, refer to the [official documentation.](https://www.openldap.org/doc/)
> Before you proceed with the configuration, please familiarize yourself with the concepts of [External Authentication Configuration and Principal Users](../authentication-config/authentication-config.md#external-authentication-configuration-and-principal-users).
## Background: OpenLDAP Authentication Flow
1. When a user attempts to log in with LDAP credentials, Rancher creates an initial bind to the LDAP server using a service account with permissions to search the directory and read user/group attributes.
2. Rancher then searches the directory for the user by using a search filter based on the provided username and configured attribute mappings.
3. Once the user has been found, they are authenticated with another LDAP bind request using the user's DN and provided password.
4. Once authentication succeeds, Rancher resolves the group memberships, both from the membership attribute in the user's object and by performing a group search based on the configured user mapping attribute.
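As a rough sketch, assuming a standard schema and a login attempt by a hypothetical user `jsmith`, the queries derived from the configured attribute mappings could look like this (all names, attributes, and keys below are illustrative, not Rancher's literal internals):

```yaml
# Illustrative only - the actual filters depend on your schema configuration
userSearchFilter: "(&(objectClass=inetOrgPerson)(uid=jsmith))"      # step 2
userBindDN: "uid=jsmith,ou=people,dc=example,dc=com"                # step 3: DN found by the search
groupSearchFilter: "(&(objectClass=groupOfNames)(member=uid=jsmith,ou=people,dc=example,dc=com))"  # step 4
```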
## OpenLDAP Server Configuration
You will need to enter the address, port, and protocol to connect to your OpenLDAP server. `389` is the standard port for insecure traffic, `636` for TLS traffic.
> **Using TLS?**
>
> If the certificate used by the OpenLDAP server is self-signed or not from a recognized certificate authority, make sure you have the CA certificate (concatenated with any intermediate certificates) in PEM format at hand. You will have to paste in this certificate during the configuration so that Rancher can validate the certificate chain.
If you are in doubt about the correct values to enter in the user/group Search Base configuration fields, consult your LDAP administrator or refer to the section [Identify Search Base and Schema using ldapsearch](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-active-directory.md#annex-identify-search-base-and-schema-using-ldapsearch) in the Active Directory authentication documentation.
<figcaption>OpenLDAP Server Parameters</figcaption>
| Parameter | Description |
|:--|:--|
| Hostname | Specify the hostname or IP address of the OpenLDAP server |
| Port | Specify the port at which the OpenLDAP server is listening for connections. Unencrypted LDAP normally uses the standard port of 389, while LDAPS uses port 636.|
| TLS | Check this box to enable LDAP over SSL/TLS (commonly known as LDAPS). You will also need to paste in the CA certificate if the server uses a self-signed/enterprise-signed certificate. |
| Server Connection Timeout | The duration, in seconds, that Rancher waits before considering the server unreachable. |
| Service Account Distinguished Name | Enter the Distinguished Name (DN) of the user that should be used to bind, search and retrieve LDAP entries. |
| Service Account Password | The password for the service account. |
| User Search Base | Enter the Distinguished Name of the node in your directory tree from which to start searching for user objects. All users must be descendants of this base DN. For example: "ou=people,dc=acme,dc=com".|
| Group Search Base | If your groups live under a different node than the one configured under `User Search Base`, enter its Distinguished Name here. Otherwise, leave this field empty. For example: "ou=groups,dc=acme,dc=com".|
## User/Group Schema Configuration
If your OpenLDAP directory deviates from the standard OpenLDAP schema, you must complete the **Customize Schema** section to match it.
Note that the attribute mappings configured in this section are used by Rancher to construct search filters and resolve group membership. It is therefore always recommended to verify that the configuration here matches the schema used in your OpenLDAP.
If you are unfamiliar with the user/group schema used in the OpenLDAP server, consult your LDAP administrator or refer to the section [Identify Search Base and Schema using ldapsearch](../../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-active-directory.md#annex-identify-search-base-and-schema-using-ldapsearch) in the Active Directory authentication documentation.
### User Schema Configuration
The table below details the parameters for the user schema configuration.
<figcaption>User Schema Configuration Parameters</figcaption>
| Parameter | Description |
|:--|:--|
| Object Class | The name of the object class used for user objects in your domain. Specify only the name of the object class; *don't* wrap it in an LDAP filter such as `&(objectClass=xxxx)`. |
| Username Attribute | The user attribute whose value is suitable as a display name. |
| Login Attribute | The attribute whose value matches the username part of credentials entered by your users when logging in to Rancher. This is typically `uid`. |
| User Member Attribute | The user attribute containing the Distinguished Names of groups the user is a member of. Usually this is one of `memberOf` or `isMemberOf`. |
| Search Attribute | When a user enters text to add users or groups in the UI, Rancher queries the LDAP server and attempts to match users by the attributes provided in this setting. Multiple attributes can be specified by separating them with the pipe ("\|") symbol. |
| User Enabled Attribute | If the schema of your OpenLDAP server supports a user attribute whose value can be evaluated to determine if the account is disabled or locked, enter the name of that attribute. The default OpenLDAP schema does not support this and the field should usually be left empty. |
| Disabled Status Bitmask | This is the value for a disabled/locked user account. The parameter is ignored if `User Enabled Attribute` is empty. |
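To make the mappings concrete, here is a hypothetical user entry from a standard OpenLDAP schema, with a comment noting the configuration field each attribute corresponds to:

```yaml
# Hypothetical user entry, shown as attribute: value pairs for readability
dn: uid=jsmith,ou=people,dc=example,dc=com
objectClass: inetOrgPerson                          # Object Class
cn: Jane Smith                                      # Username Attribute (display name)
uid: jsmith                                         # Login Attribute
memberOf: cn=devops,ou=groups,dc=example,dc=com     # User Member Attribute
```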
### Group Schema Configuration
The table below details the parameters for the group schema configuration.
<figcaption>Group Schema Configuration Parameters</figcaption>
| Parameter | Description |
|:--|:--|
| Object Class | The name of the object class used for group entries in your domain. Specify only the name of the object class; *don't* wrap it in an LDAP filter such as `&(objectClass=xxxx)`. |
| Name Attribute | The group attribute whose value is suitable for a display name. |
| Group Member User Attribute | The name of the **user attribute** whose format matches the group members in the `Group Member Mapping Attribute`. |
| Group Member Mapping Attribute | The name of the group attribute containing the members of a group. |
| Search Attribute | Attribute used to construct search filters when adding groups to clusters or projects in the UI. See description of user schema `Search Attribute`. |
| Group DN Attribute | The name of the group attribute whose format matches the values in the user's group membership attribute. See `User Member Attribute`. |
| Nested Group Membership | This setting defines whether Rancher should resolve nested group memberships. Use it only if your organization makes use of nested memberships (i.e., you have groups that contain other groups as members). This option is disabled if you are using Shibboleth. |
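Continuing the hypothetical user entry from the previous section, a matching group entry could look like the following. Because the `member` values are full user DNs, the corresponding `Group Member User Attribute` would be an attribute holding the user's DN (for example, the operational attribute `entryDN`):

```yaml
# Hypothetical group entry, shown as attribute: value pairs for readability
dn: cn=devops,ou=groups,dc=example,dc=com
objectClass: groupOfNames                           # Object Class
cn: devops                                          # Name Attribute
member: uid=jsmith,ou=people,dc=example,dc=com      # Group Member Mapping Attribute
```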
@@ -0,0 +1,32 @@
---
title: Group Permissions with Shibboleth and OpenLDAP
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-shibboleth-saml/about-group-permissions"/>
</head>
Because Shibboleth is a SAML provider, it doesn't support searching for groups. While a Shibboleth integration can validate user credentials, it can't be used to assign permissions to groups in Rancher without additional configuration.
One solution to this problem is to configure an OpenLDAP identity provider. With an OpenLDAP back end for Shibboleth, you will be able to search for groups in Rancher and assign them to resources such as clusters, projects, or namespaces from the Rancher UI.
### Terminology
- **Shibboleth** is a single sign-on log-in system for computer networks and the Internet. It allows people to sign in using just one identity to various systems. It validates user credentials, but does not, on its own, handle group memberships.
- **SAML:** Security Assertion Markup Language, an open standard for exchanging authentication and authorization data between an identity provider and a service provider.
- **OpenLDAP:** a free, open-source implementation of the Lightweight Directory Access Protocol (LDAP). It is used to manage an organization's computers and users. OpenLDAP is useful for Rancher users because it supports groups. In Rancher, it is possible to assign permissions to groups so that they can access resources such as clusters, projects, or namespaces, as long as the groups already exist in the identity provider.
- **IdP or IDP:** An identity provider. OpenLDAP is an example of an identity provider.
### Adding OpenLDAP Group Permissions to Rancher Resources
The diagram below illustrates how members of an OpenLDAP group can access resources in Rancher that the group has permissions for.
For example, a cluster owner could add an OpenLDAP group to a cluster so that its members have permission to view most cluster-level resources and create new projects. The OpenLDAP group members then have access to the cluster as soon as they log in to Rancher.
In this scenario, OpenLDAP allows the cluster owner to search for groups when assigning permissions. Without OpenLDAP, the functionality to search for groups would not be supported.
When a member of the OpenLDAP group logs in to Rancher, she is redirected to Shibboleth and enters her username and password.
Shibboleth validates her credentials and retrieves user attributes from OpenLDAP, including groups. Then Shibboleth sends a SAML assertion to Rancher that includes the user attributes. Rancher uses the group data to give her access to all of the resources that her groups have permissions for.
![Adding OpenLDAP Group Permissions to Rancher Resources](/img/shibboleth-with-openldap-groups.svg)
@@ -0,0 +1,106 @@
---
title: Configuring Shibboleth (SAML)
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-shibboleth-saml"/>
</head>
If your organization uses Shibboleth Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in to Rancher using their Shibboleth credentials.
In this configuration, when Rancher users log in, they will be redirected to the Shibboleth IdP to enter their credentials. After authentication, they will be redirected back to the Rancher UI.
If you also configure OpenLDAP as the back end to Shibboleth, it will return a SAML assertion to Rancher with user attributes that include groups. Then the authenticated user will be able to access resources in Rancher that their groups have permissions for.
> The instructions in this section assume that you understand how Rancher, Shibboleth, and OpenLDAP work together. For a more detailed explanation of how it works, refer to [this page.](about-group-permissions.md)
## Setting up Shibboleth in Rancher
### Shibboleth Prerequisites
> - You must have a Shibboleth IdP Server configured.
> - The following Rancher Service Provider URLs are needed for configuration:
>   - Metadata URL: `https://<rancher-server>/v1-saml/shibboleth/saml/metadata`
>   - Assertion Consumer Service (ACS) URL: `https://<rancher-server>/v1-saml/shibboleth/saml/acs`
> - Export a `metadata.xml` file from your IdP Server. For more information, see the [Shibboleth documentation.](https://wiki.shibboleth.net/confluence/display/SP3/Home)
### Configure Shibboleth in Rancher
If your organization uses Shibboleth for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials.
1. In the top left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, click **Auth Provider**.
1. Click **Shibboleth**.
1. Complete the **Configure Shibboleth Account** form. Shibboleth IdP lets you specify what data store you want to use. You can either add a database or use an existing LDAP server. For example, if you select your Active Directory (AD) server, the examples below describe how you can map AD attributes to fields within Rancher.
1. **Display Name Field**: Enter the AD attribute that contains the display name of users (example: `displayName`).
1. **User Name Field**: Enter the AD attribute that contains the user name/given name (example: `givenName`).
1. **UID Field**: Enter an AD attribute that is unique to every user (example: `sAMAccountName`, `distinguishedName`).
1. **Groups Field**: Make entries for managing group memberships (example: `memberOf`).
1. **Rancher API Host**: Enter the URL for your Rancher Server.
1. **Private Key** and **Certificate**: This is a key/certificate pair used to secure communication between Rancher and your IdP.
You can generate one using an openssl command. For example:
```
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com"
```
1. **IDP-metadata**: The `metadata.xml` file that you exported from your IdP server.
1. After you complete the **Configure Shibboleth Account** form, click **Enable**.
Rancher redirects you to the IdP login page. Enter credentials that authenticate with Shibboleth IdP to validate your Rancher Shibboleth configuration.
:::note
You may have to disable your popup blocker to see the IdP login page.
:::
**Result:** Rancher is configured to work with Shibboleth. Your users can now sign into Rancher using their Shibboleth logins.
### SAML Provider Caveats
If you configure Shibboleth without OpenLDAP, the following caveats apply because the SAML protocol does not support searching or looking up users and groups.
- There is no validation on users or groups when assigning permissions to them in Rancher.
- When adding users, the exact user IDs (i.e., the UID Field) must be entered correctly. As you type the user ID, there is no search for other user IDs that may match.
- When adding groups, you must select the group from the drop-down that is next to the text box. Rancher assumes that any input from the text box is a user.
- The group drop-down shows only the groups that you are a member of. You will not be able to add groups that you are not a member of.
To enable searching for groups when assigning permissions in Rancher, you will need to configure a back end for the SAML provider that supports groups, such as OpenLDAP.
## Setting up OpenLDAP in Rancher
If you also configure OpenLDAP as the back end to Shibboleth, it will return a SAML assertion to Rancher with user attributes that include groups. Then authenticated users will be able to access resources in Rancher that their groups have permissions for.
### OpenLDAP Prerequisites
Rancher must be configured with an LDAP bind account (also known as a service account) to search and retrieve LDAP entries pertaining to users and groups that should have access. Don't use an administrator or personal account for this purpose; instead, create a dedicated account in OpenLDAP with read-only access to users and groups under the configured search base (see below).
> **Using TLS?**
>
> If the certificate used by the OpenLDAP server is self-signed or not from a recognized certificate authority, make sure you have the CA certificate (concatenated with any intermediate certificates) in PEM format at hand. You will have to paste in this certificate during the configuration so that Rancher can validate the certificate chain.
### Configure OpenLDAP in Rancher
Configure the settings for the OpenLDAP server, groups and users. For help filling out each field, refer to the [configuration reference.](../configure-openldap/openldap-config-reference.md) Note that nested group membership is not available for Shibboleth.
> Before you proceed with the configuration, please familiarize yourself with the concepts of [External Authentication Configuration and Principal Users](../authentication-config/authentication-config.md#external-authentication-configuration-and-principal-users).
1. Log into the Rancher UI using the initial local `admin` account.
1. In the top left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, click **Auth Provider**.
1. Click **Shibboleth** or, if SAML is already configured, **Edit Config**.
1. Under **User and Group Search**, check **Configure an OpenLDAP server**.
## Troubleshooting
If you are experiencing issues while testing the connection to the OpenLDAP server, first double-check the credentials entered for the service account as well as the search base configuration. You may also inspect the Rancher logs to help pinpoint the cause of the problem. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging](../../../../faq/technical-items.md#how-can-i-enable-debug-logging) in this documentation.
@@ -0,0 +1,82 @@
---
title: Creating Pod Security Policies
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/create-pod-security-policies"/>
</head>
:::caution
Pod Security Policies are only available in Kubernetes versions up to and including v1.24. [Pod Security Standards](pod-security-standards.md) are the built-in alternative.
:::
[Pod Security Policies (PSPs)](https://kubernetes.io/docs/concepts/security/pod-security-policy/) are objects that control security-sensitive aspects of the pod specification (such as root privileges).
If a pod doesn't meet the conditions specified in the PSP, Kubernetes won't allow it to start, and Rancher will display the following error message: `Pod <NAME> is forbidden: unable to validate...`.
## How PSPs Work
You can assign PSPs at the cluster or project level.
PSPs work through inheritance:
- By default, PSPs assigned to a cluster are inherited by its projects, as well as any namespaces added to those projects.
- **Exception:** Namespaces that are not assigned to projects do not inherit PSPs, regardless of whether the PSP is assigned to a cluster or project. Because these namespaces have no PSPs, workload deployments to these namespaces will fail, which is the default Kubernetes behavior.
- You can override the default PSP by assigning a different PSP directly to the project.
Any workloads that are already running in a cluster or project before a PSP is assigned are not checked for compliance with the PSP. Workloads need to be cloned or upgraded to see if they pass the PSP.
Read more about Pod Security Policies in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/policy/pod-security-policy/).
## Default PSPs
Rancher ships with three default Pod Security Policies (PSPs): the `restricted-noroot`, `restricted` and `unrestricted` policies.
### Restricted-NoRoot
This policy is based on the Kubernetes [example restricted policy](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/restricted-psp.yaml). It significantly restricts what types of pods can be deployed to a cluster or project. This policy:
- Prevents pods from running as a privileged user and prevents escalation of privileges.
- Validates that required security mechanisms are in place, such as restricting mountable volumes to the core volume types and preventing root supplemental groups from being added.
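The exact policy Rancher ships is maintained in its chart, but as a hedged sketch, a PSP enforcing restrictions in this spirit might look like:

```yaml
# A rough sketch of a restrictive PSP - not the exact policy Rancher ships
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false                  # no privileged containers
  allowPrivilegeEscalation: false    # block privilege escalation
  runAsUser:
    rule: MustRunAsNonRoot           # containers may not run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges:
      - min: 1                       # disallow the root group (gid 0)
        max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  volumes:                           # core volume types only
    - configMap
    - secret
    - emptyDir
    - downwardAPI
    - projected
    - persistentVolumeClaim
```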
### Restricted
This policy is a relaxed version of the `restricted-noroot` policy. Almost all of the same restrictions are in place, except that containers are allowed to run as a privileged user.
### Unrestricted
This policy is equivalent to running Kubernetes with the PSP controller disabled. It has no restrictions on what pods can be deployed into a cluster or project.
:::note important
When disabling PSPs, default PSPs are **not** automatically deleted from your cluster. You must manually delete them if they're no longer needed.
:::
## Creating PSPs
Rancher lets you create a Pod Security Policy from the UI rather than writing a YAML file by hand.
### Requirements
Rancher can only assign PSPs for clusters that are [launched using RKE](../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md).
You must enable PSPs at the cluster level before you can assign them to a project. This can be configured by [editing the cluster](../../../reference-guides/cluster-configuration/cluster-configuration.md).
It is a best practice to set PSPs at the cluster level.
We recommend adding PSPs during cluster and project creation instead of adding them to an existing cluster or project.
### Creating PSPs in the Rancher UI
1. In the upper left corner, click **☰ > Cluster Management**.
1. In the left navigation bar, click **Pod Security Policies**.
1. Click **Add Policy**.
1. Name the policy.
1. Complete each section of the form. Refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) for more information on what each policy does.
1. Click **Create**.
## Configuration
The Kubernetes documentation on PSPs is [here](https://kubernetes.io/docs/concepts/policy/pod-security-policy/).
@@ -0,0 +1,61 @@
---
title: Configuring a Global Default Private Registry
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry"/>
</head>
:::note
This page describes how to configure a global default private registry from the Rancher UI, after Rancher is already installed.
For instructions on how to set up a private registry during Rancher installation, refer to the [air-gapped installation guide](../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md).
:::
A private registry is a private, consistent, and centralized source of truth for the container images in your clusters. You can use a private container image registry to share custom base images within your organization.
There are two main ways to set up a private registry in Rancher:
* Set up a global default registry through the **Settings** tab in the global view.
* Set up a private registry in the advanced options under cluster-level settings.
The global default registry is intended to be used in air-gapped setups, for registries that don't require credentials. The cluster-level private registry is intended to be used in setups where the private registry requires credentials.
## Set a Private Registry with No Credentials as the Default Registry
1. Log into Rancher and configure the default administrator password.
1. Select **☰ > Global Settings**.
1. Go to `system-default-registry` and choose **⋮ > Edit Setting**.
1. Enter your registry's hostname and port (e.g. `registry.yourdomain.com:port`). Do not prefix the text with `http://` or `https://`.
**Result:** Rancher pulls system images from your private registry.
### Namespaced Private Registry with RKE2 Downstream Clusters
Most private registries should work, by default, with RKE2 downstream clusters.
However, you'll need to do some additional steps if you're trying to set a namespaced private registry whose URL is formatted like this: `website/subdomain:portnumber`.
1. Select **☰ > Cluster Management**.
1. Find the RKE2 cluster in the list and click **⋮ > Edit Config**.
1. From the **Cluster config** menu, select **Registries**.
1. In the **Registries** pane, select the **Configure advanced containerd mirroring and registry authentication options** option.
1. In the text fields under **Mirrors**, enter the **Registry Hostname** and **Mirror Endpoints**.
1. Click **Save**.
1. Repeat as necessary for each downstream RKE2 cluster.
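Under the hood, these UI settings correspond to containerd mirror configuration on the cluster nodes, along the lines of the following RKE2 `registries.yaml` entry (hostnames and ports are examples only):

```yaml
# Illustrative mirror entry - example hostnames and ports only
mirrors:
  docker.io:                                  # Registry Hostname
    endpoint:
      - "https://registry.example.com:5000"   # Mirror Endpoint
```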
## Configure a Private Registry with Credentials when Creating a Cluster
There is no global way to set up a private registry with authorization for every Rancher-provisioned cluster. Therefore, if you want a Rancher-provisioned cluster to pull images from a private registry that requires credentials, you'll have to pass the registry credentials through the advanced cluster options every time you create a new cluster.
Since the private registry cannot be configured after the cluster is created, you'll need to perform these steps during initial cluster setup.
1. Select **☰ > Cluster Management**.
1. On the **Clusters** page, click **Create**.
1. Choose a cluster type.
1. In the **Cluster Configuration** go to the **Registries** tab and select **Pull images for Rancher from a private registry**.
1. Enter the registry hostname and credentials.
1. Click **Create**.
**Result:** The new cluster pulls images from the private registry.
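For RKE2-provisioned clusters, the hostname and credentials entered here correspond to containerd registry configuration along these lines (a sketch with illustrative values, not the exact output Rancher generates):

```yaml
# Illustrative sketch - example registry and credentials
mirrors:
  registry.example.com:
    endpoint:
      - "https://registry.example.com"
configs:
  registry.example.com:
    auth:
      username: pull-user            # example account with pull access
      password: example-password     # example only - never commit real credentials
```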
@@ -0,0 +1,240 @@
---
title: Cluster and Project Roles
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles"/>
</head>
Cluster and project roles define user authorization inside a cluster or project.
To manage these roles,
1. Click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Role Templates** and go to the **Cluster** or **Project/Namespaces** tab.
### Membership and Role Assignment
The projects and clusters accessible to non-administrative users are determined by _membership_. Membership is a list of users who have access to a specific cluster or project based on the roles they were assigned in that cluster or project. Each cluster and project includes a tab that a user with the appropriate permissions can use to manage membership.
When you create a cluster or project, Rancher automatically assigns you as the `Owner` for it. Users assigned the `Owner` role can assign other users roles in the cluster or project.
:::note
Non-administrative users cannot access any existing projects/clusters by default. A user with appropriate permissions (typically the owner) must explicitly assign the project and cluster membership.
:::
### Cluster Roles
_Cluster roles_ are roles that you can assign to users, granting them access to a cluster. There are two primary cluster roles: `Owner` and `Member`.
- **Cluster Owner:**
These users have full control over the cluster and all resources in it.
- **Cluster Member:**
These users can view most cluster level resources and create new projects.
#### Custom Cluster Roles
Rancher lets you assign _custom cluster roles_ to a standard user instead of the typical `Owner` or `Member` roles. These roles can be either a built-in custom cluster role or one defined by a Rancher administrator. They are convenient for defining narrow or specialized access for a standard user within a cluster. See the table below for a list of built-in custom cluster roles.
#### Cluster Role Reference
The following table lists each built-in custom cluster role available and whether that level of access is included in the default cluster-level permissions, `Cluster Owner` and `Cluster Member`.
| Built-in Cluster Role | Owner | Member <a id="clus-roles"></a> |
| ---------------------------------- | ------------- | --------------------------------- |
| Create Projects | ✓ | ✓ |
| Manage Cluster Backups | ✓ | |
| Manage Cluster Catalogs | ✓ | |
| Manage Cluster Members | ✓ | |
| Manage Nodes [(see table below)](#manage-nodes-permissions)| ✓ | |
| Manage Storage | ✓ | |
| View All Projects | ✓ | |
| View Cluster Catalogs | ✓ | ✓ |
| View Cluster Members | ✓ | ✓ |
| View Nodes | ✓ | ✓ |
#### Manage Nodes Permissions
The following table lists the permissions available for the `Manage Nodes` role in RKE and RKE2.
| Manage Nodes Permissions | RKE | RKE2 |
|-----------------------------|-------- |--------- |
| SSH Access | ✓ | ✓ |
| Delete Nodes | ✓ | ✓ |
| Scale Clusters Up and Down | ✓ | * |
\* **In RKE2, you must have permission to edit a cluster to be able to scale clusters up and down.**
<br />
For details on how each cluster role can access Kubernetes resources, you can look them up in the Rancher UI:
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Role Templates**.
1. Click the **Cluster** tab.
1. Click the name of an individual role. The table shows all of the operations and resources that are permitted by the role.
:::note
When viewing the resources associated with default roles created by Rancher, if there are multiple Kubernetes API resources on one line item, `(Custom)` is appended to the resource name. These are not custom resources; the label just indicates that multiple Kubernetes API resources are grouped as one entry.
:::
### Giving a Custom Cluster Role to a Cluster Member
After an administrator [sets up a custom cluster role,](custom-roles.md) cluster owners and admins can then assign those roles to cluster members.
To assign a custom role to a new cluster member, you can use the Rancher UI. To modify the permissions of an existing member, you will need to use the Rancher API view.
To assign the role to a new cluster member,
<Tabs>
<TabItem value="Rancher before v2.6.4">
1. Click **☰ > Cluster Management**.
1. Go to the cluster where you want to assign a role to a member and click **Explore**.
1. Click **RBAC > Cluster Members**.
1. Click **Add**.
1. In the **Cluster Permissions** section, choose the custom cluster role that should be assigned to the member.
1. Click **Create**.
</TabItem>
<TabItem value="Rancher v2.6.4+">
1. Click **☰ > Cluster Management**.
1. Go to the cluster where you want to assign a role to a member and click **Explore**.
1. Click **Cluster > Cluster Members**.
1. Click **Add**.
1. In the **Cluster Permissions** section, choose the custom cluster role that should be assigned to the member.
1. Click **Create**.
</TabItem>
</Tabs>
**Result:** The member has the assigned role.
To assign any custom role to an existing cluster member,
1. Click **☰ > Users & Authentication**.
1. Go to the member you want to give the role to. Click the **⋮ > Edit Config**.
1. If you have added custom roles, they will show in the **Custom** section. Choose the role you want to assign to the member.
1. Click **Save**.
**Result:** The member has the assigned role.
### Project Roles
_Project roles_ are roles that can be used to grant users access to a project. There are three primary project roles: `Owner`, `Member`, and `Read Only`.
- **Project Owner:**
These users have full control over the project and all resources in it.
- **Project Member:**
These users can manage project-scoped resources like namespaces and workloads, but cannot manage other project members.
:::note
By default, the Rancher `project-member` role inherits from the Kubernetes `edit` role, and the `project-owner` role inherits from the Kubernetes `admin` role. As such, both the `project-member` and `project-owner` roles allow for namespace management, including the ability to create and delete namespaces.
:::
- **Read Only:**
These users can view everything in the project but cannot create, update, or delete anything.
:::danger
Users assigned the `Owner` or `Member` role for a project automatically inherit the `namespace creation` role. However, this role is a [Kubernetes ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole), meaning its scope extends to all projects in the cluster. Therefore, users explicitly assigned the `owner` or `member` role for a project can create namespaces in other projects they're assigned to, even with only the `Read Only` role assigned.
:::
#### Custom Project Roles
Rancher lets you assign _custom project roles_ to a standard user instead of the typical `Owner`, `Member`, or `Read Only` roles. These roles can be either a built-in custom project role or one defined by a Rancher administrator. They are convenient for defining narrow or specialized access for a standard user within a project. See the table below for a list of built-in custom project roles.
#### Project Role Reference
The following table lists each built-in custom project role available in Rancher and whether it is also granted by the `Owner`, `Member`, or `Read Only` role.
| Built-in Project Role | Owner | Member<a id="proj-roles"></a> | Read Only |
| ---------------------------------- | ------------- | ----------------------------- | ------------- |
| Manage Project Members | ✓ | | |
| Create Namespaces | ✓ | ✓ | |
| Manage Config Maps | ✓ | ✓ | |
| Manage Ingress | ✓ | ✓ | |
| Manage Project Catalogs | ✓ | | |
| Manage Secrets | ✓ | ✓ | |
| Manage Service Accounts | ✓ | ✓ | |
| Manage Services | ✓ | ✓ | |
| Manage Volumes | ✓ | ✓ | |
| Manage Workloads | ✓ | ✓ | |
| View Secrets | ✓ | ✓ | |
| View Config Maps | ✓ | ✓ | ✓ |
| View Ingress | ✓ | ✓ | ✓ |
| View Project Members | ✓ | ✓ | ✓ |
| View Project Catalogs | ✓ | ✓ | ✓ |
| View Service Accounts | ✓ | ✓ | ✓ |
| View Services | ✓ | ✓ | ✓ |
| View Volumes | ✓ | ✓ | ✓ |
| View Workloads | ✓ | ✓ | ✓ |
:::note Notes:
- Each project role listed above, including `Owner`, `Member`, and `Read Only`, is comprised of multiple rules granting access to various resources. You can view the roles and their rules on the Global > Security > Roles page.
- When viewing the resources associated with default roles created by Rancher, if there are multiple Kubernetes API resources on one line item, `(Custom)` is appended to the resource name. These are not custom resources; the label just indicates that multiple Kubernetes API resources are grouped as one entry.
- The `Manage Project Members` role allows the project owner to manage any members of the project **and** grant them any project-scoped role, regardless of their access to the project resources. Be cautious when assigning this role individually.
:::
### Defining Custom Roles
As previously mentioned, custom roles can be defined for use at the cluster or project level. The context field defines whether the role will appear on the cluster member page, project member page, or both.
When defining a custom role, you can grant access to specific resources or specify roles from which the custom role should inherit. A custom role can be made up of a combination of specific grants and inherited roles. All grants are additive. This means that defining a narrower grant for a specific resource **will not** override a broader grant defined in a role that the custom role is inheriting from.
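Under the hood, each custom role is stored as a `RoleTemplate` object, which you can also manage via `kubectl` or the Rancher Kubernetes API. A minimal sketch of a project-scoped custom role, with illustrative names and rules, might look like:

```yaml
# A minimal sketch of a project-scoped custom role (illustrative values)
apiVersion: management.cattle.io/v3
kind: RoleTemplate
metadata:
  name: view-workloads-only
displayName: View Workloads Only
context: project                 # shown on the project member page
rules:
  - apiGroups:
      - ""
      - apps
    resources:
      - pods
      - deployments
      - daemonsets
      - statefulsets
    verbs:
      - get
      - list
      - watch
```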
### Default Cluster and Project Roles
By default, when a standard user creates a new cluster or project, they are automatically assigned an ownership role: either [cluster owner](#cluster-roles) or [project owner](#project-roles). However, in some organizations, these roles may grant more administrative access than intended. In that case, you can change the default role to something more restrictive, such as a set of individual roles or a custom role.
There are two methods for changing default cluster/project roles:
- **Assign Custom Roles**: Create a [custom role](custom-roles.md) for either your [cluster](#custom-cluster-roles) or [project](#custom-project-roles), and then set the custom role as default.
- **Assign Individual Roles**: Configure multiple [cluster](#cluster-role-reference)/[project](#project-role-reference) roles as default for assignment to the creating user.
For example, instead of assigning a role that inherits other roles (such as `cluster owner`), you can choose a mix of individual roles (such as `manage nodes` and `manage storage`).
:::note
- Although you can [lock](locked-roles.md) a default role, the system still assigns the role to users who create a cluster/project.
- Only users that create clusters/projects inherit their roles. Users added to the cluster/project membership afterward must be explicitly assigned their roles.
:::
### Configuring Default Roles for Cluster and Project Creators
You can change the cluster or project role(s) that are automatically assigned to the creating user.
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Role Templates**.
1. Click the **Cluster** or **Project/Namespaces** tab.
1. Find the custom or individual role that you want to use as default. Then edit the role by selecting **⋮ > Edit Config**.
1. In the **Cluster Creator Default** or **Project Creator Default** section, enable the role as the default.
1. Click **Save**.
**Result:** The default roles are configured based on your changes. Roles assigned to cluster/project creators display a check in the **Cluster/Project Creator Default** column.
If you want to remove a default role, edit the permission and select **No** from the default roles option.
### Cluster Membership Revocation Behavior
When you revoke the cluster membership for a standard user that's explicitly assigned membership to both the cluster _and_ a project within the cluster, that standard user [loses their cluster roles](#cluster-roles) but [retains their project roles](#project-roles). In other words, although you have revoked the user's permissions to access the cluster and its nodes, the standard user can still:
- Access the projects they hold membership in.
- Exercise any [individual project roles](#project-role-reference) they are assigned.
If you want to completely revoke a user's access within a cluster, revoke both their cluster and project memberships.
@@ -0,0 +1,124 @@
---
title: Custom Roles
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/custom-roles"/>
</head>
Within Rancher, _roles_ determine what actions a user can make within a cluster or project.
Note that _roles_ are different from _permissions_, which determine what clusters and projects you can access.
:::danger
It is possible for a custom role to enable privilege escalation. For details, see [this section.](#privilege-escalation)
:::
## Prerequisites
To complete the tasks on this page, one of the following permissions is required:
- [Administrator Global Permissions](global-permissions.md).
- [Custom Global Permissions](global-permissions.md#custom-global-permissions) with the [Manage Roles](global-permissions.md) role assigned.
## Creating A Custom Role
While Rancher comes out of the box with a set of default user roles, you can also create custom roles to provide users with very specific permissions within Rancher.
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Role Templates**.
1. Select a tab to determine the scope of the role you're adding. The tabs are:
- **Global:** The role is valid for allowing members to manage global scoped resources.
- **Cluster:** The role is valid for assignment when adding/managing members to clusters.
- **Project/Namespaces:** The role is valid for assignment when adding/managing members to projects or namespaces.
1. Click **Create Global Role,** **Create Cluster Role** or **Create Project/Namespaces Role,** depending on the scope.
1. Enter a **Name** for the role.
1. Optional: Choose the **Cluster/Project Creator Default** option to assign this role to a user when they create a new cluster or project. Using this feature, you can expand or restrict the default roles for cluster/project creators.
> Out of the box, the Cluster Creator Default and the Project Creator Default roles are `Cluster Owner` and `Project Owner` respectively.
1. Use the **Grant Resources** options to assign individual [Kubernetes API endpoints](https://kubernetes.io/docs/reference/) to the role.
> When viewing the resources associated with default roles created by Rancher, if there are multiple Kubernetes API resources on one line item, `(Custom)` is appended to the resource name. These are not custom resources; the label just indicates that multiple Kubernetes API resources are grouped as one entry.
> The Resource text field provides a method to search for pre-defined Kubernetes API resources, or enter a custom resource name for the grant. The pre-defined or `(Custom)` resource must be selected from the dropdown, after entering a resource name into this field.
You can also choose the individual API verbs (`Create`, `Delete`, `Get`, etc.) available for use with each endpoint you assign.
1. Use the **Inherit from** options to assign individual Rancher roles to your custom roles. Note: When a custom role inherits from a parent role, the parent role cannot be deleted until the child role is deleted.
1. Click **Create**.
## Creating a Custom Role that Inherits from Another Role
If you have a group of individuals that need the same level of access in Rancher, it can save time to create a custom role that copies all of the rules from another role, such as the administrator role. This allows you to configure only the differences between the existing role and the new role.
The custom role can then be assigned to a user or group so that the role takes effect the first time the user or users sign into Rancher.
To create a custom role based on an existing role,
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Role Templates**.
1. Click the **Cluster** or **Project/Namespaces** tab. Click **Create Cluster Role** or **Create Project/Namespaces Role** depending on the scope. Note: Only cluster roles and project/namespace roles can inherit from another role.
1. Enter a name for the role.
1. In the **Inherit From** tab, select the role(s) that the custom role will inherit permissions from.
1. In the **Grant Resources** tab, select the Kubernetes resource operations that will be enabled for users with the custom role.
> The Resource text field provides a method to search for pre-defined Kubernetes API resources, or enter a custom resource name for the grant. The pre-defined or `(Custom)` resource must be selected from the dropdown, after entering a resource name into this field.
1. Optional: Assign the role as default.
1. Click **Create**.
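As a rough sketch, the resulting `RoleTemplate` lists the parent role IDs in `roleTemplateNames`; the names below are illustrative, and `read-only` is assumed to be the ID of the built-in `Read Only` project role:

```yaml
# Illustrative sketch of a custom role that inherits from another role
apiVersion: management.cattle.io/v3
kind: RoleTemplate
metadata:
  name: read-only-plus-secrets
displayName: Read Only plus Manage Secrets
context: project
roleTemplateNames:               # parent roles whose rules are inherited
  - read-only                    # assumed ID of the built-in Read Only role
rules:                           # additional grants on top of the inherited rules
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - delete
```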
## Deleting a Custom Role
When deleting a custom role, all global role bindings with this custom role are deleted.
If a user is only assigned one custom role, and the role is deleted, the user would lose access to Rancher. For the user to regain access, an administrator would need to edit the user and apply new global permissions.
Custom roles can be deleted, but built-in roles cannot be deleted.
To delete a custom role,
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Role Templates**.
2. Go to the custom role that should be deleted and click **⋮ > Delete**.
3. Click **Delete**.
## Assigning a Custom Role to a Group
If you have a group of individuals that need the same level of access in Rancher, it can save time to create a custom role. When the role is assigned to a group, the users in the group have the appropriate level of access the first time they sign into Rancher.
When a user in the group logs in, they get the built-in Standard User global role by default. They will also get the permissions assigned to their groups.
If a user is removed from the external authentication provider group, they would lose their permissions from the custom role that was assigned to the group. They would continue to have their individual Standard User role.
:::note Prerequisites:
You can only assign a global role to a group if:
* You have set up an [external authentication provider](../authentication-config/authentication-config.md#external-vs-local-authentication)
* The external authentication provider supports [user groups](../../authentication-permissions-and-global-configuration/authentication-config/manage-users-and-groups.md)
* You have already set up at least one user group with the authentication provider
:::
To assign a custom role to a group, follow these steps:
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Groups**.
1. Go to the existing group that will be assigned the custom role and click **⋮ > Edit Config**.
1. If you have created roles, they will show in the **Custom** section. Choose any custom role that will be assigned to the group.
1. Optional: In the **Global Permissions** or **Built-in** sections, select any additional permissions that the group should have.
1. Click **Save**.
**Result:** The custom role will take effect when the users in the group log into Rancher.
## Privilege Escalation
The `Configure Catalogs` custom permission is powerful and should be used with caution. When an admin assigns the `Configure Catalogs` permission to a standard user, it could result in privilege escalation in which the user could give themselves admin access to Rancher provisioned clusters. Anyone with this permission should be considered equivalent to an admin.
@@ -0,0 +1,364 @@
---
title: Global Permissions
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions"/>
</head>
_Permissions_ are individual access rights that you can assign when selecting a custom permission for a user.
Global Permissions define user authorization outside the scope of any particular cluster. Out of the box, there are four default global permissions: `Administrator`, `Restricted Admin`, `Standard User`, and `User-Base`.
- **Administrator:** These users have full control over the entire Rancher system and all clusters within it.
- **Restricted Admin (Deprecated):** These users have full control over downstream clusters, but cannot alter the local Kubernetes cluster.
- **Standard User:** These users can create new clusters and use them. Standard users can also assign other users permissions to their clusters.
- **User-Base:** User-Base users have login-access only.
You cannot update or delete the built-in Global Permissions.
## Global Permission Assignment
Global permissions for local users are assigned differently than users who log in to Rancher using external authentication.
### Global Permissions for New Local Users
When you create a new local user, you assign them a global permission as you complete the **Add User** form.
To see the default permissions for new users,
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Role Templates**.
1. The **Role Templates** page has tabs for roles grouped by scope. Each table lists the roles in that scope. In the **Global** tab, in the **New User Default** column, the permissions given to new users by default are indicated with a checkmark.
You can [change the default global permissions to meet your needs.](#configuring-default-global-permissions)
### Global Permissions for Users with External Authentication
When a user logs into Rancher using an external authentication provider for the first time, they are automatically assigned the **New User Default** global permissions. By default, Rancher assigns the **Standard User** permission for new users.
To see the default permissions for new users,
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Role Templates**.
1. The **Role Templates** page has tabs for roles grouped by scope. Each table lists the roles in that scope. In the **New User Default** column on each page, the permissions given to new users by default are indicated with a checkmark.
You can [change the default permissions to meet your needs.](#configuring-default-global-permissions)
Permissions can be [assigned](#configuring-global-permissions-for-individual-users) to an individual user.
You can [assign a role to everyone in the group at the same time](#configuring-global-permissions-for-groups) if the external authentication provider supports groups.
## Custom Global Permissions
Using custom permissions is convenient for providing users with narrow or specialized access to Rancher.
When a user from an [external authentication source](../authentication-config/authentication-config.md) signs into Rancher for the first time, they're automatically assigned a set of global permissions (hereafter, permissions). By default, after a user logs in for the first time, they are created as a user and assigned the default `user` permission. The standard `user` permission allows users to login and create clusters.
However, in some organizations, these permissions may extend too much access. Rather than assigning users the default global permissions of `Administrator` or `Standard User`, you can assign them a more restrictive set of custom global permissions.
The default roles, Administrator and Standard User, each come with multiple global permissions built into them. The Administrator role includes all global permissions, while the default user role includes three global permissions: Create Clusters, Use Catalog Templates, and User Base, which is equivalent to the minimum permission to log in to Rancher. In other words, the custom global permissions are modularized so that if you want to change the default user role permissions, you can choose which subset of global permissions are included in the new default user role.
Administrators can enforce custom global permissions in multiple ways:
- [Creating custom global roles](#custom-globalroles).
- [Changing the default permissions for new users](#configuring-default-global-permissions).
- [Configuring global permissions for individual users](#configuring-global-permissions-for-individual-users).
- [Configuring global permissions for groups](#configuring-global-permissions-for-groups).
### Combining Built-in GlobalRoles
Rancher provides several GlobalRoles which grant granular permissions for certain common use cases.
The following table lists each built-in global permission and whether it is included in the default global permissions, `Administrator`, `Standard User` and `User-Base`.
| Custom Global Permission | Administrator | Standard User | User-Base |
| ---------------------------------- | ------------- | ------------- |-----------|
| Create Clusters | ✓ | ✓ | |
| Create RKE Templates | ✓ | ✓ | |
| Manage Authentication | ✓ | | |
| Manage Catalogs | ✓ | | |
| Manage Cluster Drivers | ✓ | | |
| Manage Node Drivers | ✓ | | |
| Manage PodSecurityPolicy Templates | ✓ | | |
| Manage Roles | ✓ | | |
| Manage Settings | ✓ | | |
| Manage Users | ✓ | | |
| Use Catalog Templates | ✓ | ✓ | |
| User-Base (Basic log-in access) | ✓ | ✓ | |
For details on which Kubernetes resources correspond to each global permission,
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Role Templates**.
1. If you click the name of an individual role, a table shows all of the operations and resources that are permitted by the role.
:::note Notes:
- Each permission listed above is comprised of multiple individual permissions not listed in the Rancher UI. For a full list of these permissions and the rules they are comprised of, access through the API at `/v3/globalRoles`.
- When viewing the resources associated with default roles created by Rancher, if there are multiple Kubernetes API resources on one line item, `(Custom)` is appended to the resource name. These are not custom resources; the label just indicates that multiple Kubernetes API resources are grouped as one entry.
:::
### Custom GlobalRoles
You can create custom GlobalRoles to satisfy use cases not directly addressed by built-in GlobalRoles.
Create custom GlobalRoles through the UI or through automation (such as the Rancher Kubernetes API). You can specify the same type of rules as the rules for upstream roles and clusterRoles.
#### Escalate and Bind verbs
When giving permissions on GlobalRoles, keep in mind that Rancher respects the `escalate` and `bind` verbs, in a similar fashion to [Kubernetes](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update).
Both of these verbs, which are given on the GlobalRoles resource, can grant users the permission to bypass Rancher's privilege escalation checks. This potentially allows users to become admins. Since this represents a serious security risk, `bind` and `escalate` should be distributed to users with great caution.
The `escalate` verb allows users to change a GlobalRole and add any permission, even if the user doesn't have those permissions in the current GlobalRole or the new version of the GlobalRole.
The `bind` verb allows users to create a GlobalRoleBinding to the specified GlobalRole, even if they do not have the permissions in the GlobalRole.
:::danger
The wildcard verb `*` also includes the `bind` and `escalate` verbs. This means that giving `*` on GlobalRoles to a user also gives them both `escalate` and `bind`.
:::
##### Custom GlobalRole Examples
To grant permission to escalate only the `test-gr` GlobalRole:
```yaml
rules:
- apiGroups:
- 'management.cattle.io'
resources:
- 'globalroles'
resourceNames:
- 'test-gr'
verbs:
- 'escalate'
```
To grant permission to escalate all GlobalRoles:
```yaml
rules:
- apiGroups:
- 'management.cattle.io'
resources:
- 'globalroles'
verbs:
- 'escalate'
```
To grant permission to create bindings (which bypass escalation checks) to only the `test-gr` GlobalRole:
```yaml
rules:
- apiGroups:
- 'management.cattle.io'
resources:
- 'globalroles'
resourceNames:
- 'test-gr'
verbs:
- 'bind'
- apiGroups:
- 'management.cattle.io'
resources:
- 'globalrolebindings'
verbs:
- 'create'
```
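For reference, a GlobalRoleBinding that a user holding the `bind` permission above could then create might look like this (the binding name and user ID are placeholders):

```yaml
# Illustrative binding - names and IDs are placeholders
apiVersion: management.cattle.io/v3
kind: GlobalRoleBinding
metadata:
  name: test-gr-binding
globalRoleName: test-gr          # the GlobalRole named in the rule above
userName: u-abcdef               # Rancher user ID to bind the role to
```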
Granting `*` permissions (which includes both `escalate` and `bind`):
```yaml
rules:
- apiGroups:
- 'management.cattle.io'
resources:
- 'globalroles'
verbs:
- '*'
```
#### GlobalRole Permissions on Downstream Clusters
GlobalRoles can grant one or more RoleTemplates on every downstream cluster through the `inheritedClusterRoles` field. Values in this field must refer to a RoleTemplate which exists and has a `context` of Cluster.
With this field, users gain the specified permissions on all current or future downstream clusters. For example, consider the following GlobalRole:
```yaml
apiVersion: management.cattle.io/v3
kind: GlobalRole
displayName: All Downstream Owner
metadata:
name: all-downstream-owner
inheritedClusterRoles:
- cluster-owner
```
Any user with this permission will be a cluster-owner on all downstream clusters. If a new cluster is added, regardless of type, the user will be an owner on that cluster as well.
:::danger
Using this field on [default GlobalRoles](#configuring-default-global-permissions) may result in users gaining excessive permissions.
:::
### Configuring Default Global Permissions
If you want to restrict the default permissions for new users, you can remove the `user` permission as the default role and assign multiple individual permissions as defaults instead. Conversely, you can also add administrative permissions on top of a set of other standard permissions.
:::note
Default roles are only assigned to users added from an external authentication provider. For local users, you must explicitly assign global permissions when adding a user to Rancher. You can customize these global permissions when adding the user.
:::
To change the default global permissions that are assigned to external users upon their first log in, follow these steps:
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Role Templates**. On the **Role Templates** page, make sure the **Global** tab is selected.
1. Find the permissions set that you want to add or remove as a default. Then edit the permission by selecting **⋮ > Edit Config**.
1. If you want to add the permission as a default, select **Yes: Default role for new users** and then click **Save**. If you want to remove a default permission, edit the permission and select **No**.
**Result:** The default global permissions are configured based on your changes. Permissions assigned to new users display a check in the **New User Default** column.
### Configuring Global Permissions for Individual Users
To configure permission for a user,
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Users**.
1. Go to the user whose access level you want to change and click **⋮ > Edit Config**.
1. In the **Global Permissions** and **Built-in** sections, check the boxes for each permission you want the user to have. If you have created roles from the **Role Templates** page, they will appear in the **Custom** section and you can choose from them as well.
1. Click **Save**.
**Result:** The user's global permissions have been updated.
### Configuring Global Permissions for Groups
If you have a group of individuals that need the same level of access in Rancher, it can save time to assign permissions to the entire group at once, so that the users in the group have the appropriate level of access the first time they sign into Rancher.
After you assign a custom global role to a group, the custom global role will be assigned to a user in the group when they log in to Rancher.
For existing users, the new permissions will take effect when the users log out of Rancher and back in again, or when an administrator [refreshes the group memberships.](#refreshing-group-memberships)
For new users, the new permissions take effect when the users log in to Rancher for the first time. New users from this group will receive the permissions from the custom global role in addition to the **New User Default** global permissions. By default, the **New User Default** permissions are equivalent to the **Standard User** global role, but the default permissions can be [configured.](#configuring-default-global-permissions)
If a user is removed from the external authentication provider group, they would lose their permissions from the custom global role that was assigned to the group. They would continue to have any remaining roles that were assigned to them, which would typically include the roles marked as **New User Default**. Rancher will remove the permissions that are associated with the group when the user logs out, or when an administrator [refreshes group memberships,](#refreshing-group-memberships) whichever comes first.
:::note Prerequisites:
You can only assign a global role to a group if:
* You have set up an [external authentication provider](../authentication-config/authentication-config.md#external-vs-local-authentication)
* The external authentication provider supports [user groups](../authentication-config/manage-users-and-groups.md)
* You have already set up at least one user group with the authentication provider
:::
To assign a custom global role to a group, follow these steps:
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Groups**.
1. Go to the group you want to assign a custom global role to and click **⋮ > Edit Config**.
1. In the **Global Permissions,** **Custom,** and/or **Built-in** sections, select the permissions that the group should have.
1. Click **Create**.
**Result:** The custom global role will take effect when the users in the group log into Rancher.
### Refreshing Group Memberships
When an administrator updates the global permissions for a group, the changes take effect for individual group members after they log out of Rancher and log in again.
To make the changes take effect immediately, an administrator or cluster owner can refresh group memberships.
An administrator might also want to refresh group memberships if a user is removed from a group in the external authentication service. In that case, the refresh makes Rancher aware that the user was removed from the group.
To refresh group memberships,
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Users**.
1. Click **Refresh Group Memberships**.
**Result:** Any changes to the group members' permissions will take effect.
## Restricted Admin
:::warning Deprecated
The Restricted Admin role is deprecated, and will be removed in a future version of Rancher (2.10 or higher). You should make a custom role with the desired permissions instead of relying on this built-in role.
:::
A new `restricted-admin` role was created in Rancher v2.5 in order to prevent privilege escalation on the local Rancher server Kubernetes cluster. This role has full administrator access to all downstream clusters managed by Rancher, but it does not have permission to alter the local Kubernetes cluster.
The `restricted-admin` can create other `restricted-admin` users with an equal level of access.
A setting was added to Rancher that gives the initial bootstrapped administrator the `restricted-admin` role instead. This applies to the first user created when the Rancher server is started for the first time. If the environment variable is set, no global administrator is created, and it is impossible to create a global administrator through Rancher.
To bootstrap Rancher with the `restricted-admin` as the initial user, the Rancher server should be started with the following environment variable:
```
CATTLE_RESTRICTED_DEFAULT_ADMIN=true
```
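For example, in a Docker installation of Rancher, the variable can be passed with `-e`. This is a sketch that assumes the standard single-node install command; adjust the ports, flags, and image tag for your environment:
```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  -e CATTLE_RESTRICTED_DEFAULT_ADMIN=true \
  rancher/rancher:latest
```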
### List of `restricted-admin` Permissions
The following table lists the permissions and actions that a `restricted-admin` should have in comparison with the `Administrator` and `Standard User` roles:
| Category | Action | Global Admin | Standard User | Restricted Admin | Notes for Restricted Admin role |
| -------- | ------ | ------------ | ------------- | ---------------- | ------------------------------- |
| Local Cluster functions | Manage Local Cluster (List, Edit, Import Host) | Yes | No | No | |
| | Create Projects/namespaces | Yes | No | No | |
| | Add cluster/project members | Yes | No | No | |
| | Global DNS | Yes | No | No | |
| | Access to management cluster for CRDs and CRs | Yes | No | Yes | |
| | Save as RKE Template | Yes | No | No | |
| Security | | | | | |
| Enable auth | Configure Authentication | Yes | No | Yes | |
| Roles | Create/Assign GlobalRoles | Yes | No (Can list) | Yes | Auth webhook allows creating globalrole for perms already present |
| | Create/Assign ClusterRoles | Yes | No (Can list) | Yes | Not in local cluster |
| | Create/Assign ProjectRoles | Yes | No (Can list) | Yes | Not in local cluster |
| Users | Add User/Edit/Delete/Deactivate User | Yes | No | Yes | |
| Groups | Assign Global role to groups | Yes | No | Yes | As allowed by the webhook |
| | Refresh Groups | Yes | No | Yes | |
| PSP's | Manage PSP templates | Yes | No (Can list) | Yes | Same privileges as Global Admin for PSPs |
| Tools | | | | | |
| | Manage RKE Templates | Yes | No | Yes | |
| | Manage Global Catalogs | Yes | No | Yes | Cannot edit/delete built-in system catalog. Can manage Helm library |
| | Cluster Drivers | Yes | No | Yes | |
| | Node Drivers | Yes | No | Yes | |
| | GlobalDNS Providers | Yes | Yes (Self) | Yes | |
| | GlobalDNS Entries | Yes | Yes (Self) | Yes | |
| Settings | | | | | |
| | Manage Settings | Yes | No (Can list) | No (Can list) | |
| User | | | | | |
| | Manage API Keys | Yes (Manage all) | Yes (Manage self) | Yes (Manage self) | |
| | Manage Node Templates | Yes | Yes (Manage self) | Yes (Manage self) | Can only manage their own node templates and not those created by other users |
| | Manage Cloud Credentials | Yes | Yes (Manage self) | Yes (Manage self) | Can only manage their own cloud credentials and not those created by other users |
| Downstream Cluster | Create Cluster | Yes | Yes | Yes | |
| | Edit Cluster | Yes | Yes | Yes | |
| | Rotate Certificates | Yes | | Yes | |
| | Snapshot Now | Yes | | Yes | |
| | Restore Snapshot | Yes | | Yes | |
| | Save as RKE Template | Yes | No | Yes | |
| | Run CIS Scan | Yes | Yes | Yes | |
| | Add Members | Yes | Yes | Yes | |
| | Create Projects | Yes | Yes | Yes | |
| Feature Charts since v2.5 | | | | | |
| | Install Fleet | Yes | | Yes | Should not be able to run Fleet in local cluster |
| | Deploy EKS cluster | Yes | Yes | Yes | |
| | Deploy GKE cluster | Yes | Yes | Yes | |
| | Deploy AKS cluster | Yes | Yes | Yes | |
### Changing Global Administrators to Restricted Admins
In previous versions, the docs recommended changing all users over to Restricted Admin if the role was in use. Users are now encouraged to build a custom role with the cluster permissions feature, and to migrate any current Restricted Admins to that approach.
This can be done through **Security > Users**, by moving any Administrator role over to Restricted Administrator.
Signed-in users can change themselves over to the `restricted-admin` role if they wish, but they should only do so as the last step; otherwise, they won't have the permissions to complete the change.
@@ -0,0 +1,42 @@
---
title: Locked Roles
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/locked-roles"/>
</head>
You can set roles to a status of `locked`. Locking a role prevents it from being assigned to users in the future.
Locked roles:
- Cannot be assigned to users that don't already have it assigned.
- Are not listed in the **Member Roles** drop-down when you are adding a user to a cluster or project.
- Do not affect users assigned the role before you lock it. These users retain the access that the role provides.
**Example:** Let's say your organization creates an internal policy that prohibits users assigned to a cluster from creating new projects. It's your job to enforce this policy.
To enforce it, before you add new users to the cluster, lock the following roles: `Cluster Owner`, `Cluster Member`, and `Create Projects`. Then create a new custom role that includes the same permissions as a __Cluster Member__, except the ability to create projects, and use this new custom role when adding users to a cluster.
Roles can be locked by the following users:
- Any user assigned the `Administrator` global permission.
- Any user assigned the `Custom Users` permission, along with the `Manage Roles` role.
## Locking/Unlocking Roles
If you want to prevent a role from being assigned to users, you can set it to a status of `locked`.
You can lock roles in two contexts:
- When you're [adding a custom role](custom-roles.md).
- When you're editing an existing role (see below).
Cluster roles and project/namespace roles can be locked, but global roles cannot.
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Role Templates**.
1. Go to the **Cluster** tab or the **Project/Namespaces** tab.
1. From the role that you want to lock (or unlock), select **⋮ > Edit Config**.
1. From the **Locked** option, choose the **Yes** or **No** radio button. Then click **Save**.
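If you manage roles as Kubernetes resources rather than through the UI, locking corresponds to the `locked` field on the RoleTemplate. A minimal sketch, with a hypothetical role name and rules:
```yaml
apiVersion: management.cattle.io/v3
kind: RoleTemplate
metadata:
  name: view-pods-locked     # hypothetical role name
displayName: View Pods (Locked)
context: cluster
locked: true                 # prevents new assignments of this role
rules:
- apiGroups:
  - ''
  resources:
  - pods
  verbs:
  - get
  - list
```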
@@ -0,0 +1,29 @@
---
title: Managing Role-Based Access Control (RBAC)
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac"/>
</head>
Within Rancher, each person authenticates as a _user_, which is a login that grants you access to Rancher. As mentioned in [Authentication](../authentication-config/authentication-config.md), users can either be local or external.
After you configure external authentication, the list of users that displays on the **Users** page changes.
- If you are logged in as a local user, only local users display.
- If you are logged in as an external user, both external and local users display.
## Users and Roles
Once the user logs in to Rancher, their _authorization_, or their access rights within the system, is determined by _global permissions_, and _cluster and project roles_.
- [Global Permissions](global-permissions.md):
Define user authorization outside the scope of any particular cluster.
- [Cluster and Project Roles](cluster-and-project-roles.md):
Define user authorization inside the specific cluster or project where they are assigned the role.
Both global permissions and cluster and project roles are implemented on top of [Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/). Therefore, enforcement of permissions and roles is performed by Kubernetes.
@@ -0,0 +1,130 @@
---
title: Pod Security Standards (PSS) & Pod Security Admission (PSA)
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/pod-security-standards"/>
</head>
[Pod Security Standards (PSS)](https://kubernetes.io/docs/concepts/security/pod-security-standards/) and [Pod Security Admission (PSA)](https://kubernetes.io/docs/concepts/security/pod-security-admission/) define security restrictions for a broad set of workloads.
They became available and were turned on by default in Kubernetes v1.23, and replace [Pod Security Policies (PSP)](https://kubernetes.io/docs/concepts/security/pod-security-policy/) in Kubernetes v1.25 and above.
PSS define security levels for workloads. PSAs describe requirements for pod security contexts and related fields. PSAs reference PSS levels to define security restrictions.
## Upgrade to Pod Security Standards (PSS)
Ensure that you migrate all PSPs to another workload security mechanism. This includes mapping your current PSPs to Pod Security Standards for enforcement with the [PSA controller](https://kubernetes.io/docs/concepts/security/pod-security-admission/). If the PSA controller won't meet all of your organization's needs, we recommend that you use a policy engine, such as [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper), [Kubewarden](https://www.kubewarden.io/), [Kyverno](https://kyverno.io/), or [NeuVector](https://neuvector.com/). Refer to the documentation of your policy engine of choice for more information on how to migrate from PSPs.
:::caution
You must add your new policy enforcement mechanisms _before_ you remove the PodSecurityPolicy objects. If you don't, you may create an opportunity for privilege escalation attacks within the cluster.
:::
### Removing PodSecurityPolicies from Rancher-Maintained Apps & Marketplace Workloads
Rancher v2.7.2 offers a new major version of Rancher-maintained Helm charts. v102.x.y allows you to remove PSPs that were installed with previous versions of the chart. This new version replaces non-standard PSP switches with the standardized `global.cattle.psp.enabled` switch, which is turned off by default.
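For example, if you upgrade one of these charts with Helm directly instead of through the Rancher UI, the switch can be set on the command line. This sketch assumes a `rancher-monitoring` release installed from the standard `rancher-charts` repository; the release, namespace, and version shown are illustrative:
```shell
helm upgrade rancher-monitoring rancher-charts/rancher-monitoring \
  --namespace cattle-monitoring-system \
  --version 102.0.0 \
  --set global.cattle.psp.enabled=false
```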
You must perform the following steps _while still in Kubernetes v1.24_:
1. Configure the PSA controller to suit your needs. You can use one of Rancher's built-in [PSA Configuration Templates](#pod-security-admission-configuration-templates), or create a custom template and apply it to the clusters that you are migrating.
1. Map your active PSPs to Pod Security Standards:
1. See which PSPs are still active in your cluster:
:::caution
This strategy may miss workloads that aren't currently running, such as CronJobs, workloads currently scaled to zero, or workloads that haven't rolled out yet.
:::
```shell
kubectl get pods \
--all-namespaces \
--output jsonpath='{.items[*].metadata.annotations.kubernetes\.io\/psp}' \
| tr " " "\n" | sort -u
```
1. Follow the Kubernetes guide on [Mapping PSPs to Pod Security Standards](https://kubernetes.io/docs/reference/access-authn-authz/psp-to-pod-security-standards/) to apply PSSs to the workloads that relied on PSPs (a namespace labeling sketch follows this list). See [Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission controller](https://kubernetes.io/docs/tasks/configure-pod-container/migrate-from-psp/) for more details.
1. To remove PSPs from Rancher charts, upgrade the charts to the latest v102.x.y version _before_ you upgrade to Kubernetes v1.25. Make sure that the **Enable PodSecurityPolicies** option is **disabled**. This will remove any PSPs that were installed with previous chart versions.
:::info important
If you want to upgrade your charts to v102.x.y, but don't plan on upgrading your clusters to Kubernetes v1.25 and moving away from PSPs, make sure that you select the option **Enable PodSecurityPolicies** for each chart that you are upgrading.
:::
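As mentioned in the mapping step above, a PSS level is enforced on a namespace with PSA labels. A minimal sketch, assuming a hypothetical namespace whose PSP mapped to the `baseline` level:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-app   # hypothetical namespace
  labels:
    # Enforce the "baseline" Pod Security Standard in this namespace
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: latest
    # Warn (without blocking) on pods that would fail "restricted"
    pod-security.kubernetes.io/warn: restricted
```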
### Cleaning Up Releases After a Kubernetes v1.25 Upgrade
If you experience problems while removing PSPs from your charts, or have charts that don't contain a built-in mechanism for removing PSPs, your chart upgrades or deletions might fail with an error message such as the following:
```console
Error: UPGRADE FAILED: resource mapping not found for name: "<object-name>" namespace: "<object-namespace>" from "": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first
```
This happens when Helm tries to query the cluster for objects that were stored in a previous release's data blob. To clean up these releases and avoid this error, use the `helm-mapkubeapis` Helm plugin. To learn more about `helm-mapkubeapis`, how it works, and how it can be fine-tuned for your use case, see the [official Helm documentation](https://github.com/helm/helm-mapkubeapis#readme).
Note that Helm plugin installation is local to the machine that you run the commands from. Therefore, make sure that you run both the installation and cleanup from the same machine.
#### Install `helm-mapkubeapis`
1. Open your terminal on the machine you intend to use `helm-mapkubeapis` from, and install the plugin:
```shell
helm plugin install https://github.com/helm/helm-mapkubeapis
```
You will see output similar to the following:
```console
Downloading and installing helm-mapkubeapis v0.4.1 ...
https://github.com/helm/helm-mapkubeapis/releases/download/v0.4.1/helm-mapkubeapis_0.4.1_darwin_amd64.tar.gz
Installed plugin: mapkubeapis
```
:::info important
Ensure that the `helm-mapkubeapis` plugin is at least v0.4.1, as older versions _do not_ support removal of resources.
:::
1. Verify that the plugin was correctly installed:
```shell
helm mapkubeapis --help
```
You will see output similar to the following:
```console
Map release deprecated or removed Kubernetes APIs in-place
Usage:
mapkubeapis [flags] RELEASE
Flags:
--dry-run simulate a command
-h, --help help for mapkubeapis
--kube-context string name of the kubeconfig context to use
--kubeconfig string path to the kubeconfig file
--mapfile string path to the API mapping file
--namespace string namespace scope of the release
```
#### Cleaning Up Broken Releases
After you install the `helm-mapkubeapis` plugin, clean up the releases that became broken after the upgrade to Kubernetes v1.25.
1. Open your preferred terminal and make sure it's connected to the cluster you wish to target by running `kubectl cluster-info`.
1. List all the releases you have installed in your cluster by running `helm list --all-namespaces`.
1. Perform a dry run for each release you would like to clean up by running `helm mapkubeapis --dry-run <release-name> --namespace <release-namespace>`. The result of this command will inform you what resources are going to be replaced or removed.
1. Finally, after reviewing the changes, perform a full run with `helm mapkubeapis <release-name> --namespace <release-namespace>`.
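For example, for a hypothetical `rancher-monitoring` release in the `cattle-monitoring-system` namespace, the dry run and full run would look like this:
```shell
helm mapkubeapis --dry-run rancher-monitoring --namespace cattle-monitoring-system
helm mapkubeapis rancher-monitoring --namespace cattle-monitoring-system
```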
#### Upgrading Charts to a Version That Supports Kubernetes v1.25
You can proceed with your upgrade once any releases that had lingering PSPs are cleaned up. For Rancher-maintained workloads, follow the steps outlined in the [Removing PodSecurityPolicies from Rancher-maintained Apps & Marketplace workloads](#removing-podsecuritypolicies-from-rancher-maintained-apps--marketplace-workloads) section of this document.
For workloads not maintained by Rancher, refer to the vendor documentation.
:::caution
Do not skip this step. Applications incompatible with Kubernetes v1.25 aren't guaranteed to work after a cleanup.
:::
## Pod Security Admission Configuration Templates
Rancher offers PSA configuration templates. These are pre-defined security configurations that you can apply to a cluster. Rancher admins (or those with the right permissions) can [create, manage, and edit](./psa-config-templates.md) PSA templates.
### Rancher on PSA-restricted Clusters
Rancher system namespaces are also affected by the restrictive security policies described by PSA templates. You need to exempt Rancher's system namespaces after you assign the template, or else the cluster won't operate correctly. See [Pod Security Admission (PSA) Configuration Templates](./psa-config-templates.md#exempting-required-rancher-namespaces) for more details.
For a complete file which has all the exemptions you need to run Rancher, please refer to this [sample Admission Configuration](../../../reference-guides/rancher-security/psa-restricted-exemptions.md).
@@ -0,0 +1,145 @@
---
title: Pod Security Admission (PSA) Configuration Templates
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates"/>
</head>
[Pod Security Admission (PSA)](./pod-security-standards.md) configuration templates are a Rancher custom resource definition (CRD), available in Rancher v2.7.2 and above. The templates provide pre-defined security configurations that you can apply to a cluster:
- `rancher-privileged`: The most permissive configuration. It doesn't restrict the behavior of any pods. This allows for known privilege escalations. This policy has no exemptions.
- `rancher-restricted`: A heavily restricted configuration that follows current best practices for hardening pods. You must make [namespace-level exemptions](./pod-security-standards.md#rancher-on-psa-restricted-clusters) for Rancher components.
## Assign a Pod Security Admissions (PSA) Configuration Template
You can assign a PSA template at the same time that you create a downstream cluster. You can also add a template by configuring an existing cluster.
### Assign a Template During Cluster Creation
<Tabs>
<TabItem value="RKE2 and K3s">
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, click the **Create** button.
1. Select a provider.
1. On the **Cluster: Create** page, go to **Basics > Security**.
1. In the **Pod Security Admission Configuration Template** dropdown menu, select the template you want to assign.
1. Click **Create**.
### Assign a Template to an Existing Cluster
1. In the upper left corner, click **☰ > Cluster Management**.
1. Find the cluster you want to update in the **Clusters** table, and click the **⋮**.
1. Select **Edit Config**.
1. In the **Pod Security Admission Configuration Template** dropdown menu, select the template you want to assign.
1. Click **Save**.
### Hardening the Cluster
If you select the **rancher-restricted** template but don't select a **CIS Profile**, you won't meet required CIS benchmarks. See the [RKE2 hardening guide](../../../reference-guides/rancher-security/hardening-guides/rke2-hardening-guide/rke2-hardening-guide.md) for more details.
</TabItem>
<TabItem value="RKE1">
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, click the **Create** button.
1. Select a provider.
1. On the **Add Cluster** page, under **Cluster Options**, click **Advanced Options**.
1. In the **Pod Security Admission Configuration Template** dropdown menu, select the template you want to assign.
1. Click **Create**.
### Assign a Template to an Existing Cluster
1. In the upper left corner, click **☰ > Cluster Management**.
1. Find the cluster you want to update in the **Clusters** table, and click the **⋮**.
1. Select **Edit Config**.
1. On the **Edit Cluster** page, go to **Cluster Options > Advanced Options**.
1. In the **Pod Security Admission Configuration Template**, select the template you want to assign.
1. Click **Save**.
</TabItem>
</Tabs>
## Add or Edit a Pod Security Admissions (PSA) Configuration Template
If you have administrator privileges, you can customize security restrictions and permissions by creating additional PSA templates, or by editing existing templates.
:::caution
If you edit an existing PSA template while it is still in use, the changes are applied to all clusters that the template is assigned to.
:::
1. In the upper left corner, click **☰ > Cluster Management**.
1. Click **Advanced** to open the dropdown menu.
1. Select **Pod Security Admissions**.
1. Find the template you want to modify, and click the **⋮**.
1. Select **Edit Config** to edit the template.
1. When you're done editing the configuration, click **Save**.
### Allow Non-Admin Users to Manage PSA Templates
If you want to allow other users to manage templates, you can bind that user to a role that grants all verbs (`"*"`) on `management.cattle.io/podsecurityadmissionconfigurationtemplates`.
:::caution
Any user that is bound to the above permission will be able to change the restriction levels on _all_ managed clusters which use a given PSA template, including ones that they have no permissions on.
:::
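A minimal sketch of such a role, following the GlobalRole format shown elsewhere in these docs; the role name is hypothetical:
```yaml
apiVersion: management.cattle.io/v3
kind: GlobalRole
metadata:
  name: psa-template-manager   # hypothetical role name
displayName: PSA Template Manager
rules:
- apiGroups:
  - management.cattle.io
  resources:
  - podsecurityadmissionconfigurationtemplates
  verbs:
  - '*'
```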
## Exempting Required Rancher Namespaces
When you run Rancher on a Kubernetes cluster that enforces a restrictive security policy by default, you'll need to [exempt the following namespaces](#exempting-namespaces), otherwise the policy might prevent Rancher system pods from running properly.
- `calico-apiserver`
- `calico-system`
- `cattle-alerting`
- `cattle-csp-adapter-system`
- `cattle-elemental-system`
- `cattle-epinio-system`
- `cattle-externalip-system`
- `cattle-fleet-local-system`
- `cattle-fleet-system`
- `cattle-gatekeeper-system`
- `cattle-global-data`
- `cattle-global-nt`
- `cattle-impersonation-system`
- `cattle-istio`
- `cattle-istio-system`
- `cattle-logging`
- `cattle-logging-system`
- `cattle-monitoring-system`
- `cattle-neuvector-system`
- `cattle-prometheus`
- `cattle-provisioning-capi-system`
- `cattle-resources-system`
- `cattle-sriov-system`
- `cattle-system`
- `cattle-ui-plugin-system`
- `cattle-windows-gmsa-system`
- `cert-manager`
- `cis-operator-system`
- `fleet-default`
- `ingress-nginx`
- `istio-system`
- `kube-node-lease`
- `kube-public`
- `kube-system`
- `longhorn-system`
- `rancher-alerting-drivers`
- `security-scan`
- `tigera-operator`
Rancher, some Rancher-owned charts, and the RKE2 and K3s distributions all use these namespaces. A subset of the listed namespaces are already exempt in the built-in Rancher `rancher-restricted` policy, for use in downstream clusters. For a complete template which has all the exemptions you need to run Rancher, please refer to this [sample Admission Configuration](../../../reference-guides/rancher-security/psa-restricted-exemptions.md).
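For reference, the exemptions in a template follow the standard Kubernetes `PodSecurityConfiguration` format. A minimal sketch that enforces `restricted` by default while exempting a few of the namespaces above (a real template should exempt the full list; see the sample Admission Configuration linked above):
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    defaults:
      enforce: restricted
      enforce-version: latest
    exemptions:
      namespaces:
      - kube-system
      - cattle-system
      - cattle-fleet-system
```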
## Exempting Namespaces
If you assign the `rancher-restricted` template to a cluster, by default the restrictions are applied across the entire cluster at the namespace level. To exempt certain namespaces from this highly restricted policy, do the following:
1. In the upper left corner, click **☰ > Cluster Management**.
1. Click **Advanced** to open the dropdown menu.
1. Select **Pod Security Admissions**.
1. Find the template you want to modify, and click the **⋮**.
1. Select **Edit Config**.
1. Click the **Namespaces** checkbox under **Exemptions** to edit the **Namespaces** field.
1. When you're done exempting namespaces, click **Save**.
:::note
You need to update the target cluster to make the new template take effect in that cluster. An update can be triggered by editing and saving the cluster without changing values.
:::
@@ -0,0 +1,78 @@
---
title: Backing up Rancher Installed with Docker
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-docker-installed-rancher"/>
</head>
After completing your Docker installation of Rancher, we recommend creating backups of it on a regular basis. Having a recent backup will let you recover quickly from an unexpected disaster.
## Before You Start
During the creation of your backup, you'll enter a series of commands, replacing placeholders with data from your environment. These placeholders are denoted with angled brackets and all capital letters (`<EXAMPLE>`). Here's an example of a command with a placeholder:
```
docker run --name busybox-backup-<DATE> --volumes-from rancher-data-<DATE> -v $PWD:/backup busybox tar pzcvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz /var/lib/rancher
```
In this command, `<DATE>` is a placeholder for the date that the data container and backup were created, for example, `9-27-18`.
Cross-reference the image and reference table below to learn how to obtain this placeholder data. Write down or copy this information before starting the [procedure below](#creating-a-backup).
<sup>Terminal <code>docker ps</code> Command, Displaying Where to Find <code>&lt;RANCHER_CONTAINER_TAG&gt;</code> and <code>&lt;RANCHER_CONTAINER_NAME&gt;</code></sup>
![Placeholder Reference](/img/placeholder-ref.png)
| Placeholder | Example | Description |
| -------------------------- | -------------------------- | --------------------------------------------------------- |
| `<RANCHER_CONTAINER_TAG>` | `v2.0.5` | The rancher/rancher image you pulled for initial install. |
| `<RANCHER_CONTAINER_NAME>` | `festive_mestorf` | The name of your Rancher container. |
| `<RANCHER_VERSION>` | `v2.0.5` | The version of Rancher that you're creating a backup for. |
| `<DATE>` | `9-27-18` | The date that the data container or backup was created. |
<br/>
You can obtain `<RANCHER_CONTAINER_TAG>` and `<RANCHER_CONTAINER_NAME>` by logging into your Rancher Server by remote connection and entering the command to view the containers that are running: `docker ps`. You can also view containers that are stopped with `docker ps -a`. Use these commands for help anytime while creating backups.
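For example, with the sample values from the table substituted in, the backup command from the introduction reads:
```
docker run --name busybox-backup-9-27-18 --volumes-from rancher-data-9-27-18 -v $PWD:/backup busybox tar pzcvf /backup/rancher-data-backup-v2.0.5-9-27-18.tar.gz /var/lib/rancher
```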
## Creating a Backup
This procedure creates a backup that you can restore if Rancher encounters a disaster scenario.
1. Using a remote Terminal connection, log into the node running your Rancher Server.
1. Stop the container currently running Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the name of your Rancher container.
```
docker stop <RANCHER_CONTAINER_NAME>
```
1. <a id="backup"></a>Use the command below, replacing each placeholder, to create a data container from the Rancher container that you just stopped.
```
docker create --volumes-from <RANCHER_CONTAINER_NAME> --name rancher-data-<DATE> rancher/rancher:<RANCHER_CONTAINER_TAG>
```
1. <a id="tarball"></a>From the data container that you just created (<code>rancher-data-&lt;DATE&gt;</code>), create a backup tarball (<code>rancher-data-backup-&lt;RANCHER_VERSION&gt;-&lt;DATE&gt;.tar.gz</code>). Use the following command, replacing each placeholder:
```
docker run --name busybox-backup-<DATE> --volumes-from rancher-data-<DATE> -v $PWD:/backup:z busybox tar pzcvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz /var/lib/rancher
```
**Step Result:** A stream of commands runs on the screen.
1. Enter the `ls` command to confirm that the backup tarball was created. It will have a name similar to `rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`.
1. Move your backup tarball to a safe location external to your Rancher Server. Then delete the `rancher-data-<DATE>` and `busybox-backup-<DATE>` containers from your Rancher Server.
```
docker rm rancher-data-<DATE>
docker rm busybox-backup-<DATE>
```
1. Restart Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the name of your Rancher container:
```
docker start <RANCHER_CONTAINER_NAME>
```
**Result:** A backup tarball of your Rancher Server data is created. See [Restoring Backups: Docker Installs](restore-docker-installed-rancher.md) if you need to restore backup data.
@@ -0,0 +1,311 @@
---
title: Backing up a Cluster
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher-launched-kubernetes-clusters"/>
</head>
You can easily perform etcd backup and recovery for [Rancher launched Kubernetes clusters](../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) in the Rancher UI.
Rancher recommends configuring recurring `etcd` snapshots for all production clusters. Additionally, one-time snapshots can be taken as well.
Snapshots of the etcd database are taken and saved either [locally onto the etcd nodes](#local-backup-target) or to an [S3 compatible target](#s3-backup-target). The advantage of configuring S3 is that if all etcd nodes are lost, your snapshot is saved remotely and can be used to restore the cluster.
## How Snapshots Work
### Snapshot Components
<Tabs groupId="k8s-distro">
<TabItem value="RKE">
When Rancher creates a snapshot, it includes three components:
- The cluster data in etcd
- The Kubernetes version
- The cluster configuration in the form of the `cluster.yml`
Because the Kubernetes version is now included in the snapshot, it is possible to restore a cluster to a prior Kubernetes version.
</TabItem>
<TabItem value="RKE2/K3s">
Rancher delegates snapshot creation to the downstream Kubernetes engine. When the Kubernetes engine creates a snapshot, it includes three components:
- The cluster data in etcd
- The Kubernetes version
- The cluster configuration
Because the Kubernetes version is included in the snapshot, it is possible to restore a cluster to a prior Kubernetes version while also restoring an etcd snapshot.
</TabItem>
</Tabs>
The multiple components of the snapshot allow you to select from the following options if you need to restore a cluster from a snapshot:
- **Restore just the etcd contents:** This restore is similar to restoring from snapshots in Rancher before v2.4.0.
- **Restore etcd and Kubernetes version:** This option should be used if a Kubernetes upgrade is the reason that your cluster is failing, and you haven't made any cluster configuration changes.
- **Restore etcd, Kubernetes versions and cluster configuration:** This option should be used if you changed both the Kubernetes version and cluster configuration when upgrading.
It is always recommended to take a new snapshot before performing any configuration changes or upgrades.
### Generating the Snapshot from etcd Nodes
<Tabs groupId="k8s-distro">
<TabItem value="RKE">
For each etcd node in the cluster, the etcd cluster health is checked. If the node reports that the etcd cluster is healthy, a snapshot is created from it and optionally uploaded to S3.
The snapshot is stored in `/opt/rke/etcd-snapshots`. If the directory is configured on the nodes as a shared mount, it will be overwritten. On S3, all etcd nodes upload the snapshot, so the snapshot that remains is the one from the last node to upload it.
When multiple etcd nodes exist, the snapshot is only created after the cluster has been health checked, so it can be considered a valid snapshot of the data in the etcd cluster.
</TabItem>
<TabItem value="RKE2/K3s">
Snapshots are enabled by default.
The snapshot directory defaults to `/var/lib/rancher/<RUNTIME>/server/db/snapshots`, where `<RUNTIME>` is either `rke2` or `k3s`.
In RKE2, snapshots are stored on each etcd node. If you have multiple etcd or etcd + control-plane nodes, you will have multiple copies of local etcd snapshots.
</TabItem>
</Tabs>
### Snapshot Naming Conventions
<Tabs groupId="k8s-distro">
<TabItem value="RKE">
The name of the snapshot is auto-generated. The `--name` option can be used to override the name of the snapshot when creating one-time snapshots with the RKE CLI.
When Rancher creates a snapshot of an RKE cluster, the snapshot name is based on the type (whether the snapshot is manual or recurring) and the target (whether the snapshot is saved locally or uploaded to S3). The naming convention is as follows:
- `m` stands for manual
- `r` stands for recurring
- `l` stands for local
- `s` stands for S3
Some example snapshot names are:
- c-9dmxz-rl-8b2cx
- c-9dmxz-ml-kr56m
- c-9dmxz-ms-t6bjb
- c-9dmxz-rs-8gxc8
</TabItem>
<TabItem value="RKE2/K3s">
The name of the snapshot is auto-generated. The `--name` option can be used to override the base name of the snapshot when creating one-time snapshots with the RKE2 or K3s CLI.
When Rancher creates a snapshot of an RKE2 or K3s cluster, the snapshot name is based on the type (whether the snapshot is manual or recurring) and the target (whether the snapshot is saved locally or uploaded to S3). The naming convention is as follows:
`<name>-<node>-<timestamp>`
`<name>`: The base name set by `--name`, which is one of the following:
- `etcd-snapshot` is prepended on recurring snapshots
- `on-demand` is prepended on manual, on-demand snapshots
`<node>`: The name of the node that the snapshot was taken on.
`<timestamp>`: A Unix timestamp of the snapshot creation date.
Some example snapshot names are:
- `on-demand-my-super-rancher-k8s-node1-1652288934`
- `on-demand-my-super-rancher-k8s-node2-1652288936`
- `etcd-snapshot-my-super-rancher-k8s-node1-1652289945`
- `etcd-snapshot-my-super-rancher-k8s-node2-1652289948`
</TabItem>
</Tabs>
### How Restoring from a Snapshot Works
<Tabs groupId="k8s-distro">
<TabItem value="RKE">
On restore, the following process is used:
1. The snapshot is retrieved from S3, if S3 is configured.
2. The snapshot is unzipped (if zipped).
3. One of the etcd nodes in the cluster serves that snapshot file to the other nodes.
4. The other etcd nodes download the snapshot and validate the checksum so that they all use the same snapshot for the restore.
5. The cluster is restored and post-restore actions will be done in the cluster.
</TabItem>
<TabItem value="RKE2/K3s">
On restore, Rancher delivers a few sets of plans to perform the restoration. A set of phases is used, namely:
- Started
- Shutdown
- Restore
- RestartCluster
- Finished
If the etcd snapshot restore fails, the phase will be set to `Failed`.
1. The etcd snapshot restore request is received, and depending on `restoreRKEConfig`, the cluster configuration and Kubernetes version are reconciled.
1. The phase is set to `Started`.
1. The phase is set to `Shutdown`, and the entire cluster is shut down using plans that run the distribution `killall.sh` script. A new init node is elected. If the snapshot being restored is a local snapshot, the node that the snapshot resides on will be selected as the init node. If the snapshot is being restored from S3, the existing init node will be used.
1. The phase is set to `Restore`, and the init node has the snapshot restored onto it.
1. The phase is set to `RestartCluster`, and the cluster is restarted/rejoined to the new init node that has the freshly restored snapshot information.
1. The phase is set to `Finished`, and the cluster is deemed successfully restored. The `cattle-cluster-agent` will reconnect, and the cluster will finish reconciliation.
</TabItem>
</Tabs>
## Configuring Recurring Snapshots
<Tabs groupId="k8s-distro">
<TabItem value="RKE">
Select how often you want recurring snapshots to be taken as well as how many snapshots to keep. The amount of time is measured in hours. With timestamped snapshots, the user has the ability to do a point-in-time recovery.
By default, [Rancher launched Kubernetes clusters](../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) are configured to take recurring snapshots (saved to local disk). To protect against local disk failure, using the [S3 Target](#s3-backup-target) or replicating the path on disk is advised.
During cluster provisioning or editing the cluster, the configuration for snapshots can be found in the advanced section for **Cluster Options**. Click on **Show advanced options**.
In the **Advanced Cluster Options** section, there are several options available to configure:
| Option | Description | Default Value|
| --- | ---| --- |
| etcd Snapshot Backup Target | Select where you want the snapshots to be saved. Options are either local or in S3 | local|
|Recurring etcd Snapshot Enabled| Enable/Disable recurring snapshots | Yes|
| Recurring etcd Snapshot Creation Period | Time in hours between recurring snapshots| 12 hours |
| Recurring etcd Snapshot Retention Count | Number of snapshots to retain| 6 |
</TabItem>
<TabItem value="RKE2/K3s">
Set the schedule for how often you want recurring snapshots to be taken, as well as how many snapshots to keep. The schedule uses conventional cron format. The retention policy dictates the number of snapshots matching a name to keep per node.
By default, [Rancher launched Kubernetes clusters](../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) are configured to take recurring snapshots (saved to local disk) every 5 hours starting at 12 AM. To protect against local disk failure, using the [S3 Target](#s3-backup-target) or replicating the path on disk is advised.
During cluster provisioning or editing the cluster, the configuration for snapshots can be found under **Cluster Configuration**. Click on **etcd**.
| Option | Description | Default Value|
| --- | ---| --- |
| Recurring etcd Snapshot Enabled | Enable/Disable recurring snapshots | Yes |
| Recurring etcd Snapshot Creation Period | Cron schedule for recurring snapshot | `0 */5 * * *` |
| Recurring etcd Snapshot Retention Count | Number of snapshots to retain | 5 |
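In the cluster's YAML, these options correspond to the `etcd` section of the provisioning config. A minimal sketch, assuming the `provisioning.cattle.io/v1` cluster format and a hypothetical cluster name:
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: my-cluster        # hypothetical cluster name
  namespace: fleet-default
spec:
  rkeConfig:
    etcd:
      snapshotScheduleCron: "0 */5 * * *"   # recurring snapshot every 5 hours
      snapshotRetention: 5                  # snapshots to keep per node
```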
</TabItem>
</Tabs>
## One-Time Snapshots
<Tabs groupId="k8s-distro">
<TabItem value="RKE">
In addition to recurring snapshots, you may want to take a "one-time" snapshot. For example, before upgrading the Kubernetes version of a cluster, it's best to back up the state of the cluster to protect against upgrade failure.
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, navigate to the cluster where you want to take a one-time snapshot.
1. Click **⋮ > Take Snapshot**.
</TabItem>
<TabItem value="RKE2/K3s">
In addition to recurring snapshots, you may want to take a "one-time" snapshot. For example, before upgrading the Kubernetes version of a cluster, it's best to back up the state of the cluster to protect against upgrade failure.
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, navigate to the cluster where you want to take a one-time snapshot.
1. Navigate to the **Snapshots** tab and click **Snapshot Now**.
### How Taking One-Time Snapshots Works
On one-time snapshot creation, Rancher delivers a few sets of plans to perform the snapshot creation. A set of phases is used, namely:
- Started
- RestartCluster
- Finished
If the etcd snapshot creation fails, the phase will be set to `Failed`.
1. The etcd snapshot creation request is received.
1. The phase is set to `Started`. All etcd nodes in the cluster receive a plan to create an etcd snapshot, per the cluster configuration.
1. The phase is set to `RestartCluster`, and the plans on every etcd node are reset to the original plan for the etcd nodes.
1. The phase is set to `Finished`.
</TabItem>
</Tabs>
**Result:** Based on your [snapshot backup target](#snapshot-backup-targets), a one-time snapshot will be taken and saved in the selected backup target.
## Snapshot Backup Targets
Rancher supports two different backup targets:
- [Local Target](#local-backup-target)
- [S3 Target](#s3-backup-target)
### Local Backup Target
<Tabs groupId="k8s-distro">
<TabItem value="RKE">
By default, the `local` backup target is selected. The benefit of this option is that there is no external configuration. Snapshots are automatically saved locally to the etcd nodes in the [Rancher launched Kubernetes clusters](../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) in `/opt/rke/etcd-snapshots`. All recurring snapshots are taken at configured intervals. The downside of using the `local` backup target is that if there is a total disaster and _all_ etcd nodes are lost, there is no ability to restore the cluster.
</TabItem>
<TabItem value="RKE2/K3s">
By default, the `local` backup target is selected. The benefit of this option is that there is no external configuration. Snapshots are automatically saved locally to the etcd nodes in the [Rancher launched Kubernetes clusters](../launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) in `/var/lib/rancher/<runtime>/server/db/snapshots` where `<runtime>` is either `k3s` or `rke2`. All recurring snapshots are taken per the cron schedule. The downside of using the `local` backup target is that if there is a total disaster and _all_ etcd nodes are lost, there is no ability to restore the cluster.
</TabItem>
</Tabs>
### S3 Backup Target
We recommend that you use the `S3` backup target. It lets you store snapshots externally, on an S3 compatible backend. Since the snapshots aren't stored locally, you can still restore the cluster even if you lose all etcd nodes.
Although the `S3` target offers advantages over local backup, it does require extra configuration.
:::caution
If you use an S3 backup target, make sure that every cluster has its own bucket or folder. Rancher populates snapshot information from any available snapshot listed in the S3 bucket or folder configured for that cluster.
:::
| Option | Description | Required|
|---|---|---|
|S3 Bucket Name| Name of S3 bucket to store backups| *|
|S3 Region|S3 region for the backup bucket| |
|S3 Region Endpoint|S3 region endpoint for the backup bucket|* |
|S3 Access Key|S3 access key with permission to access the backup bucket|*|
|S3 Secret Key|S3 secret key with permission to access the backup bucket|*|
| Custom CA Certificate | A custom certificate used to access private S3 backends ||
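For RKE clusters managed as YAML, these options map to the `backup_config` directive under the `etcd` service in `cluster.yml`. A sketch with placeholder values; the bucket, region, and credentials are hypothetical:
```yaml
services:
  etcd:
    backup_config:
      interval_hours: 12          # recurring snapshot period
      retention: 6                # snapshots to keep
      s3backupconfig:
        access_key: "<S3_ACCESS_KEY>"
        secret_key: "<S3_SECRET_KEY>"
        bucket_name: my-etcd-snapshots    # hypothetical bucket
        region: us-west-2
        endpoint: s3.us-west-2.amazonaws.com
        # custom_ca: <PEM-encoded certificate, for private S3 backends>
```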
### Using a Custom CA Certificate for S3
The backup snapshot can be stored on a custom `S3` backend such as [MinIO](https://min.io/). If the S3 backend uses a self-signed or custom certificate, provide the certificate using the `Custom CA Certificate` option to connect to the S3 backend.
### IAM Support for Storing Snapshots in S3
The `S3` backup target supports using IAM authentication to the AWS API in addition to using API credentials. An IAM role gives temporary permissions that an application can use when making API calls to S3 storage. To use IAM authentication, the following requirements must be met:
- The cluster etcd nodes must have an instance role that has read/write access to the designated backup bucket.
- The cluster etcd nodes must have network access to the specified S3 endpoint.
- The Rancher Server worker node(s) must have an instance role that has read/write access to the designated backup bucket.
- The Rancher Server worker node(s) must have network access to the specified S3 endpoint.
To give an application access to S3, refer to the AWS documentation on [Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances.](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html)
## Viewing Available Snapshots
The list of all available snapshots for the cluster is available in the Rancher UI.
1. In the upper left corner, click **☰ > Cluster Management**.
1. In the **Clusters** page, go to the cluster where you want to view the snapshots and click its name.
1. Click the **Snapshots** tab to view the list of saved snapshots. These snapshots include a timestamp of when they were created.
## Safe Timestamps (RKE)
Snapshot files are timestamped to simplify processing the files using external tools and scripts, but in some S3 compatible backends, these timestamps were unusable.
The option `safe_timestamp` is added to support compatible file names. When this flag is set to `true`, all special characters in the snapshot filename timestamp are replaced.
This option is not available directly in the UI, and is only available through the `Edit as Yaml` interface.
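A sketch of where the flag lives in the cluster YAML, under the same `backup_config` section described earlier:
```yaml
services:
  etcd:
    backup_config:
      safe_timestamp: true   # replaces special characters in snapshot timestamps
```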
@@ -0,0 +1,96 @@
---
title: Backing up Rancher
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher"/>
</head>
In this section, you'll learn how to back up Rancher running on any Kubernetes cluster. To back up Rancher installed with Docker, refer to the instructions for [single node backups](back-up-docker-installed-rancher.md).
The backup-restore operator needs to be installed in the local cluster, and only backs up the Rancher app. The backup and restore operations are performed only in the local Kubernetes cluster.
Note that the rancher-backup operator version 2.x.x is for Rancher v2.6.x.
:::caution
When restoring a backup into a new Rancher setup, the version of the new setup should be the same as the one where the backup is made. The Kubernetes version should also be considered when restoring a backup, since the supported apiVersion in the cluster and in the backup file could be different.
:::
### Prerequisites
The Rancher version must be v2.5.0 and up.
Refer [here](migrate-rancher-to-new-cluster.md#2-restore-from-backup-using-a-restore-custom-resource) for help on restoring an existing backup file into a v1.22 cluster in Rancher v2.6.3.
### 1. Install the Rancher Backup operator
The backup storage location is an operator-level setting, so it needs to be configured when the Rancher Backups application is installed or upgraded.
Backups are created as .tar.gz files. These files can be pushed to S3 or MinIO, or they can be stored in a persistent volume.
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the `local` cluster and click **Explore**. The `local` cluster runs the Rancher server.
1. Click **Apps > Charts**.
1. Click **Rancher Backups**.
1. Click **Install**.
1. Configure the default storage location. For help, refer to the [storage configuration section.](../../../reference-guides/backup-restore-configuration/storage-configuration.md)
1. Click **Install**.
:::note
There is a known issue in Fleet that occurs after performing a restoration using the backup-restore-operator: Secrets used for clientSecretName and helmSecretName are not included in Fleet gitrepos. Refer [here](../../../integrations-in-rancher/fleet/overview.md#troubleshooting) for a workaround.
:::
### 2. Perform a Backup
To perform a backup, a custom resource of type Backup must be created.
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the `local` cluster and click **Explore**.
1. In the left navigation bar, click **Rancher Backups > Backups**.
1. Click **Create**.
1. Create the Backup with the form, or with the YAML editor.
1. For configuring the Backup details using the form, click **Create** and refer to the [configuration reference](../../../reference-guides/backup-restore-configuration/backup-configuration.md) and to the [examples.](../../../reference-guides/backup-restore-configuration/examples.md#backup)
1. To use the YAML editor, click **Create > Create from YAML**, then enter the Backup YAML. This example Backup custom resource creates encrypted recurring backups in S3. The app uses the `credentialSecretNamespace` value to determine where to look for the S3 backup secret:
```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
name: s3-recurring-backup
spec:
storageLocation:
s3:
credentialSecretName: s3-creds
credentialSecretNamespace: default
bucketName: rancher-backups
folder: rancher
region: us-west-2
endpoint: s3.us-west-2.amazonaws.com
resourceSetName: rancher-resource-set
encryptionConfigSecretName: encryptionconfig
schedule: "@every 1h"
retentionCount: 10
```
:::note
When creating the Backup resource using the YAML editor, the `resourceSetName` must be set to `rancher-resource-set`.
:::
For help configuring the Backup, refer to the [configuration reference](../../../reference-guides/backup-restore-configuration/backup-configuration.md) and to the [examples.](../../../reference-guides/backup-restore-configuration/examples.md#backup)
:::caution
The `rancher-backup` operator doesn't save the EncryptionConfiguration file. The contents of the EncryptionConfiguration file must be saved when an encrypted backup is created, and the same file must be used when restoring from this backup.
:::
1. Click **Create**.
**Result:** The backup file is created in the storage location configured in the Backup custom resource. The name of this file is used when performing a restore.
@@ -0,0 +1,133 @@
---
title: Backup Restore Usage Guide
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-restore-usage-guide"/>
</head>
The Rancher Backups chart is our solution for disaster recovery and migration. This chart allows you to take backups of your Kubernetes resources and save them to a variety of persistent storage locations.
This chart is a very simple tool that has its hands in many different areas of the Rancher ecosystem. As a result, edge cases have arisen that lead to undocumented functionality. The purpose of this document is to highlight the proper and defined usage of Rancher Backups, as well as to discuss some of the edge cases we've run into.
## Functionality Overview
### Backup
The operator collects all the resources captured by the resourceSet in the chart as in-memory unstructured objects. After the resources have been collected, a compressed tar file of the resources is saved as a collection of manifests in JSON and then uploaded to a user-defined object store. This backup can be put on a repeating schedule and can also be encrypted. This encryption option is important, since some of the resources are sensitive and their values are stored in plaintext without encryption.
See the [Backup Configuration documentation](../../../reference-guides/backup-restore-configuration/backup-configuration.md) for more information about the options, including encryption, to configure a backup.
:::note
As noted in the [Backing up Rancher documentation](./back-up-rancher.md), you must manually save the encryption configuration file contents since the operator will **not** back it up.
:::
### Restore
There are two main restore scenarios: restoring to a cluster with Rancher running, and restoring to a fresh cluster. You can only restore to a cluster with Rancher running if it's the same cluster the backup was taken from and the [`prune` option](../../../reference-guides/backup-restore-configuration/restore-configuration.md#prune-during-restore) is enabled during the restore. A restore takes similar inputs to a backup: it requires a backup filename, the encryptionConfigSecret name, and the storage location, as sketched below.
Resources are restored in this order:
1. Custom Resource Definitions (CRDs)
2. Cluster-scoped resources
3. Namespaced resources
See the [Restore Configuration documentation](../../../reference-guides/backup-restore-configuration/restore-configuration.md) for more information about the options to configure a restore.
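A minimal sketch of a Restore custom resource for the same-cluster scenario; the backup filename is hypothetical, and the storage values mirror the S3 backup example from [Backing up Rancher](./back-up-rancher.md):
```yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-from-s3
spec:
  backupFilename: s3-recurring-backup-2024-05-28T10-30-00Z.tar.gz   # hypothetical filename
  encryptionConfigSecretName: encryptionconfig   # required for encrypted backups
  prune: true   # required when Rancher is already running in the cluster
  storageLocation:
    s3:
      credentialSecretName: s3-creds
      credentialSecretNamespace: default
      bucketName: rancher-backups
      folder: rancher
      region: us-west-2
      endpoint: s3.us-west-2.amazonaws.com
```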
### Resource Sets
The resourceSet determines which resources the backup-restore-operator collects in a backup. It's a set of ResourceSelectors, which define the selection requirements using key/value matching, regular expression matching, or the Kubernetes client labelSelector.
These are the different fields available for a resourceSelector:
- apiVersion
- excludeKinds
- excludeResourceNameRegexp
- kinds
- kindsRegexp
- labelSelectors
- namespaceRegexp
- namespaces
- resourceNameRegexp
- resourceNames
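A hypothetical selector using a few of these fields, which would capture secrets labeled `backup-me: true` in namespaces beginning with `cattle-`:
```yaml
resourceSelectors:
- apiVersion: v1
  kinds:
  - Secret
  namespaceRegexp: '^cattle-'
  labelSelectors:
    matchLabels:
      backup-me: 'true'   # hypothetical label
```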
The Rancher Backups chart contains a [default resourceSet](https://github.com/rancher/backup-restore-operator/tree/release/v3.0/charts/rancher-backup/files/default-resourceset-contents), which is a combination of YAML files that are appended to one large resourceSet when the chart is installed. The file order does not matter. The resourceSets may differ between versions.
:::caution
If you wish to make edits to the resourceSet, please edit it **before** installing the chart.
:::
## Proper Usage
This section outlines the guidelines for the proper usage of the Rancher Backups chart according to its use case.
### All Cases
- Rancher Backups must be installed on the local cluster.
- Note: Rancher Backups does not handle any cluster other than the one it is installed on. It may restore cluster resources that are on the local cluster, but it will not communicate with the downstream clusters or back them up.
- The Rancher version being restored to must match the Rancher version from backup.
- The Kubernetes version should be considered, since you may be restoring outdated resources (resources deprecated from the version of Kubernetes you are restoring to).
### Backups
- Some user-generated resources will not be backed up unless they are captured by the default resourceSet or the resourceSet was altered to capture them.
- We provide a label, `resources.cattle.io/backup: true`, that, when added to a secret in any namespace, will result in it being backed up.
- Backups are non-mutating.
- Backups only cover the local cluster.
### Restores
- A restore refers to restoring a backup to the same cluster it was taken from. This can be done with Rancher installed (**prune must be enabled**) or without Rancher installed (no special instructions).
- One thing to note when restoring is that you may need to "wipe" the cluster of any Rancher resources. You can do this by deploying the [Rancher cleanup script](https://github.com/rancher/rancher-cleanup) as a job on the cluster, which allows you to install Rancher Backups again and restore to a completely fresh cluster.
- Make sure to use `kubectl` to deploy the cleanup job.
### Migrations
Migrations have more nuance, since you are restoring to a different cluster. Here are a few things that are commonly missed or forgotten.
- The Rancher domain must be the same when migrating. This means your previous cluster's domain name must now point to the new cluster.
- Rancher should **not** already be running on the cluster you are migrating to. This can cause many issues with Rancher Backups as well as with certain Rancher services.
- Install the **same** version of Rancher from the backup **after** the backup has been restored.
- If you choose to provision the new cluster on a different Kubernetes version, know that this can cause a wide variety of unsupported behaviors, because the available Kubernetes API may differ from the one you took the backup from. This can lead to deprecated resources being restored, which will cause issues.
- You should **not** perform any upgrades during a migration.
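To make the distinction from an in-place restore concrete, here is a hedged sketch of a migration restore: Rancher is not yet installed on the new cluster, so `prune` is disabled and the backup file is pulled from the old cluster's object store. All names are placeholder assumptions.

```yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-migration         # placeholder name
spec:
  backupFilename: nightly-encrypted-backup-2024-05-28T00-00-00Z.tar.gz # placeholder filename
  prune: false                    # Rancher is not running on the new cluster yet
  encryptionConfigSecretName: encryptionconfig # recreate this secret on the new cluster first
  storageLocation:
    s3:
      credentialSecretName: s3-creds
      credentialSecretNamespace: cattle-resources-system
      bucketName: rancher-backups
      folder: rancher
      region: us-west-2
      endpoint: s3.us-west-2.amazonaws.com
```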
## Edge Cases and Improper Usage
Below are some examples of **incorrect** uses or expectations of Rancher Backups.
### Upgrades
- Using Rancher Backups to upgrade Rancher versions is not a valid use case. The recommended procedure is to take a backup of the current version, upgrade your Rancher instance using [these instructions](../../../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades.md), and then take **another** backup after the upgrade is complete. This way, if the upgrade fails, you have a backup to restore to, and the second backup is valid to restore to the upgraded Rancher version.
- Using Rancher Backups to upgrade Kubernetes versions is not a valid use case either. Because the Kubernetes API and available resources are tied to the version, upgrading via backup and restore can lead to misaligned sets of resources that may be deprecated, unsupported, or updated. How to upgrade your cluster version depends on how it was provisioned; however, the same pattern as above is recommended (backup, upgrade, backup).
### ResourceSet
- Because resources and services from various teams evolve, developers should consider whether new resources need to be added to, or removed from, the default resourceSet.
- Rancher Backups only backs up what is captured by the default resourceSet (unless edited). We have added a specific label for user-created secrets that will back up a secret of any name and namespace carrying that label (see [Proper Usage on Backups](#backups)).
### Downstream Clusters
- Rancher Backups **only** backs up Kubernetes resources on the local cluster. This means downstream clusters are **not** touched or backed up, aside from the local-cluster resources that represent them. Updating and communicating with downstream clusters falls to the rancher-agent and rancher-webhook.
### Restoring Deleted Resources
- Some resources produce external effects, such as provisioning a downstream cluster. Deleting a downstream cluster and then restoring the cluster resource on the local cluster does **not** cause Rancher to reprovision that cluster. Depending on the resource, restoring may not fully return it to an available state.
- The corner case of "restoring a deleted cluster" is **not** a supported feature. When a downstream cluster, whether provisioned or imported, is deleted, a series of cleanup tasks runs that prevents the user from restoring the deleted cluster. Provisioned clusters will have their nodes and Rancher-related provisioning resources destroyed, and imported clusters will likely have their Rancher agents and other resources and services related to registration with the local cluster destroyed.
:::caution
Attempting to delete and restore a downstream cluster can lead to a variety of issues with Rancher, Rancher Backups, rancher-webhook, Fleet, and more. This is not recommended.
:::
### Fleet, Harvester, and Other Services
Other services that are backed up by Rancher Backups often change and evolve. As this happens, their resources and backup needs may change as well: some resources may no longer need to be backed up, and others may not be backed up at all. It is important for teams to consider this in their development process and assess whether the related resourceSets correctly capture the set of resources their services need in order to be restored correctly.
## Conclusion
Rancher Backups is a very useful tool; however, it is limited in scope and intended purpose. To avoid difficulties, follow the specific procedures described above to ensure the proper operation of the chart.