Reorganize docs for Rancher v2.5

This commit is contained in:
Catherine Luse
2020-08-19 14:10:00 -07:00
parent 706c773763
commit 9cdf2e0032
333 changed files with 308 additions and 26590 deletions
@@ -1,12 +0,0 @@
---
title: "Rancher 2.5"
shortTitle: "Rancher 2.5"
description: "Rancher adds significant value on top of Kubernetes: managing hundreds of clusters from one interface, centralizing RBAC, enabling monitoring and alerting. Read more."
metaTitle: "Rancher 2.5 Docs: What is New?"
metaDescription: "Rancher 2 adds significant value on top of Kubernetes: managing hundreds of clusters from one interface, centralizing RBAC, enabling monitoring and alerting. Read more."
insertOneSix: false
weight: 1
ctaBanner: 0
---
> This page is under construction.
@@ -1,52 +0,0 @@
---
title: API
weight: 19
---
## How to use the API
The API has its own user interface accessible from a web browser. This is an easy way to see resources, perform actions, and see the equivalent cURL or HTTP request & response. To access it, click on your user avatar in the upper right corner. Under **API & Keys**, you can find the URL endpoint as well as create [API keys]({{<baseurl>}}/rancher/v2.x/en/user-settings/api-keys/).
## Authentication
API requests must include authentication information. Authentication is done with HTTP basic authentication using [API Keys]({{<baseurl>}}/rancher/v2.x/en/user-settings/api-keys/). API keys can create new clusters and have access to multiple clusters via `/v3/clusters/`. [Cluster and project roles]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/) apply to these keys and restrict what clusters and projects the account can see and what actions they can take.
By default, some cluster-level API tokens are generated with infinite time-to-live (`ttl=0`). In other words, API tokens with `ttl=0` never expire unless you invalidate them. For details on how to invalidate them, refer to the [API tokens page]({{<baseurl>}}/rancher/v2.x/en/api/api-tokens).
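For example, a minimal authenticated request might look like the following sketch, where `<RANCHER_SERVER_URL>`, `<ACCESS_KEY>`, and `<SECRET_KEY>` are placeholders for your environment:
```bash
# List the clusters visible to this API key, using HTTP basic authentication.
# The access key is the token's username and the secret key is its password.
curl -s -u "<ACCESS_KEY>:<SECRET_KEY>" \
  "https://<RANCHER_SERVER_URL>/v3/clusters"
```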
## Making requests
The API is generally RESTful but has several features to make the definition of everything discoverable by a client so that generic clients can be written instead of having to write specific code for every type of resource. For detailed info about the generic API spec, [see here](https://github.com/rancher/api-spec/blob/master/specification.md).
- Every type has a Schema which describes:
- The URL to get to the collection of this type of resources
- Every field the resource can have, along with their type, basic validation rules, whether they are required or optional, etc.
- Every action that is possible on this type of resource, with their inputs and outputs (also as schemas).
- Every field that filtering is allowed on
- What HTTP verb methods are available for the collection itself, or for individual resources in the collection.
- So the theory is that you can load just the list of schemas and know everything about the API. This is in fact how the UI for the API works; it contains no code specific to Rancher itself. The URL to get Schemas is sent in every HTTP response as an `X-Api-Schemas` header. From there you can follow the `collection` link on each schema to know where to list resources, and other `links` inside of the returned resources to get any other information.
- In practice, you will probably just want to construct URL strings. We highly suggest limiting this to the top-level to list a collection (`/v3/<type>`) or get a specific resource (`/v3/<type>/<id>`). Anything deeper than that is subject to change in future releases.
- Resources have relationships between each other called links. Each resource includes a map of `links` with the name of the link and the URL to retrieve that information. Again you should `GET` the resource and then follow the URL in the `links` map, not construct these strings yourself.
- Most resources have actions, which do something or change the state of the resource. To use these, send an HTTP `POST` to the URL in the `actions` map for the action you want. Some actions require input or produce output; see the individual documentation for each type or the schemas for specific information.
- To edit a resource, send an HTTP `PUT` to the `links.update` link on the resource with the fields that you want to change. If the link is missing, then you don't have permission to update the resource. Unknown fields and ones that are not editable are ignored.
- To delete a resource, send an HTTP `DELETE` to the `links.remove` link on the resource. If the link is missing, then you don't have permission to delete the resource.
- To create a new resource, send an HTTP `POST` to the collection URL in the schema (which is `/v3/<type>`).
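As a sketch of these conventions, the requests below list a collection, update a resource, and delete it. The `projects` type, the placeholder IDs, and the `description` field are illustrative only; in practice, prefer following the `links` and `actions` URLs returned by the API rather than constructing URLs yourself.
```bash
# List a top-level collection: GET /v3/<type>
curl -s -u "<ACCESS_KEY>:<SECRET_KEY>" \
  "https://<RANCHER_SERVER_URL>/v3/projects"

# Update a resource: PUT the changed fields (ideally to its links.update URL)
curl -s -u "<ACCESS_KEY>:<SECRET_KEY>" -X PUT \
  -H 'Content-Type: application/json' \
  -d '{"description": "updated via the API"}' \
  "https://<RANCHER_SERVER_URL>/v3/projects/<PROJECT_ID>"

# Delete a resource: DELETE (ideally to its links.remove URL)
curl -s -u "<ACCESS_KEY>:<SECRET_KEY>" -X DELETE \
  "https://<RANCHER_SERVER_URL>/v3/projects/<PROJECT_ID>"
```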
## Filtering
Most collections can be filtered on the server-side by common fields using HTTP query parameters. The `filters` map shows you what fields can be filtered on and what the filter values were for the request you made. The API UI has controls to set up filtering and show you the appropriate request. For simple "equals" matches it's just `field=value`. Modifiers can be added to the field name, e.g. `field_gt=42` for "field is greater than 42". See the [API spec](https://github.com/rancher/api-spec/blob/master/specification.md#filtering) for full details.
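For example, assuming `name` is a filterable field for the type, and using a hypothetical numeric field for the modifier form:
```bash
# Simple "equals" match
curl -s -u "<ACCESS_KEY>:<SECRET_KEY>" \
  "https://<RANCHER_SERVER_URL>/v3/clusters?name=my-cluster"

# Modifier match: only resources where the hypothetical numeric field is greater than 42
curl -s -u "<ACCESS_KEY>:<SECRET_KEY>" \
  "https://<RANCHER_SERVER_URL>/v3/<type>?field_gt=42"
```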
## Sorting
Most collections can be sorted on the server-side by common fields using HTTP query parameters. The `sortLinks` map shows you what sorts are available, along with the URL to get the collection sorted by that. It also includes info about what the current response was sorted by, if specified.
## Pagination
API responses are paginated with a limit of 100 resources per page by default. This can be changed with the `limit` query parameter, up to a maximum of 1000, e.g. `/v3/pods?limit=1000`. The `pagination` map in collection responses tells you whether or not you have the full result set and has a link to the next page if you do not.
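For example, to request larger pages and inspect the pagination info (assuming `jq` is available on your machine):
```bash
# Request up to 1000 pods per page and show the pagination map,
# which contains a link to the next page if the result set is larger
curl -s -u "<ACCESS_KEY>:<SECRET_KEY>" \
  "https://<RANCHER_SERVER_URL>/v3/pods?limit=1000" | jq '.pagination'
```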
@@ -1,50 +0,0 @@
---
title: API Keys
weight: 3
---
If you want to access your Rancher clusters, projects, or other objects using external applications, you can do so using the Rancher API. However, before your application can access the API, you must provide the app with a key used to authenticate with Rancher. You can obtain a key using the Rancher UI.
An API key is also required for using Rancher CLI.
API Keys are composed of four components:
- **Endpoint:** This is the IP address and path that other applications use to send requests to the Rancher API.
- **Access Key:** The token's username.
- **Secret Key:** The token's password. For applications that prompt you for two different strings for API authentication, you usually enter the two keys together.
- **Bearer Token:** The token username and password concatenated together. Use this string for applications that prompt you for one authentication string.
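For example, with placeholder values, the same key can be presented either as two strings or as a single Bearer Token:
```bash
# Two-string form: access key as the username, secret key as the password
curl -s -u "<ACCESS_KEY>:<SECRET_KEY>" "https://<RANCHER_SERVER_URL>/v3"

# Single-string form: the Bearer Token is the two keys joined by a colon
curl -s -H "Authorization: Bearer <ACCESS_KEY>:<SECRET_KEY>" "https://<RANCHER_SERVER_URL>/v3"
```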
## Creating an API Key
1. Select **User Avatar** > **API & Keys** from the **User Settings** menu in the upper-right.
2. Click **Add Key**.
3. **Optional:** Enter a description for the API key and select an expiration period or a scope. We recommend setting an expiration date.
The API key won't be valid after expiration. Shorter expiration periods are more secure.
A scope will limit the API key so that it will only work against the Kubernetes API of the specified cluster. If the cluster is configured with an Authorized Cluster Endpoint, you will be able to use a scoped token directly against the cluster's API without proxying through the Rancher server. See [Authorized Cluster Endpoints]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/#4-authorized-cluster-endpoint) for more information.
4. Click **Create**.
**Step Result:** Your API Key is created. Your API **Endpoint**, **Access Key**, **Secret Key**, and **Bearer Token** are displayed.
Use the **Bearer Token** to authenticate with Rancher CLI.
5. Copy the information displayed to a secure location. This information is only displayed once, so if you lose your key, you'll have to make a new one.
## What's Next?
- Enter your API key information into the application that will send requests to the Rancher API.
- Learn more about the Rancher endpoints and parameters by selecting **View in API** for an object in the Rancher UI.
- API keys are used for API calls and [Rancher CLI]({{<baseurl>}}/rancher/v2.x/en/cli).
## Deleting API Keys
If you need to revoke an API key, delete it. You should delete API keys:
- That may have been compromised.
- That have expired.
To delete an API key, select the stale key and click **Delete**.
@@ -1,29 +0,0 @@
---
title: API Tokens
weight: 1
---
By default, some cluster-level API tokens are generated with infinite time-to-live (`ttl=0`). In other words, API tokens with `ttl=0` never expire unless you invalidate them. Tokens are not invalidated by changing a password.
You can deactivate API tokens by deleting them or by deactivating the user account.
To delete a token,
1. Go to the list of all tokens in the Rancher API view at `https://<Rancher-Server-IP>/v3/tokens`.
1. Access the token you want to delete by its ID. For example, `https://<Rancher-Server-IP>/v3/tokens/kubectl-shell-user-vqkqt`
1. Click **Delete.**
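Alternatively, a token can be deleted with a direct API call. This is a sketch that reuses the example token ID above and authenticates with an admin API key (placeholder values):
```bash
curl -s -u "<ACCESS_KEY>:<SECRET_KEY>" -X DELETE \
  "https://<Rancher-Server-IP>/v3/tokens/kubectl-shell-user-vqkqt"
```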
Here is the complete list of tokens that are generated with `ttl=0`:
| Token | Description |
|-------|-------------|
| `kubeconfig-*` | Kubeconfig token |
| `kubectl-shell-*` | Access to `kubectl` shell in the browser |
| `agent-*` | Token for agent deployment |
| `compose-token-*` | Token for compose |
| `helm-token-*` | Token for Helm chart deployment |
| `*-pipeline*` | Pipeline token for project |
| `telemetry-*` | Telemetry token |
| `drain-node-*` | Token for drain (we use `kubectl` for drain because there is no native Kubernetes API) |
@@ -1,14 +0,0 @@
---
title: Backups and Disaster Recovery
weight: 8
---
> This section is under construction.
This section is devoted to protecting your data in a disaster scenario.
To protect yourself from a disaster scenario, you should create backups on a regular basis.
We recommend using the backup/restore application to back up Rancher and to restore it from backup.
The Helm chart for the application is available in Rancher. After you have enabled the application, you will be able to use backup templates for Rancher, Fleet, and the Enterprise Cluster Manager.
@@ -1,6 +0,0 @@
---
title: Legacy Backup and Restore Docs
weight: 2
---
> This section is under construction.
@@ -1,6 +0,0 @@
---
title: Rancher Installed with Docker
weight: 4
---
> This section is under construction.
@@ -1,69 +0,0 @@
---
title: Backing up Rancher Installed with Docker
weight: 4
---
After completing your Docker installation of Rancher, we recommend creating backups of it on a regular basis. Having a recent backup will let you recover quickly from an unexpected disaster.
## Before You Start
During the creation of your backup, you'll enter a series of commands, replacing placeholders with data from your environment. These placeholders are denoted with angled brackets and all capital letters (`<EXAMPLE>`). Here's an example of a command with a placeholder:
```
docker run --volumes-from rancher-data-<DATE> -v $PWD:/backup busybox tar pzcvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz /var/lib/rancher
```
In this command, `<DATE>` is a placeholder for the date that the data container and backup were created (for example, `9-27-18`).
Cross-reference the image and reference table below to learn how to obtain this placeholder data. Write down or copy this information before starting the [procedure below](#creating-a-backup).
<sup>Terminal `docker ps` Command, Displaying Where to Find `<RANCHER_CONTAINER_TAG>` and `<RANCHER_CONTAINER_NAME>`</sup>
![Placeholder Reference]({{<baseurl>}}/img/rancher/placeholder-ref.png)
| Placeholder | Example | Description |
| -------------------------- | -------------------------- | --------------------------------------------------------- |
| `<RANCHER_CONTAINER_TAG>` | `v2.0.5` | The rancher/rancher image you pulled for initial install. |
| `<RANCHER_CONTAINER_NAME>` | `festive_mestorf` | The name of your Rancher container. |
| `<RANCHER_VERSION>` | `v2.0.5` | The version of Rancher that you're creating a backup for. |
| `<DATE>` | `9-27-18` | The date that the data container or backup was created. |
<br/>
You can obtain `<RANCHER_CONTAINER_TAG>` and `<RANCHER_CONTAINER_NAME>` by logging into your Rancher Server by remote connection and entering the command to view the containers that are running: `docker ps`. You can also view containers that are stopped with `docker ps -a`. Use these commands for help anytime while creating backups.
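As a sketch, with the example values from the table above (`festive_mestorf`, `v2.0.5`, and `9-27-18`), the data container and backup commands used in the procedure below would look like this:
```bash
docker create --volumes-from festive_mestorf --name rancher-data-9-27-18 rancher/rancher:v2.0.5
docker run --volumes-from rancher-data-9-27-18 -v $PWD:/backup busybox tar pzcvf /backup/rancher-data-backup-v2.0.5-9-27-18.tar.gz /var/lib/rancher
```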
## Creating a Backup
This procedure creates a backup that you can restore if Rancher encounters a disaster scenario.
1. Using a remote Terminal connection, log into the node running your Rancher Server.
1. Stop the container currently running Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the [name of your Rancher container](#before-you-start).
```
docker stop <RANCHER_CONTAINER_NAME>
```
1. <a id="backup"></a>Use the command below, replacing each [placeholder](#before-you-start), to create a data container from the Rancher container that you just stopped.
```
docker create --volumes-from <RANCHER_CONTAINER_NAME> --name rancher-data-<DATE> rancher/rancher:<RANCHER_CONTAINER_TAG>
```
1. <a id="tarball"></a>From the data container that you just created (`rancher-data-<DATE>`), create a backup tarball (`rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`). Use the following command, replacing each [placeholder](#before-you-start).
```
docker run --volumes-from rancher-data-<DATE> -v $PWD:/backup:z busybox tar pzcvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz /var/lib/rancher
```
**Step Result:** A stream of commands runs on the screen.
1. Enter the `ls` command to confirm that the backup tarball was created. It will have a name similar to `rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`.
1. Move your backup tarball to a safe location external to your Rancher Server. Then delete the `rancher-data-<DATE>` container from your Rancher Server.
1. Restart Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the name of your [Rancher container](#before-you-start).
```
docker start <RANCHER_CONTAINER_NAME>
```
**Result:** A backup tarball of your Rancher Server data is created. See [Restoring Backups: Docker Installs]({{<baseurl>}}/rancher/v2.x/en/backups/restorations/single-node-restoration) if you need to restore backup data.
@@ -1,68 +0,0 @@
---
title: Restoring Rancher Installed with Docker
weight: 3
---
If you encounter a disaster scenario, you can restore your Rancher Server to your most recent backup.
## Before You Start
When restoring your backup, you'll enter a series of commands, filling placeholders with data from your environment. These placeholders are denoted with angled brackets and all capital letters (`<EXAMPLE>`). Here's an example of a command with a placeholder:
```
docker run --volumes-from <RANCHER_CONTAINER_NAME> -v $PWD:/backup \
busybox sh -c "rm /var/lib/rancher/* -rf && \
tar pzxvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz"
```
In this command, `<RANCHER_CONTAINER_NAME>` and `<RANCHER_VERSION>-<DATE>` are placeholders for your Rancher deployment.
Cross-reference the image and reference table below to learn how to obtain this placeholder data. Write down or copy this information before starting the [procedure below](#restoring-backups).
<sup>Terminal `docker ps` Command, Displaying Where to Find `<RANCHER_CONTAINER_TAG>` and `<RANCHER_CONTAINER_NAME>`</sup>
![Placeholder Reference]({{<baseurl>}}/img/rancher/placeholder-ref.png)
| Placeholder | Example | Description |
| -------------------------- | -------------------------- | --------------------------------------------------------- |
| `<RANCHER_CONTAINER_TAG>` | `v2.0.5` | The rancher/rancher image you pulled for initial install. |
| `<RANCHER_CONTAINER_NAME>` | `festive_mestorf` | The name of your Rancher container. |
| `<RANCHER_VERSION>` | `v2.0.5` | The version number for your Rancher backup. |
| `<DATE>` | `9-27-18` | The date that the data container or backup was created. |
<br/>
You can obtain `<RANCHER_CONTAINER_TAG>` and `<RANCHER_CONTAINER_NAME>` by logging into your Rancher Server by remote connection and entering the command to view the containers that are running: `docker ps`. You can also view containers that are stopped using a different command: `docker ps -a`. Use these commands for help anytime while restoring backups.
## Restoring Backups
Using a [backup]({{<baseurl>}}/rancher/v2.x/en/backups/backups/single-node-backups/) that you created earlier, restore Rancher to its last known healthy state.
1. Using a remote Terminal connection, log into the node running your Rancher Server.
1. Stop the container currently running Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the [name of your Rancher container](#before-you-start).
```
docker stop <RANCHER_CONTAINER_NAME>
```
1. Move the backup tarball that you created during completion of [Creating Backups—Docker Installs]({{<baseurl>}}/rancher/v2.x/en/backups/backups/single-node-backups/) onto your Rancher Server. Change to the directory that you moved it to. Enter `ls` to confirm that it's there.
If you followed the naming convention we suggested in [Creating Backups—Docker Installs]({{<baseurl>}}/rancher/v2.x/en/backups/backups/single-node-backups/), it will have a name similar to `rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`.
1. Enter the following command to delete your current state data and replace it with your backup data, replacing the [placeholders](#before-you-start). Don't forget to close the quotes.
>**Warning!** This command deletes all current state data from your Rancher Server container. Any changes saved after your backup tarball was created will be lost.
```
docker run --volumes-from <RANCHER_CONTAINER_NAME> -v $PWD:/backup \
busybox sh -c "rm /var/lib/rancher/* -rf && \
tar pzxvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz"
```
**Step Result:** A series of commands should run.
1. Restart your Rancher Server container, replacing the [placeholder](#before-you-start). It will restart using your backup data.
```
docker start <RANCHER_CONTAINER_NAME>
```
1. Wait a few moments and then open Rancher in a web browser. Confirm that the restoration succeeded and that your data is restored.
@@ -1,74 +0,0 @@
---
title: Special Scenarios for Rollbacks
weight: 40
---
If you are rolling back to versions in either of these scenarios, you must follow some extra instructions in order to get your clusters working.
- Rolling back from v2.1.6+ to any version between v2.1.0 - v2.1.5 or v2.0.0 - v2.0.10.
- Rolling back from v2.0.11+ to any version between v2.0.0 - v2.0.10.
Because of the changes necessary to address [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321), special steps are necessary if the user wants to roll back to a previous version of Rancher where this vulnerability exists. The steps are as follows:
1. Record the `serviceAccountToken` for each cluster. To do this, save the following script on a machine with `kubectl` access to the Rancher management plane and execute it. You will need to run these commands on the machine where the Rancher container is running. Ensure `jq` is installed before running the command. The commands will vary depending on how you installed Rancher.
**Rancher Installed with Docker**
```
docker exec <NAME OF RANCHER CONTAINER> kubectl get clusters -o json | jq '[.items[] | select(any(.status.conditions[]; .type == "ServiceAccountMigrated")) | {name: .metadata.name, token: .status.serviceAccountToken}]' > tokens.json
```
**Rancher Installed on a Kubernetes Cluster**
```
kubectl get clusters -o json | jq '[.items[] | select(any(.status.conditions[]; .type == "ServiceAccountMigrated")) | {name: .metadata.name, token: .status.serviceAccountToken}]' > tokens.json
```
2. After executing the command, a `tokens.json` file will be created. **Important!** Back up this file in a safe place. You will need it to restore functionality to your clusters after rolling back Rancher. **If you lose this file, you may lose access to your clusters.**
3. Roll back Rancher following the [normal instructions]({{<baseurl>}}/rancher/v2.x/en/upgrades/rollbacks/).
4. Once Rancher comes back up, every cluster managed by Rancher (except for Imported clusters) will be in an `Unavailable` state.
5. Apply the backed up tokens based on how you installed Rancher.
**Rancher Installed with Docker**
Save the following script as `apply_tokens.sh` to the machine where the Rancher docker container is running. Also copy the `tokens.json` file created previously to the same directory as the script.
```
set -e
# Read each cluster entry recorded in tokens.json and patch its serviceAccountToken back into Rancher.
# $1 is the name of the Rancher Docker container.
tokens=$(jq .[] -c tokens.json)
for token in $tokens; do
  name=$(echo $token | jq -r .name)
  value=$(echo $token | jq -r .token)
  docker exec $1 kubectl patch --type=merge clusters $name -p "{\"status\": {\"serviceAccountToken\": \"$value\"}}"
done
```
Set the script to allow execution (`chmod +x apply_tokens.sh`) and execute the script as follows:
```
./apply_tokens.sh <DOCKER CONTAINER NAME>
```
After a few moments, the clusters will go from `Unavailable` back to `Available`.
**Rancher Installed on a Kubernetes Cluster**
Save the following script as `apply_tokens.sh` to a machine with kubectl access to the Rancher management plane. Also copy the `tokens.json` file created previously to the same directory as the script.
```
set -e
# Read each cluster entry recorded in tokens.json and patch its serviceAccountToken back into Rancher.
tokens=$(jq .[] -c tokens.json)
for token in $tokens; do
  name=$(echo $token | jq -r .name)
  value=$(echo $token | jq -r .token)
  kubectl patch --type=merge clusters $name -p "{\"status\": {\"serviceAccountToken\": \"$value\"}}"
done
```
Set the script to allow execution (`chmod +x apply_tokens.sh`) and execute the script as follows:
```
./apply_tokens.sh
```
After a few moments the clusters will go from `Unavailable` back to `Available`.
6. Continue using Rancher as normal.
@@ -1,80 +0,0 @@
---
title: The Rancher Command Line Interface
description: The Rancher CLI is a unified tool that you can use to interact with Rancher. With it, you can operate Rancher using a command line interface rather than the GUI
metaTitle: "Using the Rancher Command Line Interface "
metaDescription: "The Rancher CLI is a unified tool that you can use to interact with Rancher. With it, you can operate Rancher using a command line interface rather than the GUI"
weight: 16
---
The Rancher CLI (Command Line Interface) is a unified tool that you can use to interact with Rancher. With this tool, you can operate Rancher using a command line rather than the GUI.
### Download Rancher CLI
The binary can be downloaded directly from the UI. The link can be found on the right-hand side of the footer in the UI. We have binaries for Windows, Mac, and Linux. You can also check the [releases page for our CLI](https://github.com/rancher/cli/releases) for direct downloads of the binary.
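For example, a minimal sketch of installing the Linux binary from the releases page; the archive and directory naming below are assumptions, so check the releases page for the exact file name for your platform and version:
```bash
# Download and extract the Rancher CLI release archive (file and directory names are assumptions)
curl -LO "https://github.com/rancher/cli/releases/download/<CLI_VERSION>/rancher-linux-amd64-<CLI_VERSION>.tar.gz"
tar xzf "rancher-linux-amd64-<CLI_VERSION>.tar.gz"
# Move the extracted binary onto your PATH
sudo mv "rancher-<CLI_VERSION>/rancher" /usr/local/bin/rancher
rancher --version
```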
### Requirements
After you download the Rancher CLI, you need to make a few configurations. Rancher CLI requires:
- Your [Rancher Server URL]({{<baseurl>}}/rancher/v2.x/en/admin-settings/server-url), which is used to connect to Rancher Server.
- An API Bearer Token, which is used to authenticate with Rancher. For more information about obtaining a Bearer Token, see [Creating an API Key]({{<baseurl>}}/rancher/v2.x/en/user-settings/api-keys/).
### CLI Authentication
Before you can use Rancher CLI to control your Rancher Server, you must authenticate using an API Bearer Token. Log in using the following command (replace `<BEARER_TOKEN>` and `<SERVER_URL>` with your information):
```bash
$ ./rancher login https://<SERVER_URL> --token <BEARER_TOKEN>
```
If Rancher Server uses a self-signed certificate, Rancher CLI prompts you to continue with the connection.
### Project Selection
Before you can perform any commands, you must select a Rancher project to perform those commands against. To select a [project]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/) to work on, use the command `./rancher context switch`. When you enter this command, a list of available projects displays. Enter a number to choose your project.
**Example: `./rancher context switch` Output**
```
User:rancher-cli-directory user$ ./rancher context switch
NUMBER CLUSTER NAME PROJECT ID PROJECT NAME
1 cluster-2 c-7q96s:p-h4tmb project-2
2 cluster-2 c-7q96s:project-j6z6d Default
3 cluster-1 c-lchzv:p-xbpdt project-1
4 cluster-1 c-lchzv:project-s2mch Default
Select a Project:
```
After you enter a number, the console displays a message that you've changed projects.
```
INFO[0005] Setting new context to project project-1
INFO[0005] Saving config to /Users/markbishop/.rancher/cli2.json
```
### Commands
The following commands are available for use in Rancher CLI.
| Command | Result |
|---|---|
| `apps, [app]` | Performs operations on catalog applications (i.e. individual [Helm charts](https://docs.helm.sh/developing_charts/) or [Rancher charts]({{<baseurl>}}/rancher/v2.x/en/catalog/custom/#chart-directory-structure)). |
| `catalog` | Performs operations on [catalogs]({{<baseurl>}}/rancher/v2.x/en/catalog/). |
| `clusters, [cluster]` | Performs operations on your [clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/). |
| `context` | Switches between Rancher [projects]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). For an example, see [Project Selection](#project-selection). |
| `inspect [OPTIONS] [RESOURCEID RESOURCENAME]` | Displays details about [Kubernetes resources](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#resource-types) or Rancher resources (i.e.: [projects]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/) and [workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/)). Specify resources by name or ID. |
| `kubectl` |Runs [kubectl commands](https://kubernetes.io/docs/reference/kubectl/overview/#operations). |
| `login, [l]` | Logs into a Rancher Server. For an example, see [CLI Authentication](#cli-authentication). |
| `namespaces, [namespace]` |Performs operations on [namespaces]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces). |
| `nodes, [node]` |Performs operations on [nodes]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/#kubernetes). |
| `projects, [project]` | Performs operations on [projects]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). |
| `ps` | Displays [workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads) in a project. |
| `settings, [setting]` | Shows the current settings for your Rancher Server. |
| `ssh` | Connects to one of your cluster nodes using the SSH protocol. |
| `help, [h]` | Shows a list of commands or help for one command. |
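For instance, once you have logged in and selected a project, these commands can be combined in everyday workflows; the examples below are illustrative:
```bash
# List the workloads in the currently selected project
./rancher ps

# List the clusters this account can access
./rancher clusters

# Run a kubectl command against the current cluster through Rancher
./rancher kubectl get nodes
```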
### Rancher CLI Help
Once logged into Rancher Server using the CLI, enter `./rancher --help` for a list of commands.
All commands accept the `--help` flag, which documents each command's usage.
@@ -1,8 +0,0 @@
---
title: Backups - Etcd snapshot
weight: 1
---
The Rancher Kubernetes cluster can be restored from an etcd snapshot.
If Rancher was installed on another type of Kubernetes, refer to the official documentation of the Kubernetes distribution for more information about backing up the cluster.
@@ -1,8 +0,0 @@
---
title: Disaster Recovery - Etcd Restore Snapshot
weight: 1
---
The Rancher Kubernetes cluster can be restored from an etcd snapshot.
If Rancher was installed on another type of Kubernetes, refer to the official documentation of the Kubernetes distribution for more information about backing up the cluster.
@@ -1,16 +0,0 @@
---
title: Rancher Kubernetes
weight: 1
---
> This page is under construction.
The Rancher CLI comes with a Kubernetes distribution called Rancher Kubernetes, which allows you to set up a Kubernetes cluster more easily as a prerequisite to installing Rancher.
Rancher Kubernetes is based on K3s, and has more secure default settings. It is a new feature in Rancher 2.5.
Rancher Kubernetes clusters can also be imported into Rancher.
Rancher Kubernetes is not to be confused with RKE Kubernetes or K3s Kubernetes, which are separate Kubernetes distributions provided by Rancher. RKE is the oldest of the three distributions. When the Enterprise Cluster Manager is enabled, Rancher can provision RKE Kubernetes clusters, but Rancher Kubernetes clusters and K3s Kubernetes clusters have to be installed separately and imported into Rancher.
In other words, Rancher can only install Rancher Kubernetes when you are using the Rancher CLI to set up a local Kubernetes cluster for the Rancher server.
@@ -1,113 +0,0 @@
---
title: Cluster Explorer
weight: 5
---
> This section is under construction.
After you provision a cluster in Rancher, you can begin using powerful Kubernetes features to deploy and scale your containerized applications in development, testing, or production environments.
This page covers the following topics:
- [Switching between clusters](#switching-between-clusters)
- [Managing clusters in Rancher](#managing-clusters-in-rancher)
- [Configuring tools](#configuring-tools)
> This section assumes a basic familiarity with Docker and Kubernetes. For a brief explanation of how Kubernetes components work together, refer to the [concepts]({{<baseurl>}}/rancher/v2.x/en/overview/concepts) page.
## Switching between Clusters
To switch between clusters, use the drop-down available in the navigation bar.
Alternatively, you can switch between projects and clusters directly in the navigation bar. Open the **Global** view and select **Clusters** from the main menu. Then select the name of the cluster you want to open.
## Managing Clusters in Rancher
After clusters have been [provisioned into Rancher]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/), [cluster owners]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) will need to manage these clusters. There are many different options for managing your cluster.
{{% include file="/rancher/v2.x/en/cluster-provisioning/cluster-capabilities-table" %}}
## Configuring Tools
Rancher contains a variety of tools that aren't included in Kubernetes to assist in your DevOps operations. Rancher can integrate with external services to help your clusters run more efficiently. Tools are divided into the following categories:
- Alerts
- Notifiers
- Logging
- Monitoring
- Istio Service Mesh
- OPA Gatekeeper
For more information, see [Tools]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/).
When your project is set up, [project members]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can start managing their applications and all the components that comprise them.
## Workloads
Deploy applications to your cluster nodes using [workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/), which are objects that contain pods that run your apps, along with metadata that sets rules for the deployment's behavior. Workloads can be deployed within the scope of the entire cluster or within a namespace.
When deploying a workload, you can deploy from any image. There are a variety of [workload types]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/#workload-types) to choose from which determine how your application should run.
Following a workload deployment, you can continue working with it. You can:
- [Upgrade]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads) the workload to a newer version of the application it's running.
- [Roll back]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/rollback-workloads) a workload to a previous version, if an issue occurs during upgrade.
- [Add a sidecar]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/add-a-sidecar), which is a workload that supports a primary workload.
## Load Balancing and Ingress
### Load Balancers
After you launch an application, it's only available within the cluster. It can't be reached externally.
If you want your applications to be externally accessible, you must add a load balancer to your cluster. Load balancers create a gateway for external connections to access your cluster, provided that the user knows the load balancer's IP address and the application's port number.
Rancher supports two types of load balancers:
- [Layer-4 Load Balancers]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#layer-4-load-balancer)
- [Layer-7 Load Balancers]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#layer-7-load-balancer)
For more information, see [load balancers]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers).
#### Ingress
Load balancers can only handle one IP address per service, which means if you run multiple services in your cluster, you must have a load balancer for each service. Running multiple load balancers can be expensive. You can get around this issue by using an ingress.
An ingress is a set of rules that act as a load balancer. Ingress works in conjunction with one or more ingress controllers to dynamically route service requests. When the ingress receives a request, the ingress controller(s) in your cluster program the load balancer to direct the request to the correct service based on service subdomains or path rules that you've configured.
For more information, see [Ingress]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress).
When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a Global DNS entry.
For more information, see [Global DNS]({{<baseurl>}}/rancher/v2.x/en/catalog/globaldns/).
## Service Discovery
After you expose your cluster to external requests using a load balancer and/or ingress, it's only available by IP address. To create a resolvable hostname, you must create a service record, which is a record that maps an IP address, external hostname, DNS record alias, workload(s), or labeled pods to a specific hostname.
For more information, see [Service Discovery]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/service-discovery).
## Pipelines
After your project has been [configured to a version control provider]({{<baseurl>}}/rancher/v2.x/en/project-admin/pipelines/#version-control-providers), you can add the repositories and start configuring a pipeline for each repository.
For more information, see [Pipelines]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/).
## Applications
Besides launching individual components of an application, you can use the Rancher catalog to start launching applications, which are Helm charts.
For more information, see [Applications in a Project]({{<baseurl>}}/rancher/v2.x/en/catalog/apps/).
## Kubernetes Resources
Within the context of a Rancher project or namespace, _resources_ are files and data that support operation of your pods. Within Rancher, certificates, registries, and secrets are all considered resources. However, Kubernetes classifies resources as different types of [secrets](https://kubernetes.io/docs/concepts/configuration/secret/). Therefore, within a single project or namespace, individual resources must have unique names to avoid conflicts. Although resources are primarily used to carry sensitive information, they have other uses as well.
Resources include:
- [Certificates]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/certificates/): Files used to encrypt/decrypt data entering or leaving the cluster.
- [ConfigMaps]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/configmaps/): Files that store general configuration information, such as a group of config files.
- [Secrets]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/secrets/): Files that store sensitive data like passwords, tokens, or keys.
- [Registries]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/registries/): Files that carry credentials used to authenticate with private registries.
@@ -1,37 +0,0 @@
---
title: Certificate Rotation
weight: 2
---
> **Warning:** Rotating Kubernetes certificates may result in your cluster being temporarily unavailable as components are restarted. For production environments, it's recommended to perform this action during a maintenance window.
By default, Kubernetes clusters require certificates, and Rancher-launched Kubernetes clusters automatically generate certificates for the Kubernetes components. It is important to rotate these certificates before they expire, as well as if a certificate is compromised. After the certificates are rotated, the Kubernetes components are automatically restarted.
Certificates can be rotated for the following services:
- etcd
- kubelet
- kube-apiserver
- kube-proxy
- kube-scheduler
- kube-controller-manager
### Certificate Rotation
Rancher launched Kubernetes clusters have the ability to rotate the auto-generated certificates through the UI.
1. In the **Global** view, navigate to the cluster for which you want to rotate certificates.
2. Select **&#8942; > Rotate Certificates**.
3. Select the certificates that you want to rotate:
* Rotate all Service certificates (keep the same CA)
* Rotate an individual service and choose one of the services from the drop-down menu
4. Click **Save**.
**Results:** The selected certificates will be rotated and the related services will be restarted to start using the new certificate.
> **Note:** Even though the RKE CLI can use custom certificates for the Kubernetes cluster components, Rancher currently doesn't provide the ability to upload these for Rancher-launched Kubernetes clusters.
@@ -1,43 +0,0 @@
---
title: Encrypting HTTP Communication
description: Learn how to add an SSL (Secure Sockets Layer) certificate or TLS (Transport Layer Security) certificate to either a project, a namespace, or both, so that you can add it to deployments
weight: 1
---
When you create an ingress within Rancher/Kubernetes, you must provide it with a secret that includes a TLS private key and certificate, which are used to encrypt and decrypt communications that come through the ingress. You can make certificates available for ingress use by navigating to its project or namespace, and then uploading the certificate. You can then add the certificate to the ingress deployment.
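Under the hood, the certificate is stored as a Kubernetes TLS secret. As a minimal sketch outside of the Rancher UI, assuming placeholder file, secret, and namespace names, the equivalent object could be created with kubectl:
```bash
# Create a TLS secret from a certificate and private key (placeholder names)
kubectl -n <NAMESPACE> create secret tls <CERTIFICATE_NAME> \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key
```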
Add SSL certificates to either projects, namespaces, or both. A project scoped certificate will be available in all its namespaces.
>**Prerequisites:** You must have a TLS private key and certificate available to upload.
1. From the **Global** view, select the project where you want to deploy your ingress.
1. From the main menu, select **Resources > Secrets > Certificates**. Click **Add Certificate**. (For Rancher prior to v2.3, click **Resources > Certificates.**)
1. Enter a **Name** for the certificate.
>**Note:** Kubernetes classifies SSL certificates as [secrets](https://kubernetes.io/docs/concepts/configuration/secret/), and no two secrets in a project or namespace can have duplicate names. Therefore, to prevent conflicts, your SSL certificate must have a unique name among the other certificates, registries, and secrets within your project/workspace.
1. Select the **Scope** of the certificate.
- **Available to all namespaces in this project:** The certificate is available for any deployment in any namespaces in the project.
- **Available to a single namespace:** The certificate is only available for the deployments in one [namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces). If you choose this option, select a **Namespace** from the drop-down list or click **Add to a new namespace** to add the certificate to a namespace you create on the fly.
1. From **Private Key**, either copy and paste your certificate's private key into the text box (include the header and footer), or click **Read from a file** to browse to the private key on your file system. If possible, we recommend using **Read from a file** to reduce the likelihood of error.
Private key files end with an extension of `.key`.
1. From **Certificate**, either copy and paste your certificate into the text box (include the header and footer), or click **Read from a file** to browse to the certificate on your file system. If possible, we recommend using **Read from a file** to reduce the likelihood of error.
Certificate files end with an extension of `.crt`.
**Result:** Your certificate is added to the project or namespace. You can now add it to deployments.
- If you added an SSL certificate to the project, the certificate is available for deployments created in any project namespace.
- If you added an SSL certificate to a namespace, the certificate is available only for deployments in that namespace.
- Your certificate is added to the **Resources > Secrets > Certificates** view. (For Rancher prior to v2.3, it is added to **Resources > Certificates.**)
## What's Next?
Now you can add the certificate when launching an ingress within the current project or namespace. For more information, see [Adding Ingress]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/).
@@ -1,25 +0,0 @@
---
title: Cluster Autoscaler
weight: 1
---
In this section, you'll learn how to install and use the [Kubernetes cluster-autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/) on Rancher custom clusters using AWS EC2 Auto Scaling Groups.
The cluster autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:
* There are pods that failed to run in the cluster due to insufficient resources.
* There are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.
To prevent your pod from being evicted, set a `priorityClassName: system-cluster-critical` property on your pod spec.
Cluster Autoscaler is designed to run on Kubernetes master nodes. It can run in the `kube-system` namespace. Cluster Autoscaler doesn't scale down nodes with non-mirrored `kube-system` pods running on them.
It's possible to run a customized deployment of Cluster Autoscaler on worker nodes, but extra care needs to be taken to ensure that Cluster Autoscaler remains up and running.
# Cloud Providers
Cluster Autoscaler provides support for distinct cloud providers. For more information, go to [cluster-autoscaler supported cloud providers.](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#deployment)
### Setting up Cluster Autoscaler on Amazon Cloud Provider
For details on running the cluster autoscaler on Amazon cloud provider, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/cluster-autoscaler/amazon)
@@ -1,580 +0,0 @@
---
title: Cluster Autoscaler with AWS EC2 Auto Scaling Groups
weight: 1
---
This guide will show you how to install and use [Kubernetes cluster-autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/) on Rancher custom clusters using AWS EC2 Auto Scaling Groups.
We are going to install a Rancher RKE custom cluster with a fixed number of nodes with the etcd and controlplane roles, and a variable number of nodes with the worker role, managed by `cluster-autoscaler`.
- [Prerequisites](#prerequisites)
- [1. Create a Custom Cluster](#1-create-a-custom-cluster)
- [2. Configure the Cloud Provider](#2-configure-the-cloud-provider)
- [3. Deploy Nodes](#3-deploy-nodes)
- [4. Install cluster-autoscaler](#4-install-cluster-autoscaler)
- [Parameters](#parameters)
- [Deployment](#deployment)
- [Testing](#testing)
- [Generating Load](#generating-load)
- [Checking Scale](#checking-scale)
# Prerequisites
These elements are required to follow this guide:
* The Rancher server is up and running
* You have an AWS EC2 user with proper permissions to create virtual machines, auto scaling groups, and IAM profiles and roles
### 1. Create a Custom Cluster
On the Rancher server, we should create a custom k8s cluster v1.18.x. Be sure that the cloud_provider name is set to `amazonec2`. Once the cluster is created, we need to get:
* clusterID: `c-xxxxx` will be used in the EC2 `kubernetes.io/cluster/<clusterID>` instance tag
* clusterName: will be used in the EC2 `k8s.io/cluster-autoscaler/<clusterName>` instance tag
* nodeCommand: will be added to the EC2 instance user_data to join new nodes to the cluster
```sh
sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:<RANCHER_VERSION> --server https://<RANCHER_URL> --token <RANCHER_TOKEN> --ca-checksum <RANCHER_CHECKSUM> <roles>
```
### 2. Configure the Cloud Provider
On AWS EC2, we should create a few objects to configure our system. We've defined three distinct groups and IAM profiles to configure on AWS.
1. Autoscaling group: Nodes that will be part of the EC2 Auto Scaling Group (ASG). The ASG will be used by `cluster-autoscaler` to scale up and down.
* IAM profile: Required by the Kubernetes nodes where cluster-autoscaler will be running. Running it on the Kubernetes master nodes is recommended. This profile is called `K8sAutoscalerProfile`.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup",
"autoscaling:DescribeTags",
"autoscaling:DescribeLaunchConfigurations",
"ec2:DescribeLaunchTemplateVersions"
],
"Resource": [
"*"
]
}
]
}
```
2. Master group: Nodes that will be part of the Kubernetes etcd and/or control planes. This group will be outside of the ASG.
* IAM profile: Required by the Kubernetes cloud_provider integration. Optionally, `AWS_ACCESS_KEY` and `AWS_SECRET_KEY` can be used instead; see [using-aws-credentials.](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#using-aws-credentials) This profile is called `K8sMasterProfile`.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeTags",
"ec2:DescribeInstances",
"ec2:DescribeRegions",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSubnets",
"ec2:DescribeVolumes",
"ec2:CreateSecurityGroup",
"ec2:CreateTags",
"ec2:CreateVolume",
"ec2:ModifyInstanceAttribute",
"ec2:ModifyVolume",
"ec2:AttachVolume",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CreateRoute",
"ec2:DeleteRoute",
"ec2:DeleteSecurityGroup",
"ec2:DeleteVolume",
"ec2:DetachVolume",
"ec2:RevokeSecurityGroupIngress",
"ec2:DescribeVpcs",
"elasticloadbalancing:AddTags",
"elasticloadbalancing:AttachLoadBalancerToSubnets",
"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:CreateLoadBalancerPolicy",
"elasticloadbalancing:CreateLoadBalancerListeners",
"elasticloadbalancing:ConfigureHealthCheck",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:DeleteLoadBalancerListeners",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"elasticloadbalancing:DetachLoadBalancerFromSubnets",
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer",
"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
"elasticloadbalancing:AddTags",
"elasticloadbalancing:CreateListener",
"elasticloadbalancing:CreateTargetGroup",
"elasticloadbalancing:DeleteListener",
"elasticloadbalancing:DeleteTargetGroup",
"elasticloadbalancing:DescribeListeners",
"elasticloadbalancing:DescribeLoadBalancerPolicies",
"elasticloadbalancing:DescribeTargetGroups",
"elasticloadbalancing:DescribeTargetHealth",
"elasticloadbalancing:ModifyListener",
"elasticloadbalancing:ModifyTargetGroup",
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
"iam:CreateServiceLinkedRole",
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:BatchGetImage",
"kms:DescribeKey"
],
"Resource": [
"*"
]
}
]
}
```
* IAM role: `K8sMasterRole: [K8sMasterProfile,K8sAutoscalerProfile]`
* Security group: `K8sMasterSg`. More info at [RKE ports (custom nodes tab)]({{<baseurl>}}/rancher/v2.x/en/installation/requirements/ports/#downstream-kubernetes-cluster-nodes)
* Tags:
`kubernetes.io/cluster/<clusterID>: owned`
* User data: `K8sMasterUserData` for Ubuntu 18.04 (ami-0e11cbb34015ff725); installs Docker and adds an etcd+controlplane node to the k8s cluster
```sh
#!/bin/bash -x
cat <<EOF > /etc/sysctl.d/90-kubelet.conf
vm.overcommit_memory = 1
vm.panic_on_oom = 0
kernel.panic = 10
kernel.panic_on_oops = 1
kernel.keys.root_maxkeys = 1000000
kernel.keys.root_maxbytes = 25000000
EOF
sysctl -p /etc/sysctl.d/90-kubelet.conf
curl -sL https://releases.rancher.com/install-docker/19.03.sh | sh
sudo usermod -aG docker ubuntu
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
PRIVATE_IP=$(curl -H "X-aws-ec2-metadata-token: ${TOKEN}" -s http://169.254.169.254/latest/meta-data/local-ipv4)
PUBLIC_IP=$(curl -H "X-aws-ec2-metadata-token: ${TOKEN}" -s http://169.254.169.254/latest/meta-data/public-ipv4)
K8S_ROLES="--etcd --controlplane"
sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:<RANCHER_VERSION> --server https://<RANCHER_URL> --token <RANCHER_TOKEN> --ca-checksum <RANCHER_CA_CHECKSUM> --address ${PUBLIC_IP} --internal-address ${PRIVATE_IP} ${K8S_ROLES}
```
3. Worker group: Nodes that will be part of the k8s worker plane. Worker nodes will be scaled by cluster-autoscaler using the ASG.
* IAM profile: Provides cloud_provider worker integration.
This profile is called `K8sWorkerProfile`.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:DescribeRegions",
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:BatchGetImage"
],
"Resource": "*"
}
]
}
```
* IAM role: `K8sWorkerRole: [K8sWorkerProfile]`
* Security group: `K8sWorkerSg` More info at [RKE ports (custom nodes tab)]({{<baseurl>}}/rancher/v2.x/en/installation/requirements/ports/#downstream-kubernetes-cluster-nodes)
* Tags:
* `kubernetes.io/cluster/<clusterID>: owned`
* `k8s.io/cluster-autoscaler/<clusterName>: true`
* `k8s.io/cluster-autoscaler/enabled: true`
* User data: `K8sWorkerUserData` for Ubuntu 18.04 (ami-0e11cbb34015ff725); installs Docker and adds a worker node to the k8s cluster
```sh
#!/bin/bash -x
cat <<EOF > /etc/sysctl.d/90-kubelet.conf
vm.overcommit_memory = 1
vm.panic_on_oom = 0
kernel.panic = 10
kernel.panic_on_oops = 1
kernel.keys.root_maxkeys = 1000000
kernel.keys.root_maxbytes = 25000000
EOF
sysctl -p /etc/sysctl.d/90-kubelet.conf
curl -sL https://releases.rancher.com/install-docker/19.03.sh | sh
sudo usermod -aG docker ubuntu
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
PRIVATE_IP=$(curl -H "X-aws-ec2-metadata-token: ${TOKEN}" -s http://169.254.169.254/latest/meta-data/local-ipv4)
PUBLIC_IP=$(curl -H "X-aws-ec2-metadata-token: ${TOKEN}" -s http://169.254.169.254/latest/meta-data/public-ipv4)
K8S_ROLES="--worker"
sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:<RANCHER_VERSION> --server https://<RANCHER_URL> --token <RANCHER_TOKEN> --ca-checksum <RANCHER_CA_CHECKSUM> --address ${PUBLIC_IP} --internal-address ${PRIVATE_IP} ${K8S_ROLES}
```
More info is at [RKE clusters on AWS]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/amazon/) and [Cluster Autoscaler on AWS.](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md)
### 3. Deploy Nodes
Once we've configured AWS, let's create VMs to bootstrap our cluster:
* master (etcd+controlplane): Depending on your needs, deploy three master instances of the proper size. More info is at [the recommendations for production-ready clusters.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/production/)
* IAM role: `K8sMasterRole`
* Security group: `K8sMasterSg`
* Tags:
* `kubernetes.io/cluster/<clusterID>: owned`
* User data: `K8sMasterUserData`
* worker: Define an ASG on EC2 with the following settings:
* Name: `K8sWorkerAsg`
* IAM role: `K8sWorkerRole`
* Security group: `K8sWorkerSg`
* Tags:
* `kubernetes.io/cluster/<clusterID>: owned`
* `k8s.io/cluster-autoscaler/<clusterName>: true`
* `k8s.io/cluster-autoscaler/enabled: true`
* User data: `K8sWorkerUserData`
* Instances:
* minimum: 2
* desired: 2
* maximum: 10
Once the VMs are deployed, you should have a Rancher custom cluster up and running with three master and two worker nodes.
### 4. Install Cluster-autoscaler
At this point, we should have a Rancher cluster up and running. We are going to install cluster-autoscaler on the master nodes in the `kube-system` namespace, following the cluster-autoscaler recommendation.
#### Parameters
This table shows cluster-autoscaler parameters for fine-tuning:
| Parameter | Default | Description |
|---|---|---|
|cluster-name|-|Autoscaled cluster name, if available|
|address|:8085|The address to expose Prometheus metrics|
|kubernetes|-|Kubernetes master location. Leave blank for default|
|kubeconfig|-|Path to kubeconfig file with authorization and master location information|
|cloud-config|-|The path to the cloud provider configuration file. Empty string for no configuration file|
|namespace|"kube-system"|Namespace in which cluster-autoscaler run|
|scale-down-enabled|true|Should CA scale down the cluster|
|scale-down-delay-after-add|"10m"|How long after scale up that scale down evaluation resumes|
|scale-down-delay-after-delete|0|How long after node deletion that scale down evaluation resumes, defaults to scanInterval|
|scale-down-delay-after-failure|"3m"|How long after scale down failure that scale down evaluation resumes|
|scale-down-unneeded-time|"10m"|How long a node should be unneeded before it is eligible for scale down|
|scale-down-unready-time|"20m"|How long an unready node should be unneeded before it is eligible for scale down|
|scale-down-utilization-threshold|0.5|Sum of cpu or memory of all pods running on the node divided by node's corresponding allocatable resource, below which a node can be considered for scale down|
|scale-down-gpu-utilization-threshold|0.5|Sum of gpu requests of all pods running on the node divided by node's allocatable resource, below which a node can be considered for scale down|
|scale-down-non-empty-candidates-count|30|Maximum number of non empty nodes considered in one iteration as candidates for scale down with drain|
|scale-down-candidates-pool-ratio|0.1|A ratio of nodes that are considered as additional non empty candidates for scale down when some candidates from previous iteration are no longer valid|
|scale-down-candidates-pool-min-count|50|Minimum number of nodes that are considered as additional non empty candidates for scale down when some candidates from previous iteration are no longer valid|
|node-deletion-delay-timeout|"2m"|Maximum time CA waits for removing delay-deletion.cluster-autoscaler.kubernetes.io/ annotations before deleting the node|
|scan-interval|"10s"|How often cluster is reevaluated for scale up or down|
|max-nodes-total|0|Maximum number of nodes in all node groups. Cluster autoscaler will not grow the cluster beyond this number|
|cores-total|"0:320000"|Minimum and maximum number of cores in cluster, in the format <min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers|
|memory-total|"0:6400000"|Minimum and maximum number of gigabytes of memory in cluster, in the format <min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers|
cloud-provider|-|Cloud provider type|
|max-bulk-soft-taint-count|10|Maximum number of nodes that can be tainted/untainted PreferNoSchedule at the same time. Set to 0 to turn off such tainting|
|max-bulk-soft-taint-time|"3s"|Maximum duration of tainting/untainting nodes as PreferNoSchedule at the same time|
|max-empty-bulk-delete|10|Maximum number of empty nodes that can be deleted at the same time|
|max-graceful-termination-sec|600|Maximum number of seconds CA waits for pod termination when trying to scale down a node|
|max-total-unready-percentage|45|Maximum percentage of unready nodes in the cluster. After this is exceeded, CA halts operations|
|ok-total-unready-count|3|Number of allowed unready nodes, irrespective of max-total-unready-percentage|
|scale-up-from-zero|true|Should CA scale up when there are 0 ready nodes|
|max-node-provision-time|"15m"|Maximum time CA waits for node to be provisioned|
|nodes|-|Sets min, max size and other configuration data for a node group in a format accepted by the cloud provider. Can be used multiple times. Format: <min>:<max>:<other...>|
|node-group-auto-discovery|-|One or more definition(s) of node group auto-discovery. A definition is expressed `<name of discoverer>:[<key>[=<value>]]`|
|estimator|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]|
|expander|"random"|Type of node group expander to be used in scale up. Available values: `["random","most-pods","least-waste","price","priority"]`|
|ignore-daemonsets-utilization|false|Should CA ignore DaemonSet pods when calculating resource utilization for scaling down|
|ignore-mirror-pods-utilization|false|Should CA ignore Mirror pods when calculating resource utilization for scaling down|
|write-status-configmap|true|Should CA write status information to a configmap|
|max-inactivity|"10m"|Maximum time from last recorded autoscaler activity before automatic restart|
|max-failing-time|"15m"|Maximum time from last recorded successful autoscaler run before automatic restart|
|balance-similar-node-groups|false|Detect similar node groups and balance the number of nodes between them|
|node-autoprovisioning-enabled|false|Should CA autoprovision node groups when needed|
|max-autoprovisioned-node-group-count|15|The maximum number of autoprovisioned groups in the cluster|
|unremovable-node-recheck-timeout|"5m"|The timeout before we check again a node that couldn't be removed before|
|expendable-pods-priority-cutoff|-10|Pods with priority below cutoff will be expendable. They can be killed without any consideration during scale down and they don't cause scale up. Pods with null priority (PodPriority disabled) are non expendable|
|regional|false|Cluster is regional|
|new-pod-scale-up-delay|"0s"|Pods less than this old will not be considered for scale-up|
|ignore-taint|-|Specifies a taint to ignore in node templates when considering to scale a node group|
|balancing-ignore-label|-|Specifies a label to ignore in addition to the basic and cloud-provider set of labels when comparing if two node groups are similar|
|aws-use-static-instance-list|false|Should CA fetch instance types in runtime or use a static list. AWS only|
|profiling|false|Is debug/pprof endpoint enabled|
#### Deployment
Based on the [cluster-autoscaler-run-on-master.yaml](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-run-on-master.yaml) example, we've created our own `cluster-autoscaler-deployment.yaml` to use the preferred [auto-discovery setup](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws#auto-discovery-setup), updating the tolerations, nodeSelector, image version, and command configuration:
```yml
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
name: cluster-autoscaler
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-autoscaler
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
rules:
- apiGroups: [""]
resources: ["events", "endpoints"]
verbs: ["create", "patch"]
- apiGroups: [""]
resources: ["pods/eviction"]
verbs: ["create"]
- apiGroups: [""]
resources: ["pods/status"]
verbs: ["update"]
- apiGroups: [""]
resources: ["endpoints"]
resourceNames: ["cluster-autoscaler"]
verbs: ["get", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["watch", "list", "get", "update"]
- apiGroups: [""]
resources:
- "pods"
- "services"
- "replicationcontrollers"
- "persistentvolumeclaims"
- "persistentvolumes"
verbs: ["watch", "list", "get"]
- apiGroups: ["extensions"]
resources: ["replicasets", "daemonsets"]
verbs: ["watch", "list", "get"]
- apiGroups: ["policy"]
resources: ["poddisruptionbudgets"]
verbs: ["watch", "list"]
- apiGroups: ["apps"]
resources: ["statefulsets", "replicasets", "daemonsets"]
verbs: ["watch", "list", "get"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses", "csinodes"]
verbs: ["watch", "list", "get"]
- apiGroups: ["batch", "extensions"]
resources: ["jobs"]
verbs: ["get", "list", "watch", "patch"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["create"]
- apiGroups: ["coordination.k8s.io"]
resourceNames: ["cluster-autoscaler"]
resources: ["leases"]
verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: cluster-autoscaler
namespace: kube-system
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create","list","watch"]
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"]
verbs: ["delete", "get", "update", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cluster-autoscaler
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-autoscaler
subjects:
- kind: ServiceAccount
name: cluster-autoscaler
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: cluster-autoscaler
namespace: kube-system
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: cluster-autoscaler
subjects:
- kind: ServiceAccount
name: cluster-autoscaler
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cluster-autoscaler
namespace: kube-system
labels:
app: cluster-autoscaler
spec:
replicas: 1
selector:
matchLabels:
app: cluster-autoscaler
template:
metadata:
labels:
app: cluster-autoscaler
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '8085'
spec:
serviceAccountName: cluster-autoscaler
tolerations:
- effect: NoSchedule
operator: "Equal"
value: "true"
key: node-role.kubernetes.io/controlplane
nodeSelector:
node-role.kubernetes.io/controlplane: "true"
containers:
- image: eu.gcr.io/k8s-artifacts-prod/autoscaling/cluster-autoscaler:v1.18.1
name: cluster-autoscaler
resources:
limits:
cpu: 100m
memory: 300Mi
requests:
cpu: 100m
memory: 300Mi
command:
- ./cluster-autoscaler
- --v=4
- --stderrthreshold=info
- --cloud-provider=aws
- --skip-nodes-with-local-storage=false
- --expander=least-waste
- --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<clusterName>
volumeMounts:
- name: ssl-certs
mountPath: /etc/ssl/certs/ca-certificates.crt
readOnly: true
imagePullPolicy: "Always"
volumes:
- name: ssl-certs
hostPath:
path: "/etc/ssl/certs/ca-certificates.crt"
```
Once the manifest file is prepared, deploy it in the Kubernetes cluster (the Rancher UI can be used instead):
```sh
kubectl -n kube-system apply -f cluster-autoscaler-deployment.yaml
```
**Note:** The cluster-autoscaler deployment can also be set up using [manual configuration](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws#manual-configuration).
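For reference, a manual configuration would replace the `--node-group-auto-discovery` flag in the container command with one or more `--nodes` flags. A minimal sketch, assuming the ASG created earlier is named `K8sWorkerAsg` and scales between 2 and 10 nodes:
```yaml
# Hypothetical excerpt of the cluster-autoscaler container command
# when using manual node group configuration instead of auto-discovery.
command:
  - ./cluster-autoscaler
  - --v=4
  - --cloud-provider=aws
  - --skip-nodes-with-local-storage=false
  - --expander=least-waste
  # Format: <min>:<max>:<ASG name>; repeat the flag for each additional node group
  - --nodes=2:10:K8sWorkerAsg
```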
# Testing
At this point, we should have cluster-autoscaler up and running in our Rancher custom cluster. Cluster-autoscaler should manage the `K8sWorkerAsg` ASG, scaling it up and down between 2 and 10 nodes when one of the following conditions is true:
* There are pods that failed to run in the cluster due to insufficient resources. In this case, the cluster is scaled up.
* There are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes. In this case, the cluster is scaled down.
### Generating Load
We've prepared a `test-deployment.yaml` to generate load on the Kubernetes cluster and see whether cluster-autoscaler is working properly. The test deployment requests 1000m CPU and 1024Mi memory for each of its three replicas. Adjust the requested resources and/or the replica count to make sure the Kubernetes cluster resources are exhausted:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hello-world
name: hello-world
spec:
replicas: 3
selector:
matchLabels:
app: hello-world
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: hello-world
spec:
containers:
- image: rancher/hello-world
imagePullPolicy: Always
name: hello-world
ports:
- containerPort: 80
protocol: TCP
resources:
limits:
cpu: 1000m
memory: 1024Mi
requests:
cpu: 1000m
memory: 1024Mi
```
Once the test deployment is prepared, deploy it in the Kubernetes cluster's `default` namespace (the Rancher UI can be used instead):
```sh
kubectl -n default apply -f test-deployment.yaml
```
### Checking Scale
Once the Kubernetes cluster resources are exhausted, cluster-autoscaler should scale up the worker nodes so that the pods that failed to schedule can run. It should keep scaling up until all pods are scheduled. You should see the new nodes in the ASG and in the Kubernetes cluster. Check the logs on the cluster-autoscaler pod in the `kube-system` namespace.
Once scale up is verified, check scale down. To do so, reduce the replica count on the test deployment until you release enough Kubernetes cluster resources to trigger a scale down. You should see nodes disappear from the ASG and from the Kubernetes cluster. Check the logs on the cluster-autoscaler pod in the `kube-system` namespace.
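The following commands are one way to watch the scaling activity from a terminal; a minimal sketch, assuming the deployment and test workload names used above:
```sh
# Watch nodes join and leave the Kubernetes cluster
kubectl get nodes -w

# Follow the cluster-autoscaler logs in the kube-system namespace
kubectl -n kube-system logs -f deployment/cluster-autoscaler

# Reduce the test deployment replicas to free resources and trigger scale down
kubectl -n default scale deployment hello-world --replicas=1
```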
@@ -1,42 +0,0 @@
---
title: ConfigMaps
weight: 3
---
While most types of Kubernetes secrets store sensitive information, [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) store general configuration information, such as a group of config files. Because ConfigMaps don't store sensitive information, they can be updated automatically, and therefore don't require their containers to be restarted following an update (unlike most secret types, which require manual updates and a container restart to take effect).
ConfigMaps accept key value pairs in common string formats, like config files or JSON blobs. After you upload a config map, any workload can reference it as either an environment variable or a volume mount.
>**Note:** ConfigMaps can only be applied to namespaces and not projects.
1. From the **Global** view, select the project containing the namespace that you want to add a ConfigMap to.
1. From the main menu, select **Resources > Config Maps**. Click **Add Config Map**.
1. Enter a **Name** for the Config Map.
>**Note:** Kubernetes classifies ConfigMaps as [secrets](https://kubernetes.io/docs/concepts/configuration/secret/), and no two secrets in a project or namespace can have duplicate names. Therefore, to prevent conflicts, your ConfigMaps must have a unique name among the other certificates, registries, and secrets within your workspace.
1. Select the **Namespace** you want to add the Config Map to. You can also add a new namespace on the fly by clicking **Add to a new namespace**.
1. From **Config Map Values**, click **Add Config Map Value** to add a key-value pair to your ConfigMap. Add as many values as you need.
1. Click **Save**.
>**Note:** Don't use ConfigMaps to store sensitive data; [use a secret]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/secrets/) instead.
>
>**Tip:** You can add multiple key value pairs to the ConfigMap by copying and pasting.
>
> {{< img "/img/rancher/bulk-key-values.gif" "Bulk Key Value Pair Copy/Paste">}}
**Result:** Your ConfigMap is added to the namespace. You can view it in the Rancher UI from the **Resources > Config Maps** view.
## What's Next?
Now that you have a ConfigMap added to a namespace, you can add it to a workload that you deploy from the namespace of origin. You can use the ConfigMap to specify information for your application to consume, such as:
- Application environment variables.
- Specifying parameters for a Volume mounted to the workload.
For more information on adding ConfigMaps to a workload, see [Deploying Workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/).
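As a rough illustration, the ConfigMap created above and a workload consuming it might look like the following manifests; the names and values are hypothetical:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config              # hypothetical ConfigMap name
  namespace: mynamespace
data:
  LOG_LEVEL: "info"             # plain key-value pair
  app.properties: |             # an entire config file stored as a value
    greeting=hello
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: mynamespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: rancher/hello-world       # illustrative image
        env:
        - name: LOG_LEVEL                # consumed as an environment variable
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
        volumeMounts:
        - name: config                   # consumed as a volume mount
          mountPath: /etc/app
      volumes:
      - name: config
        configMap:
          name: app-config
```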
@@ -1,61 +0,0 @@
---
title: Set Up Load Balancer and Ingress Controller within Rancher
description: Learn how you can set up load balancers and ingress controllers to redirect service requests within Rancher, and learn about the limitations of load balancers
weight: 1
---
Within Rancher, you can set up load balancers and ingress controllers to redirect service requests.
## Load Balancers
After you launch an application, the app is only available within the cluster. It can't be reached from outside the cluster.
If you want your applications to be externally accessible, you must add a load balancer or ingress to your cluster. Load balancers create a gateway for external connections to access your cluster, provided that the user knows the load balancer's IP address and the application's port number.
Rancher supports two types of load balancers:
- [Layer-4 Load Balancers]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#layer-4-load-balancer)
- [Layer-7 Load Balancers]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#layer-7-load-balancer)
For more information, see [load balancers]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers).
### Load Balancer Limitations
Load Balancers have a couple of limitations you should be aware of:
- Load Balancers can only handle one IP address per service, which means if you run multiple services in your cluster, you must have a load balancer for each service. Running multiple load balancers can be expensive.
- If you want to use a load balancer with a Hosted Kubernetes cluster (i.e., clusters hosted in GKE, EKS, or AKS), the load balancer must be running within that cloud provider's infrastructure. Please review the compatibility tables regarding support for load balancers based on how you've provisioned your clusters:
- [Support for Layer-4 Load Balancing]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#support-for-layer-4-load-balancing)
- [Support for Layer-7 Load Balancing]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#support-for-layer-7-load-balancing)
## Ingress
As mentioned in the limitations above, the disadvantages of using a load balancer are:
- Load Balancers can only handle one IP address per service.
- If you run multiple services in your cluster, you must have a load balancer for each service.
- It can be expensive to have a load balancer for every service.
In contrast, when an ingress is used as the entrypoint into a cluster, the ingress can route traffic to multiple services with greater flexibility. It can map multiple HTTP requests to services without individual IP addresses for each service.
Therefore, it is useful to have an ingress if you want multiple services to be exposed with the same IP address, the same Layer 7 protocol, or the same privileged node-ports: 80 and 443.
Ingress works in conjunction with one or more ingress controllers to dynamically route service requests. When the ingress receives a request, the ingress controller(s) in your cluster direct the request to the correct service based on service subdomains or path rules that you've configured.
Each Kubernetes Ingress resource corresponds roughly to a file in `/etc/nginx/sites-available/` containing a `server{}` configuration block, where requests for specific files and folders are configured.
Your ingress, which creates a port of entry to your cluster similar to a load balancer, can reside within your cluster or externally. Ingress and ingress controllers residing in RKE-launched clusters are powered by [Nginx](https://www.nginx.com/).
Ingress can provide other functionality as well, such as SSL termination, name-based virtual hosting, and more.
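For illustration only, a minimal Ingress resource that routes two paths of one hostname to different services could look like this; the hostname and service names are hypothetical, and the exact `apiVersion` depends on your Kubernetes version:
```yaml
apiVersion: networking.k8s.io/v1beta1   # extensions/v1beta1 on older clusters
kind: Ingress
metadata:
  name: mysite
  namespace: mynamespace
spec:
  rules:
  - host: www.mysite.com
    http:
      paths:
      - path: /contact-us               # path-based rule
        backend:
          serviceName: contact-service  # hypothetical service
          servicePort: 80
      - path: /                         # catch-all for the hostname
        backend:
          serviceName: web-service      # hypothetical service
          servicePort: 80
```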
>**Using Rancher in a High Availability Configuration?**
>
>Refrain from adding an Ingress to the `local` cluster. The Nginx Ingress Controller that Rancher uses acts as a global entry point for _all_ clusters managed by Rancher, including the `local` cluster. Therefore, when users try to access an application, your Rancher connection may drop due to the Nginx configuration being reloaded. We recommend working around this issue by deploying applications only in clusters that you launch using Rancher.
- For more information on how to set up ingress in Rancher, see [Ingress]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress).
- For complete information about ingress and ingress controllers, see the [Kubernetes Ingress Documentation](https://kubernetes.io/docs/concepts/services-networking/ingress/)
- When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a Global DNS entry, see [Global DNS]({{<baseurl>}}/rancher/v2.x/en/catalog/globaldns/).
@@ -1,80 +0,0 @@
---
title: Adding Ingresses to Your Project
description: Ingresses can be added for workloads to provide load balancing, SSL termination and host/path-based routing. Learn how to add Rancher ingress to your project
weight: 3042
---
Ingress can be added for workloads to provide load balancing, SSL termination and host/path based routing. When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a [Global DNS entry]({{<baseurl>}}/rancher/v2.x/en/catalog/globaldns/).
1. From the **Global** view, open the project that you want to add ingress to.
1. Click **Resources** in the main navigation bar. Click the **Load Balancing** tab. (In versions prior to v2.3.0, just click the **Load Balancing** tab.) Then click **Add Ingress**.
1. Enter a **Name** for the ingress.
1. Select an existing **Namespace** from the drop-down list. Alternatively, you can create a new [namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) on the fly by clicking **Add to a new namespace**.
1. Create ingress forwarding **Rules**.
- **Automatically generate a xip.io hostname**
If you choose this option, the ingress routes requests for the hostname to a DNS name that's automatically generated. Rancher uses [xip.io](http://xip.io/) to automatically generate the DNS name. This option is best used for testing, _not_ production environments.
>**Note:** To use this option, you must be able to resolve to `xip.io` addresses.
1. Add a **Target Backend**. By default, a workload is added to the ingress, but you can add more targets by clicking either **Service** or **Workload**.
1. **Optional:** If you want to specify a workload or service when a request is sent to a particular hostname path, add a **Path** for the target. For example, if you want requests for `www.mysite.com/contact-us` to be sent to a different service than `www.mysite.com`, enter `/contact-us` in the **Path** field.
Typically, the first rule that you create does not include a path.
1. Select a workload or service from the **Target** drop-down list for each target you've added.
1. Enter the **Port** number that each target operates on.
- **Specify a hostname to use**
If you use this option, ingress routes requests for a hostname to the service or workload that you specify.
1. Enter the hostname that your ingress will handle request forwarding for. For example, `www.mysite.com`.
1. Add a **Target Backend**. By default, a workload is added to the ingress, but you can add more targets by clicking either **Service** or **Workload**.
1. **Optional:** If you want to specify a workload or service when a request is sent to a particular hostname path, add a **Path** for the target. For example, if you want requests for `www.mysite.com/contact-us` to be sent to a different service than `www.mysite.com`, enter `/contact-us` in the **Path** field.
Typically, the first rule that you create does not include a path.
1. Select a workload or service from the **Target** drop-down list for each target you've added.
1. Enter the **Port** number that each target operates on.
- **Use as the default backend**
Use this option to set an ingress rule for handling requests that don't match any other ingress rules. For example, use this option to route requests that can't be found to a `404` page.
>**Note:** If you deployed Rancher using RKE, a default backend for 404s and 202s is already configured.
1. Add a **Target Backend**. Click either **Service** or **Workload** to add the target.
1. Select a service or workload from the **Target** drop-down list.
1. **Optional:** click **Add Rule** to create additional ingress rules. For example, after you create ingress rules to direct requests for your hostname, you'll likely want to create a default backend to handle 404s.
1. If any of your ingress rules handle requests for encrypted ports, add a certificate to encrypt/decrypt communications.
>**Note:** You must have an SSL certificate that the ingress can use to encrypt/decrypt communications. For more information see [Adding SSL Certificates]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/certificates/).
1. Click **Add Certificate**.
1. Select a **Certificate** from the drop-down list.
1. Enter the **Host** using encrypted communication.
1. To add additional hosts that use the certificate, click **Add Hosts**.
1. **Optional:** Add [Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) and/or [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) to provide metadata for your ingress.
For a list of annotations available for use, see the [Nginx Ingress Controller Documentation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/).
**Result:** Your ingress is added to the project. The ingress begins enforcing your ingress rules.
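For reference, the certificate selection in the steps above corresponds roughly to a `tls` section on the generated Ingress resource; a sketch only, with a hypothetical certificate secret and service:
```yaml
# Hypothetical excerpt of an Ingress spec with SSL termination
spec:
  tls:
  - hosts:
    - www.mysite.com
    secretName: mysite-tls-cert      # certificate added via Rancher (see Adding SSL Certificates)
  rules:
  - host: www.mysite.com
    http:
      paths:
      - backend:
          serviceName: web-service   # hypothetical target service
          servicePort: 80
```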
@@ -1,72 +0,0 @@
---
title: "Layer 4 and Layer 7 Load Balancing"
description: "Kubernetes supports load balancing in two ways: Layer-4 Load Balancing and Layer-7 Load Balancing. Learn about the support for each way in different deployments"
weight: 3041
---
Kubernetes supports load balancing in two ways: Layer-4 Load Balancing and Layer-7 Load Balancing.
## Layer-4 Load Balancer
A Layer-4 load balancer (or external load balancer) forwards traffic to NodePorts. A Layer-4 load balancer allows you to forward both HTTP and TCP traffic.
The Layer-4 load balancer is typically provided by the underlying cloud provider, so it is not supported when you deploy RKE clusters on bare-metal servers or vSphere clusters. However, a single [globally managed config-map](https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/) can be used to expose services on NGINX or a third-party ingress.
> **Note:** It is possible to deploy a cluster with a non-cloud load balancer, such as [MetalLB.](https://metallb.universe.tf/) However, that use case is more advanced than the Layer-4 load balancer supported by a cloud provider, and it is not configurable in Rancher or RKE.
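As a quick illustration, a cloud provider's Layer-4 load balancer is requested with a Service of type `LoadBalancer`; a minimal sketch with hypothetical names and ports:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb              # hypothetical service name
spec:
  type: LoadBalancer           # the cloud provider provisions the external load balancer
  selector:
    app: my-app
  ports:
  - name: http
    port: 80                   # port exposed by the load balancer
    targetPort: 8080           # port the pods listen on
    protocol: TCP
```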
### Support for Layer-4 Load Balancing
Support for layer-4 load balancer varies based on the underlying cloud provider.
Cluster Deployment | Layer-4 Load Balancer Support
----------------------------------------------|--------------------------------
Amazon EKS | Supported by AWS cloud provider
Google GKE | Supported by GCE cloud provider
Azure AKS | Supported by Azure cloud provider
RKE on EC2 | Supported by AWS cloud provider
RKE on DigitalOcean | Limited NGINX or third-party Ingress*
RKE on vSphere | Limited NGINX or third party-Ingress*
RKE on Custom Hosts<br/>(e.g. bare-metal servers) | Limited NGINX or third-party Ingress*
Third-party MetalLB | Limited NGINX or third-party Ingress*
\* Services can be exposed through a single [globally managed config-map.](https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/)
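For reference, that config-map maps external ports to cluster services; a sketch, assuming the NGINX Ingress Controller is configured to watch a `tcp-services` ConfigMap as described in the linked guide:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services           # name and namespace depend on your ingress controller setup
  namespace: ingress-nginx
data:
  # "<external port>": "<namespace>/<service name>:<service port>"
  "9000": "default/example-service:8080"
```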
## Layer-7 Load Balancer
A Layer-7 load balancer (or ingress controller) supports host- and path-based load balancing and SSL termination. A Layer-7 load balancer only forwards HTTP and HTTPS traffic and therefore listens on ports 80 and 443 only. Cloud providers such as Amazon and Google support Layer-7 load balancers. In addition, RKE clusters deploy the Nginx Ingress Controller.
### Support for Layer-7 Load Balancing
Support for layer-7 load balancer varies based on the underlying cloud provider.
Cluster Deployment | Layer-7 Load Balancer Support
----------------------------------------------|--------------------------------
Amazon EKS | Supported by AWS cloud provider
Google GKE | Supported by GKE cloud provider
Azure AKS | Not Supported
RKE on EC2 | Nginx Ingress Controller
RKE on DigitalOcean | Nginx Ingress Controller
RKE on vSphere | Nginx Ingress Controller
RKE on Custom Hosts<br/>(e.g. bare-metal servers) | Nginx Ingress Controller
### Host Names in Layer-7 Load Balancer
Some cloud-managed layer-7 load balancers (such as the ALB ingress controller on AWS) expose DNS addresses for ingress rules. You need to map (via CNAME) your domain name to the DNS address generated by the layer-7 load balancer.
Other layer-7 load balancers, such as the Google Load Balancer or Nginx Ingress Controller, directly expose one or more IP addresses. Google Load Balancer provides a single routable IP address. Nginx Ingress Controller exposes the external IP of all nodes that run the Nginx Ingress Controller. You can do either of the following:
1. Configure your own DNS to map (via A records) your domain name to the IP addresses exposed by the Layer-7 load balancer.
2. Ask Rancher to generate an xip.io host name for your ingress rule. Rancher will take one of your exposed IPs, say `a.b.c.d`, and generate a host name `<ingressname>.<namespace>.a.b.c.d.xip.io`.
The benefit of using xip.io is that you obtain a working entrypoint URL immediately after you create the ingress rule. Setting up your own domain name, on the other hand, requires you to configure DNS servers and wait for DNS to propagate.
## Related Links
- [Create an External Load Balancer](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/)
#### Tutorials
- [Kubernetes installation with External Load Balancer (HTTPS/Layer 7)]({{<baseurl>}}/rancher/v2.x/en/installation/ha-server-install-external-lb)
- [Kubernetes installation with External Load Balancer (TCP/Layer 4)]({{<baseurl>}}/rancher/v2.x/en/installation/ha-server-install)
- [Docker Installation with External Load Balancer]({{<baseurl>}}/rancher/v2.x/en/installation/single-node-install-external-lb)
@@ -1,268 +0,0 @@
---
title: Pipelines
weight: 3047
---
Rancher's pipeline provides a simple CI/CD experience. Use it to automatically checkout code, run builds or scripts, publish Docker images or catalog applications, and deploy the updated software to users.
Setting up a pipeline can help developers deliver new software as quickly and efficiently as possible. Using Rancher, you can integrate with a GitHub repository to setup a continuous integration (CI) pipeline.
After configuring Rancher and GitHub, you can deploy containers running Jenkins to automate a pipeline execution:
- Build your application from code to image.
- Validate your builds.
- Deploy your build images to your cluster.
- Run unit tests.
- Run regression tests.
>**Notes:**
>
>- Pipelines improved in Rancher v2.1. Therefore, if you configured pipelines while using v2.0.x, you'll have to reconfigure them after upgrading to v2.1.
>- Still using v2.0.x? See the pipeline documentation for [previous versions]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/docs-for-v2.0.x).
>- Rancher's pipeline provides a simple CI/CD experience, but it does not offer the full power and flexibility of enterprise-grade Jenkins or other CI tools your team uses, and it is not a replacement for them.
This section covers the following topics:
- [Concepts](#concepts)
- [How Pipelines Work](#how-pipelines-work)
- [Roles-based Access Control for Pipelines](#roles-based-access-control-for-pipelines)
- [Setting up Pipelines](#setting-up-pipelines)
- [Configure version control providers](#1-configure-version-control-providers)
- [Configure repositories](#2-configure-repositories)
- [Configure the pipeline](#3-configure-the-pipeline)
- [Pipeline Configuration Reference](#pipeline-configuration-reference)
- [Running your Pipelines](#running-your-pipelines)
- [Triggering a Pipeline](#triggering-a-pipeline)
- [Modifying the Event Triggers for the Repository](#modifying-the-event-triggers-for-the-repository)
# Concepts
For an explanation of concepts and terminology used in this section, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/concepts)
# How Pipelines Work
After enabling the ability to use pipelines in a project, you can configure multiple pipelines in each project. Each pipeline is unique and can be configured independently.
A pipeline is configured from a group of files that are checked into source code repositories. Users can configure their pipelines either through the Rancher UI or by adding a `.rancher-pipeline.yml` file to the repository.
Before pipelines can be configured, you will need to configure authentication to your version control provider, e.g. GitHub, GitLab, Bitbucket. If you haven't configured a version control provider, you can always use [Rancher's example repositories]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/example-repos/) to view some common pipeline deployments.
When you configure a pipeline in one of your projects, a namespace specifically for the pipeline is automatically created. The following components are deployed to it:
- **Jenkins:**
The pipeline's build engine. Because project users do not directly interact with Jenkins, it's managed and locked.
>**Note:** There is no option to use existing Jenkins deployments as the pipeline engine.
- **Docker Registry:**
Out-of-the-box, the default target for your build-publish step is an internal Docker Registry. However, you can make configurations to push to a remote registry instead. The internal Docker Registry is only accessible from cluster nodes and cannot be directly accessed by users. Images are not persisted beyond the lifetime of the pipeline and should only be used in pipeline runs. If you need to access your images outside of pipeline runs, please push to an external registry.
- **Minio:**
Minio storage is used to store the logs for pipeline executions.
>**Note:** The managed Jenkins instance works statelessly, so you don't need to worry about its data persistence. The Docker Registry and Minio instances use ephemeral volumes by default, which is fine for most use cases. If you want to make sure pipeline logs can survive node failures, you can configure persistent volumes for them, as described in [data persistency for pipeline components]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/storage).
# Roles-based Access Control for Pipelines
If you can access a project, you can enable repositories to start building pipelines.
Only [administrators]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owners or members]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owners]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can configure version control providers and manage global pipeline execution settings.
Project members can only configure repositories and pipelines.
# Setting up Pipelines
To set up pipelines, you will need to do the following:
1. [Configure version control providers](#1-configure-version-control-providers)
2. [Configure repositories](#2-configure-repositories)
3. [Configure the pipeline](#3-configure-the-pipeline)
### 1. Configure Version Control Providers
Before you can start configuring a pipeline for your repository, you must configure and authorize a version control provider.
| Provider | Available as of |
| --- | --- |
| GitHub | v2.0.0 |
| GitLab | v2.1.0 |
| Bitbucket | v2.2.0 |
Select your provider's tab below and follow the directions.
{{% tabs %}}
{{% tab "GitHub" %}}
1. From the **Global** view, navigate to the project that you want to configure pipelines.
1. Select **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**.
1. Follow the directions displayed to **Setup a Github application**. Rancher redirects you to GitHub to set up an OAuth app.
1. From GitHub, copy the **Client ID** and **Client Secret**. Paste them into Rancher.
1. If you're using GitHub Enterprise, select **Use a private github enterprise installation**. Enter the host address of your GitHub installation.
1. Click **Authenticate**.
{{% /tab %}}
{{% tab "GitLab" %}}
1. From the **Global** view, navigate to the project that you want to configure pipelines.
1. Select **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**.
1. Follow the directions displayed to **Setup a GitLab application**. Rancher redirects you to GitLab.
1. From GitLab, copy the **Application ID** and **Secret**. Paste them into Rancher.
1. If you're using GitLab Enterprise, select **Use a private gitlab enterprise installation**. Enter the host address of your GitLab installation.
1. Click **Authenticate**.
>**Note:**
> 1. Pipelines use the GitLab [v4 API](https://docs.gitlab.com/ee/api/v3_to_v4.html); the supported GitLab version is 9.0+.
> 2. If you use GitLab 10.7+ and your Rancher setup is in a local network, enable the **Allow requests to the local network from hooks and services** option in GitLab admin settings.
{{% /tab %}}
{{% tab "Bitbucket Cloud" %}}
1. From the **Global** view, navigate to the project that you want to configure pipelines.
1. Select **Tools > Pipelines** in the navigation bar.
1. Choose the **Use public Bitbucket Cloud** option.
1. Follow the directions displayed to **Setup a Bitbucket Cloud application**. Rancher redirects you to Bitbucket to set up an OAuth consumer.
1. From Bitbucket, copy the consumer **Key** and **Secret**. Paste them into Rancher.
1. Click **Authenticate**.
{{% /tab %}}
{{% tab "Bitbucket Server" %}}
1. From the **Global** view, navigate to the project that you want to configure pipelines.
1. Select **Tools > Pipelines** in the navigation bar.
1. Choose the **Use private Bitbucket Server setup** option.
1. Follow the directions displayed to **Setup a Bitbucket Server application**.
1. Enter the host address of your Bitbucket server installation.
1. Click **Authenticate**.
>**Note:**
> Bitbucket Server performs SSL verification when sending webhooks to Rancher. Please ensure that the Rancher server's certificate is trusted by the Bitbucket server. There are two options:
>
> 1. Set up the Rancher server with a certificate from a trusted CA.
> 1. If you're using self-signed certificates, import Rancher server's certificate to the Bitbucket server. For instructions, see the Bitbucket server documentation for [configuring self-signed certificates](https://confluence.atlassian.com/bitbucketserver/if-you-use-self-signed-certificates-938028692.html).
>
{{% /tab %}}
{{% /tabs %}}
**Result:** After the version control provider is authenticated, you will be automatically redirected to start configuring which repositories you want to start using with a pipeline.
### 2. Configure Repositories
After the version control provider is authorized, you are automatically redirected to start configuring which repositories you want to start using pipelines with. Even if someone else has set up the version control provider, you will see their repositories and can build a pipeline.
1. From the **Global** view, navigate to the project that you want to configure pipelines.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click on **Configure Repositories**.
1. A list of repositories is displayed. If you are configuring repositories for the first time, click **Authorize & Fetch Your Own Repositories** to fetch your repository list.
1. For each repository that you want to set up a pipeline for, click **Enable**.
1. When you're done enabling all your repositories, click on **Done**.
**Results:** You have a list of repositories that you can start configuring pipelines for.
### 3. Configure the Pipeline
Now that repositories are added to your project, you can start configuring the pipeline by adding automated stages and steps. For your convenience, there are multiple built-in step types for dedicated tasks.
1. From the **Global** view, navigate to the project that you want to configure pipelines.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Find the repository that you want to set up a pipeline for.
1. Configure the pipeline through the UI or using a YAML file in the repository, i.e. `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`. Pipeline configuration is split into stages and steps. Stages must fully complete before moving on to the next stage, but steps in a stage run concurrently. For each stage, you can add different step types. Note: As you build out each step, there are different advanced options based on the step type. Advanced options include trigger rules, environment variables, and secrets. For more information on configuring the pipeline through the UI or the YAML file, refer to the [pipeline configuration reference.]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/config)
* If you are going to use the UI, select the vertical **&#8942; > Edit Config** to configure the pipeline using the UI. After the pipeline is configured, you must view the YAML file and push it to the repository.
* If you are going to use the YAML file, select the vertical **&#8942; > View/Edit YAML** to configure the pipeline. If you choose to use a YAML file, you need to push it to the repository after any changes in order for it to be updated in the repository. When editing the pipeline configuration, it takes a few moments for Rancher to check for an existing pipeline configuration.
1. Select which `branch` to use from the list of branches.
1. Optional: Set up notifications.
1. Set up the trigger rules for the pipeline.
1. Enter a **Timeout** for the pipeline.
1. When all the stages and steps are configured, click **Done**.
**Results:** Your pipeline is now configured and ready to be run.
# Pipeline Configuration Reference
Refer to [this page]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/config) for details on how to configure a pipeline to:
- Run a script
- Build and publish images
- Publish catalog templates
- Deploy YAML
- Deploy a catalog app
The configuration reference also covers how to configure:
- Notifications
- Timeouts
- The rules that trigger a pipeline
- Environment variables
- Secrets
# Running your Pipelines
Run your pipeline for the first time. From the project view in Rancher, go to **Resources > Pipelines.** (In versions prior to v2.3.0, go to the **Pipelines** tab.) Find your pipeline and select the vertical **&#8942; > Run**.
During this initial run, your pipeline is tested, and the following pipeline components are deployed to your project as workloads in a new namespace dedicated to the pipeline:
- `docker-registry`
- `jenkins`
- `minio`
This process takes several minutes. When it completes, you can view each pipeline component from the project **Workloads** tab.
# Triggering a Pipeline
When a repository is enabled, a webhook is automatically set in the version control provider. By default, the pipeline is triggered by a **push** event to a repository, but you can modify the event(s) that trigger running the pipeline.
Available Events:
* **Push**: Whenever a commit is pushed to the branch in the repository, the pipeline is triggered.
* **Pull Request**: Whenever a pull request is made to the repository, the pipeline is triggered.
* **Tag**: When a tag is created in the repository, the pipeline is triggered.
> **Note:** This option doesn't exist for Rancher's [example repositories]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/example-repos/).
### Modifying the Event Triggers for the Repository
1. From the **Global** view, navigate to the project that you want to modify the event trigger for the pipeline.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Find the repository for which you want to modify the event triggers. Select the vertical **&#8942; > Setting**.
1. Select which event triggers (**Push**, **Pull Request** or **Tag**) you want for the repository.
1. Click **Save**.
@@ -1,36 +0,0 @@
---
title: Concepts
weight: 1
---
The purpose of this page is to explain common concepts and terminology related to pipelines.
- **Pipeline:**
A _pipeline_ is a software delivery process that is broken into different stages and steps. Setting up a pipeline can help developers deliver new software as quickly and efficiently as possible. Within Rancher, you can configure pipelines for each of your Rancher projects. A pipeline is based on a specific repository. It defines the process to build, test, and deploy your code. Rancher uses the [pipeline as code](https://jenkins.io/doc/book/pipeline-as-code/) model. Pipeline configuration is represented as a pipeline file in the source code repository, using the file name `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`.
- **Stages:**
A pipeline stage consists of multiple steps. Stages are executed in the order defined in the pipeline file. The steps in a stage are executed concurrently. A stage starts when all steps in the former stage finish without failure.
- **Steps:**
A pipeline step is executed inside a specified stage. A step fails if it exits with a code other than `0`. If a step exits with this failure code, the entire pipeline fails and terminates.
- **Workspace:**
The workspace is the working directory shared by all pipeline steps. At the beginning of a pipeline, source code is checked out to the workspace. The command for every step runs in the workspace. During a pipeline execution, the artifacts from a previous step are available in later steps. The working directory is an ephemeral volume and is cleaned out along with the executor pod when a pipeline execution finishes.
Typically, pipeline stages include:
- **Build:**
Each time code is checked into your repository, the pipeline automatically clones the repo and builds a new iteration of your software. Throughout this process, the software is typically reviewed by automated tests.
- **Publish:**
After the build is completed, either a Docker image is built and published to a Docker registry or a catalog template is published.
- **Deploy:**
After the artifacts are published, you would release your application so users could start using the updated product.
@@ -1,645 +0,0 @@
---
title: Pipeline Configuration Reference
weight: 1
---
In this section, you'll learn how to configure pipelines.
- [Step Types](#step-types)
- [Step Type: Run Script](#step-type-run-script)
- [Step Type: Build and Publish Images](#step-type-build-and-publish-images)
- [Step Type: Publish Catalog Template](#step-type-publish-catalog-template)
- [Step Type: Deploy YAML](#step-type-deploy-yaml)
- [Step Type: Deploy Catalog App](#step-type-deploy-catalog-app)
- [Notifications](#notifications)
- [Timeouts](#timeouts)
- [Triggers and Trigger Rules](#triggers-and-trigger-rules)
- [Environment Variables](#environment-variables)
- [Secrets](#secrets)
- [Pipeline Variable Substitution Reference](#pipeline-variable-substitution-reference)
- [Global Pipeline Execution Settings](#global-pipeline-execution-settings)
- [Executor Quota](#executor-quota)
- [Resource Quota for Executors](#resource-quota-for-executors)
- [Custom CA](#custom-ca)
- [Persistent Data for Pipeline Components](#persistent-data-for-pipeline-components)
- [Example rancher-pipeline.yml](#example-rancher-pipeline-yml)
# Step Types
Within each stage, you can add as many steps as you'd like. When there are multiple steps in one stage, they run concurrently.
Step types include:
- [Run Script](#step-type-run-script)
- [Build and Publish Images](#step-type-build-and-publish-images)
- [Publish Catalog Template](#step-type-publish-catalog-template)
- [Deploy YAML](#step-type-deploy-yaml)
- [Deploy Catalog App](#step-type-deploy-catalog-app)
<!--
### Clone
The first stage is preserved to be a cloning step that checks out source code from your repo. Rancher handles the cloning of the git repository. This action is equivalent to `git clone <repository_link> <workspace_dir>`.
-->
### Configuring Steps By UI
If you haven't added any stages, click **Configure pipeline for this branch** to configure the pipeline through the UI.
1. Add stages to your pipeline execution by clicking **Add Stage**.
1. Enter a **Name** for each stage of your pipeline.
1. For each stage, you can configure [trigger rules](#triggers-and-trigger-rules) by clicking on **Show Advanced Options**. Note: this can always be updated at a later time.
1. After you've created a stage, start [adding steps](#step-types) by clicking **Add a Step**. You can add multiple steps to each stage.
### Configuring Steps by YAML
For each stage, you can add multiple steps. Read more about each [step type](#step-types) and the advanced options to get all the details on how to configure the YAML. This is only a small example of how to have multiple stages with a singular step in each stage.
```yaml
# example
stages:
- name: Build something
# Conditions for stages
when:
branch: master
event: [ push, pull_request ]
# Multiple steps run concurrently
steps:
- runScriptConfig:
image: busybox
shellScript: date -R
- name: Publish my image
steps:
- publishImageConfig:
dockerfilePath: ./Dockerfile
buildContext: .
tag: rancher/rancher:v2.0.0
# Optionally push to remote registry
pushRemote: true
registry: reg.example.com
```
# Step Type: Run Script
The **Run Script** step executes arbitrary commands in the workspace inside a specified container. You can use it to build, test and do more, given whatever utilities the base image provides. For your convenience, you can use variables to refer to metadata of a pipeline execution. Please refer to the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) for the list of available variables.
### Configuring Script by UI
1. From the **Step Type** drop-down, choose **Run Script** and fill in the form.
1. Click **Add**.
### Configuring Script by YAML
```yaml
# example
stages:
- name: Build something
steps:
- runScriptConfig:
image: golang
shellScript: go build
```
# Step Type: Build and Publish Images
The **Build and Publish Image** step builds and publishes a Docker image. This process requires a Dockerfile in your source code's repository to complete successfully.
The option to publish an image to an insecure registry is not exposed in the UI, but you can specify an environment variable in the YAML that allows you to publish an image insecurely.
### Configuring Building and Publishing Images by UI
1. From the **Step Type** drop-down, choose **Build and Publish**.
1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**.
Field | Description |
---------|----------|
Dockerfile Path | The relative path to the Dockerfile in the source code repo. By default, this path is `./Dockerfile`, which assumes the Dockerfile is in the root directory. You can set it to other paths in different use cases (`./path/to/myDockerfile` for example). |
Image Name | The image name in `name:tag` format. The registry address is not required. For example, to build `example.com/repo/my-image:dev`, enter `repo/my-image:dev`. |
Push image to remote repository | An option to set the registry that publishes the image that's built. To use this option, enable it and choose a registry from the drop-down. If this option is disabled, the image is pushed to the internal registry. |
Build Context <br/><br/> (**Show advanced options**)| By default, the root directory of the source code (`.`). For more details, see the Docker [build command documentation](https://docs.docker.com/engine/reference/commandline/build/).
### Configuring Building and Publishing Images by YAML
You can use specific arguments for the Docker daemon and the build. They are not exposed in the UI, but they are available in the pipeline YAML format, as indicated in the example below. Available environment variables include:
Variable Name | Description
------------------------|------------------------------------------------------------
PLUGIN_DRY_RUN | Disable docker push
PLUGIN_DEBUG | Docker daemon executes in debug mode
PLUGIN_MIRROR | Docker daemon registry mirror
PLUGIN_INSECURE | Docker daemon allows insecure registries
PLUGIN_BUILD_ARGS | Docker build args, a comma separated list
<br>
```yaml
# This example shows an environment variable being used
# in the Publish Image step. This variable allows you to
# publish an image to an insecure registry:
stages:
- name: Publish Image
steps:
- publishImageConfig:
dockerfilePath: ./Dockerfile
buildContext: .
tag: repo/app:v1
pushRemote: true
registry: example.com
env:
PLUGIN_INSECURE: "true"
```
# Step Type: Publish Catalog Template
The **Publish Catalog Template** step publishes a version of a catalog app template (i.e. Helm chart) to a [git-hosted chart repository]({{<baseurl>}}/rancher/v2.x/en/catalog/custom/). It generates a git commit and pushes it to your chart repository. This process requires a chart folder in your source code's repository and a pre-configured secret in the dedicated pipeline namespace to complete successfully. Any variables in the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) are supported for any file in the chart folder.
### Configuring Publishing a Catalog Template by UI
1. From the **Step Type** drop-down, choose **Publish Catalog Template**.
1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**.
Field | Description |
---------|----------|
Chart Folder | The relative path to the chart folder in the source code repo, where the `Chart.yaml` file is located. |
Catalog Template Name | The name of the template. For example, wordpress. |
Catalog Template Version | The version of the template you want to publish; it should be consistent with the version defined in the `Chart.yaml` file. |
Protocol | You can choose to publish via HTTP(S) or SSH protocol. |
Secret | The secret that stores your Git credentials. You need to create a secret in the dedicated pipeline namespace in the project before adding this step. If you use the HTTP(S) protocol, store the Git username and password in the `USERNAME` and `PASSWORD` keys of the secret. If you use the SSH protocol, store the Git deploy key in the `DEPLOY_KEY` key of the secret. After the secret is created, select it in this option. |
Git URL | The Git URL of the chart repository that the template will be published to. |
Git Branch | The Git branch of the chart repository that the template will be published to. |
Author Name | The author name used in the commit message. |
Author Email | The author email used in the commit message. |
### Configuring Publishing a Catalog Template by YAML
You can add **Publish Catalog Template** steps directly in the `.rancher-pipeline.yml` file.
Under the `steps` section, add a step with `publishCatalogConfig`. You will provide the following information:
* Path: The relative path to the chart folder in the source code repo, where the `Chart.yaml` file is located.
* CatalogTemplate: The name of the template.
* Version: The version of the template you want to publish; it should be consistent with the version defined in the `Chart.yaml` file.
* GitUrl: The git URL of the chart repository that the template will be published to.
* GitBranch: The git branch of the chart repository that the template will be published to.
* GitAuthor: The author name used in the commit message.
* GitEmail: The author email used in the commit message.
* Credentials: You should provide Git credentials by referencing secrets in the dedicated pipeline namespace. If you publish via the SSH protocol, inject your deploy key into the `DEPLOY_KEY` environment variable. If you publish via the HTTP(S) protocol, inject your username and password into the `USERNAME` and `PASSWORD` environment variables.
```yaml
# example
stages:
- name: Publish Wordpress Template
steps:
- publishCatalogConfig:
path: ./charts/wordpress/latest
catalogTemplate: wordpress
version: ${CICD_GIT_TAG}
gitUrl: git@github.com:myrepo/charts.git
gitBranch: master
gitAuthor: example-user
gitEmail: user@example.com
envFrom:
- sourceName: publish-keys
sourceKey: DEPLOY_KEY
```
# Step Type: Deploy YAML
This step deploys arbitrary Kubernetes resources to the project. This deployment requires a Kubernetes manifest file to be present in the source code repository. Pipeline variable substitution is supported in the manifest file. You can view an example file at [GitHub](https://github.com/rancher/pipeline-example-go/blob/master/deployment.yaml). Please refer to the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) for the list of available variables.
### Configure Deploying YAML by UI
1. From the **Step Type** drop-down, choose **Deploy YAML** and fill in the form.
1. Enter the **YAML Path**, which is the path to the manifest file in the source code.
1. Click **Add**.
### Configure Deploying YAML by YAML
```yaml
# example
stages:
- name: Deploy
steps:
- applyYamlConfig:
path: ./deployment.yaml
```
# Step Type: Deploy Catalog App
The **Deploy Catalog App** step deploys a catalog app in the project. It will install a new app if it is not present, or upgrade an existing one.
### Configure Deploying Catalog App by UI
1. From the **Step Type** drop-down, choose **Deploy Catalog App**.
1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**.
Field | Description |
---------|----------|
Catalog | The catalog from which the app template will be used. |
Template Name | The name of the app template. For example, wordpress. |
Template Version | The version of the app template you want to deploy. |
Namespace | The target namespace where you want to deploy the app. |
App Name | The name of the app you want to deploy. |
Answers | Key-value pairs of answers used to deploy the app. |
### Configure Deploying Catalog App by YAML
You can add **Deploy Catalog App** steps directly in the `.rancher-pipeline.yml` file.
Under the `steps` section, add a step with `applyAppConfig`. You will provide the following information:
* CatalogTemplate: The ID of the template. This can be found by clicking `Launch app` and selecting `View details` for the app. It is the last part of the URL.
* Version: The version of the template you want to deploy.
* Answers: Key-value pairs of answers used to deploy the app.
* Name: The name of the app you want to deploy.
* TargetNamespace: The target namespace where you want to deploy the app.
```yaml
# example
stages:
- name: Deploy App
steps:
- applyAppConfig:
catalogTemplate: cattle-global-data:library-mysql
version: 0.3.8
answers:
persistence.enabled: "false"
name: testmysql
targetNamespace: test
```
# Timeouts
By default, each pipeline execution has a timeout of 60 minutes. If the pipeline execution cannot complete within its timeout period, the pipeline is aborted.
### Configuring Timeouts by UI
Enter a new value in the **Timeout** field.
### Configuring Timeouts by YAML
In the `timeout` section, enter the timeout value in minutes.
```yaml
# example
stages:
- name: Build something
steps:
- runScriptConfig:
image: busybox
shellScript: ls
# timeout in minutes
timeout: 30
```
# Notifications
You can enable notifications to any [notifiers]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) based on the build status of a pipeline. Before enabling notifications, Rancher recommends [setting up notifiers]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/notifiers/#adding-notifiers) so it will be easy to add recipients immediately.
### Configuring Notifications by UI
1. Within the **Notification** section, turn on notifications by clicking **Enable**.
1. Select the conditions for the notification. You can select to get a notification for the following statuses: `Failed`, `Success`, `Changed`. For example, if you want to receive notifications when an execution fails, select **Failed**.
1. If you don't have any existing [notifiers]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/notifiers), Rancher will warn that no notifiers are set up and provide a link to the notifiers page. Follow the [instructions]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/notifiers/#adding-notifiers) to add a notifier. If you already have notifiers, you can add them to the notification by clicking the **Add Recipient** button.
> **Note:** Notifiers are configured at a cluster level and require a different level of permissions.
1. For each recipient, select the notifier type from the drop-down. Based on the type of notifier, you can use the default recipient or override the recipient with a different one. For example, if you have a notifier for _Slack_, you can update which channel to send the notification to. You can add additional notifiers by clicking **Add Recipient**.
### Configuring Notifications by YAML
In the `notification` section, you will provide the following information:
* **Recipients:** This will be the list of notifiers/recipients that will receive the notification.
* **Notifier:** The ID of the notifier. This can be found by finding the notifier and selecting **View in API** to get the ID.
* **Recipient:** Depending on the notifier type, you can use the default recipient or override it with a different one. For example, when configuring a Slack notifier, you select a channel as your default recipient, but you can specify a different channel for a particular notification.
* **Condition:** Select the conditions under which the notification is sent.
* **Message (Optional):** If you want to change the default notification message, you can edit it in the YAML. Note: This option is not available in the UI.
```yaml
# Example
stages:
- name: Build something
steps:
- runScriptConfig:
image: busybox
shellScript: ls
notification:
recipients:
- # Recipient
recipient: "#mychannel"
# ID of Notifier
notifier: "c-wdcsr:n-c9pg7"
- recipient: "test@example.com"
notifier: "c-wdcsr:n-lkrhd"
# Select which statuses you want the notification to be sent
condition: ["Failed", "Success", "Changed"]
# Ability to override the default message (Optional)
message: "my-message"
```
# Triggers and Trigger Rules
After you configure a pipeline, you can trigger it using different methods:
- **Manually:**
After you configure a pipeline, you can trigger a build using the latest CI definition from the Rancher UI. When a pipeline execution is triggered, Rancher dynamically provisions a Kubernetes pod to run your CI tasks and then removes it upon completion.
- **Automatically:**
When you enable a repository for a pipeline, webhooks are automatically added to the version control system. When project users interact with the repo by pushing code, opening pull requests, or creating a tag, the version control system sends a webhook to Rancher Server, triggering a pipeline execution.
To use this automation, webhook management permission is required for the repository. Therefore, when users authenticate and fetch their repositories, only those on which they have webhook management permission will be shown.
Trigger rules can be created to have fine-grained control of pipeline executions in your pipeline configuration. Trigger rules come in two types:
- **Run this when:** This type of rule starts the pipeline, stage, or step when a trigger explicitly occurs.
- **Do Not Run this when:** This type of rule skips the pipeline, stage, or step when a trigger explicitly occurs.
If all conditions evaluate to `true`, the pipeline, stage, or step is executed. Otherwise it is skipped. When a pipeline is skipped, nothing in it is executed. When a stage or step is skipped, it is considered successful, and subsequent stages and steps continue to run.
Wildcard character (`*`) expansion is supported in `branch` conditions.
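For example, a focused sketch of pipeline-level branch conditions that use a wildcard (the branch names are illustrative; the same keys appear in the full YAML example later in this section):
```yaml
# Run the pipeline for master and any feature/* branch, but never for dev
branch:
  include: [ master, feature/* ]
  exclude: [ dev ]
```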
This section covers the following topics:
- [Configuring pipeline triggers](#configuring-pipeline-triggers)
- [Configuring stage triggers](#configuring-stage-triggers)
- [Configuring step triggers](#configuring-step-triggers)
- [Configuring triggers by YAML](#configuring-triggers-by-yaml)
### Configuring Pipeline Triggers
1. From the **Global** view, navigate to the project in which you want to configure a pipeline trigger rule.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. From the repository for which you want to manage trigger rules, select the vertical **&#8942; > Edit Config**.
1. Click on **Show Advanced Options**.
1. In the **Trigger Rules** section, configure rules to run or skip the pipeline.
1. Click **Add Rule**. In the **Value** field, enter the name of the branch that triggers the pipeline.
1. **Optional:** Add more branches that trigger a build.
1. Click **Done.**
### Configuring Stage Triggers
1. From the **Global** view, navigate to the project in which you want to configure a stage trigger rule.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. From the repository for which you want to manage trigger rules, select the vertical **&#8942; > Edit Config**.
1. Find the **stage** for which you want to manage trigger rules and click its **Edit** icon.
1. Click **Show advanced options**.
1. In the **Trigger Rules** section, configure rules to run or skip the stage.
1. Click **Add Rule**.
1. Choose the **Type** that triggers the stage and enter a value.
| Type | Value |
| ------ | -------------------------------------------------------------------- |
| Branch | The name of the branch that triggers the stage. |
| Event | The type of event that triggers the stage. Values are: `Push`, `Pull Request`, `Tag` |
1. Click **Save**.
### Configuring Step Triggers
1. From the **Global** view, navigate to the project in which you want to configure a step trigger rule.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. From the repository for which you want to manage trigger rules, select the vertical **&#8942; > Edit Config**.
1. Find the **step** for which you want to manage trigger rules and click its **Edit** icon.
1. Click **Show advanced options**.
1. In the **Trigger Rules** section, configure rules to run or skip the step.
1. Click **Add Rule**.
1. Choose the **Type** that triggers the step and enter a value.
| Type | Value |
| ------ | -------------------------------------------------------------------- |
| Branch | The name of the branch that triggers the step. |
| Event | The type of event that triggers the step. Values are: `Push`, `Pull Request`, `Tag` |
1. Click **Save**.
### Configuring Triggers by YAML
```yaml
# example
stages:
- name: Build something
# Conditions for stages
when:
branch: master
event: [ push, pull_request ]
# Multiple steps run concurrently
steps:
- runScriptConfig:
image: busybox
shellScript: date -R
# Conditions for steps
when:
branch: [ master, dev ]
event: push
# branch conditions for the pipeline
branch:
include: [ master, feature/*]
exclude: [ dev ]
```
# Environment Variables
When configuring a pipeline, certain [step types](#step-types) allow you to use environment variables to configure the step's script.
### Configuring Environment Variables by UI
1. From the **Global** view, navigate to the project in which you want to configure pipelines.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. From the pipeline for which you want to edit build triggers, select **&#8942; > Edit Config**.
1. Within one of the stages, find the **step** that you want to add an environment variable to, and click the **Edit** icon.
1. Click **Show advanced options**.
1. Click **Add Variable**, and then enter a key and value in the fields that appear. Add more variables if needed.
1. Add your environment variable(s) into either the script or file.
1. Click **Save**.
### Configuring Environment Variables by YAML
```yaml
# example
stages:
- name: Build something
steps:
- runScriptConfig:
image: busybox
shellScript: echo ${FIRST_KEY} && echo ${SECOND_KEY}
env:
FIRST_KEY: VALUE
SECOND_KEY: VALUE2
```
# Secrets
If you need to use security-sensitive information in your pipeline scripts (like a password), you can pass them in using Kubernetes [secrets]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/secrets/).
### Prerequisite
Create a secret in the same project as your pipeline, or explicitly in the namespace where pipeline build pods run.
<br>
>**Note:** Secret injection is disabled on [pull request events](#triggers-and-trigger-rules).
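For reference, the `my-secret`/`secret-key` pair consumed in the YAML example below could be created with an ordinary Kubernetes Secret manifest along these lines (a minimal sketch; the secret name, key, value, and namespace are illustrative):
```yaml
# illustrative only: a Secret in a namespace that belongs to the pipeline's project
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: my-pipeline-namespace
type: Opaque
stringData:
  secret-key: my-secret-value
```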
### Configuring Secrets by UI
1. From the **Global** view, navigate to the project in which you want to configure pipelines.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. From the pipeline for which you want to edit build triggers, select **&#8942; > Edit Config**.
1. Within one of the stages, find the **step** in which you want to use a secret, and click the **Edit** icon.
1. Click **Show advanced options**.
1. Click **Add From Secret**. Select the secret file that you want to use. Then choose a key. Optionally, you can enter an alias for the key.
1. Click **Save**.
### Configuring Secrets by YAML
```yaml
# example
stages:
- name: Build something
steps:
- runScriptConfig:
image: busybox
shellScript: echo ${ALIAS_ENV}
# environment variables from project secrets
envFrom:
- sourceName: my-secret
sourceKey: secret-key
targetKey: ALIAS_ENV
```
# Pipeline Variable Substitution Reference
For your convenience, the following variables are available for your pipeline configuration scripts. During pipeline executions, these variables are replaced by metadata. You can reference them in the form of `${VAR_NAME}`.
Variable Name | Description
------------------------|------------------------------------------------------------
`CICD_GIT_REPO_NAME` | Repository name (GitHub organization omitted).
`CICD_GIT_URL` | URL of the Git repository.
`CICD_GIT_COMMIT` | Git commit ID being executed.
`CICD_GIT_BRANCH` | Git branch of this event.
`CICD_GIT_REF` | Git reference specification of this event.
`CICD_GIT_TAG` | Git tag name, set on tag event.
`CICD_EVENT` | Event that triggered the build (`push`, `pull_request` or `tag`).
`CICD_PIPELINE_ID` | Rancher ID for the pipeline.
`CICD_EXECUTION_SEQUENCE` | Build number of the pipeline.
`CICD_EXECUTION_ID` | Combination of `{CICD_PIPELINE_ID}-{CICD_EXECUTION_SEQUENCE}`.
`CICD_REGISTRY` | Address for the Docker registry for the previous publish image step, available in the Kubernetes manifest file of a `Deploy YAML` step.
`CICD_IMAGE` | Name of the image built from the previous publish image step, available in the Kubernetes manifest file of a `Deploy YAML` step. It does not contain the image tag.<br/><br/> [Example](https://github.com/rancher/pipeline-example-go/blob/master/deployment.yaml)
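For instance, a Kubernetes manifest used by a `Deploy YAML` step can reference these variables directly. A minimal sketch (the Deployment name and labels are illustrative):
```yaml
# illustrative manifest for a Deploy YAML step; ${CICD_IMAGE} and
# ${CICD_GIT_COMMIT} are substituted with build metadata at execution time
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: ${CICD_IMAGE}:${CICD_GIT_COMMIT}
```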
# Global Pipeline Execution Settings
After configuring a version control provider, there are several options for how pipelines are executed that can be configured globally. These settings can be edited by selecting **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**.
- [Executor Quota](#executor-quota)
- [Resource Quota for Executors](#resource-quota-for-executors)
- [Custom CA](#custom-ca)
### Executor Quota
Select the maximum number of pipeline executors. The _executor quota_ decides how many builds can run simultaneously in the project. If the number of triggered builds exceeds the quota, subsequent builds will queue until a vacancy opens. By default, the quota is `2`. A value of `0` or less removes the quota limit.
### Resource Quota for Executors
Configure compute resources for Jenkins agent containers. When a pipeline execution is triggered, a build pod is dynamically provisioned to run your CI tasks. Under the hood, a build pod consists of one Jenkins agent container and one container for each pipeline step. You can [manage compute resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for every container in the pod.
Edit the **Memory Reservation**, **Memory Limit**, **CPU Reservation** or **CPU Limit**, then click **Update Limit and Reservation**.
You can also configure compute resources for pipeline-step containers in the `.rancher-pipeline.yml` file.
In a [step type]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/#step-types), you will provide the following information:
* **CPU Reservation (`CpuRequest`)**: CPU request for the container of a pipeline step.
* **CPU Limit (`CpuLimit`)**: CPU limit for the container of a pipeline step.
* **Memory Reservation (`MemoryRequest`)**: Memory request for the container of a pipeline step.
* **Memory Limit (`MemoryLimit`)**: Memory limit for the container of a pipeline step.
```yaml
# example
stages:
- name: Build something
steps:
- runScriptConfig:
image: busybox
shellScript: ls
cpuRequest: 100m
cpuLimit: 1
      memoryRequest: 100Mi
memoryLimit: 1Gi
- publishImageConfig:
dockerfilePath: ./Dockerfile
buildContext: .
tag: repo/app:v1
cpuRequest: 100m
cpuLimit: 1
      memoryRequest: 100Mi
memoryLimit: 1Gi
```
>**Note:** Rancher sets default compute resources for pipeline steps except for `Build and Publish Images` and `Run Script` steps. You can override the default value by specifying compute resources in the same way.
### Custom CA
If you want to use a version control provider with a certificate from a custom/internal CA root, the CA root certificates need to be added as part of the version control provider configuration in order for the pipeline build pods to succeed.
1. Click **Edit cacerts**.
1. Paste in the CA root certificates and click **Save cacerts**.
**Result:** Pipelines can be used and new pods will be able to work with the self-signed certificate.
# Persistent Data for Pipeline Components
The internal Docker registry and the Minio workloads use ephemeral volumes by default. This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes.
For details on setting up persistent storage for pipelines, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/storage)
# Example rancher-pipeline.yml
An example pipeline configuration file is on [this page.]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/example)
@@ -1,123 +0,0 @@
---
title: v2.0.x Pipeline Documentation
weight: 9000
---
>**Note:** This section describes the pipeline feature as implemented in Rancher v2.0.x. If you are using Rancher v2.1 or later, where pipelines have been significantly improved, please refer to the new documentation for [v2.1 or later]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/).
Pipelines help you automate the software delivery process. You can integrate Rancher with GitHub to create a pipeline.
You can set up your pipeline to run a series of stages and steps to test your code and deploy it.
<dl>
<dt>Pipelines</dt>
<dd>Contain a series of stages and steps. Out-of-the-box, the pipelines feature supports fan-out and fan-in capabilities.</dd>
<dt>Stages</dt>
<dd>Executed sequentially. The next stage does not execute until all of the steps within the current stage complete.</dd>
<dt>Steps</dt>
<dd>Are executed in parallel within a stage. </dd>
</dl>
## Enabling CI Pipelines
1. Select a cluster from the drop-down.
2. Under the **Tools** menu, select **Pipelines**.
3. Follow the on-page instructions for setting up GitHub authentication.
## Creating CI Pipelines
1. Go to the project you want this pipeline to run in.
2. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
3. Click the **Add Pipeline** button.
4. Enter your repository name (autocomplete should help zero in on it quickly).
5. Select branch options.
    - Only the branch {BRANCH NAME}: Only events triggered by changes to this branch will be built.
    - Everything but {BRANCH NAME}: Build any branch that triggered an event EXCEPT events from this branch.
    - All branches: Always build, regardless of the branch that triggered the event.
    >**Note:** If you want one path for master, but another for PRs or development/test/feature branches, create two separate pipelines.
6. Select the build trigger events. By default, builds only happen when you manually click **Build Now** in the Rancher UI.
    - Automatically build this pipeline whenever there is a git commit. (This respects the branch selection above.)
    - Automatically build this pipeline whenever there is a new PR.
    - Automatically build the pipeline. (Allows you to configure scheduled builds, similar to cron.)
7. Click the **Add** button.
    By default, Rancher provides a three-stage pipeline for you. It consists of a build stage where you would compile, unit test, and scan code. The publish stage has a single step to publish a Docker image.
8. Add a name to the pipeline in order to complete adding a pipeline.
9. Click the **Run a script** box under the **Build** stage.
    Here you can set the image, or select from pre-packaged environments.
10. Configure a shell script to run inside the container when building.
11. Click **Save** to persist the changes.
12. Click the **Publish an image** box under the **Publish** stage.
13. Set the location of the Dockerfile. By default, it looks in the root of the workspace. Then set the build context for building the image, relative to the root of the workspace.
14. Set the image information.
    The registry is the remote registry URL. It defaults to Docker Hub.
    Repository is the `<org>/<repo>` in the repository.
15. Select the tag. You can hard-code a tag like `latest` or select from a list of available variables.
16. If this is the first time using this registry, you can add the username/password for pushing the image. You must click **Save** for the registry credentials AND also **Save** for the modal.
## Creating a New Stage
1. To add a new stage, click the **Add a new stage** link in either the create or edit mode of the pipeline view.
2. Provide a name for the stage.
3. Click **Save**.
## Creating a New Step
1. Go to the create/edit mode of the pipeline.
2. Click the **Add Step** button in the stage that you would like to add a step to.
3. Fill out the form as detailed above.
## Environment Variables
For your convenience the following environment variables are available in your build steps:
Variable Name | Description
------------------------|------------------------------------------------------------
CICD_GIT_REPO_NAME | Repository Name (stripped of GitHub organization)
CICD_PIPELINE_NAME | Name of the pipeline
CICD_GIT_BRANCH | Git branch of this event
CICD_TRIGGER_TYPE | Event that triggered the build
CICD_PIPELINE_ID | Rancher ID for the pipeline
CICD_GIT_URL | URL of the Git repository
CICD_EXECUTION_SEQUENCE | Build number of the pipeline
CICD_EXECUTION_ID | Combination of {CICD_PIPELINE_ID}-{CICD_EXECUTION_SEQUENCE}
CICD_GIT_COMMIT | Git commit ID being executed.
@@ -1,74 +0,0 @@
---
title: Example Repositories
weight: 500
---
Rancher ships with several example repositories that you can use to familiarize yourself with pipelines. We recommend configuring and testing the example repository that most resembles your environment before using pipelines with your own repositories in a production environment. Use this example repository as a sandbox for repo configuration, build demonstration, etc. Rancher includes example repositories for:
- Go
- Maven
- PHP
> **Note:** The example repositories are only available if you have not [configured a version control provider]({{<baseurl>}}/rancher/v2.x/en/project-admin/pipelines).
To start using these example repositories,
1. [Enable the example repositories](#1-enable-the-example-repositories)
2. [View the example pipeline](#2-view-the-example-pipeline)
3. [Run the example pipeline](#3-run-the-example-pipeline)
### 1. Enable the Example Repositories
By default, the example pipeline repositories are disabled. Enable one (or more) to test out the pipeline feature and see how it works.
1. From the **Global** view, navigate to the project that you want to test out pipelines.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Configure Repositories**.
**Step Result:** A list of example repositories displays.
>**Note:** Example repositories only display if you haven't fetched your own repos.
1. Click **Enable** for one of the example repos (e.g., `https://github.com/rancher/pipeline-example-go.git`). Then click **Done**.
**Results:**
- The example repository is enabled to work with a pipeline and is available in the **Pipeline** tab.
- The following workloads are deployed to a new namespace:
- `docker-registry`
- `jenkins`
- `minio`
### 2. View the Example Pipeline
After enabling an example repository, review the pipeline to see how it is set up.
1. From the **Global** view, navigate to the project that you want to test out pipelines.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Find the example repository, select the vertical **&#8942;**. There are two ways to view the pipeline:
* **Rancher UI**: Click on **Edit Config** to view the stages and steps of the pipeline.
* **YAML**: Click on **View/Edit YAML** to view the `.rancher-pipeline.yml` file.
### 3. Run the Example Pipeline
After enabling an example repository, run the pipeline to see how it works.
1. From the **Global** view, navigate to the project that you want to test out pipelines.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Find the example repository, select the vertical **&#8942; > Run**.
>**Note:** When you run a pipeline the first time, it takes a few minutes to pull relevant images and provision necessary pipeline components.
**Result:** The pipeline runs. You can see the results in the logs.
### What's Next?
For detailed information about setting up your own pipeline for your repository, [configure a version control provider]({{<baseurl>}}/rancher/v2.x/en/project-admin/pipelines), [enable a repository](#configure-repositories) and finally [configure your pipeline]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/#pipeline-configuration).
@@ -1,72 +0,0 @@
---
title: Example YAML File
weight: 501
---
Pipelines can be configured either through the UI or using a YAML file in the repository, i.e. `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`.
In the [pipeline configuration reference]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/config), we provide examples of how to configure each feature using the Rancher UI or using YAML configuration.
Below is a full example `rancher-pipeline.yml` for those who want to jump right in.
```yaml
# example
stages:
- name: Build something
# Conditions for stages
when:
branch: master
event: [ push, pull_request ]
# Multiple steps run concurrently
steps:
- runScriptConfig:
image: busybox
shellScript: echo ${FIRST_KEY} && echo ${ALIAS_ENV}
# Set environment variables in container for the step
env:
FIRST_KEY: VALUE
SECOND_KEY: VALUE2
# Set environment variables from project secrets
envFrom:
- sourceName: my-secret
sourceKey: secret-key
targetKey: ALIAS_ENV
- runScriptConfig:
image: busybox
shellScript: date -R
# Conditions for steps
when:
branch: [ master, dev ]
event: push
- name: Publish my image
steps:
- publishImageConfig:
dockerfilePath: ./Dockerfile
buildContext: .
tag: rancher/rancher:v2.0.0
# Optionally push to remote registry
pushRemote: true
registry: reg.example.com
- name: Deploy some workloads
steps:
- applyYamlConfig:
path: ./deployment.yaml
# branch conditions for the pipeline
branch:
include: [ master, feature/*]
exclude: [ dev ]
# timeout in minutes
timeout: 30
notification:
recipients:
- # Recipient
recipient: "#mychannel"
# ID of Notifier
notifier: "c-wdcsr:n-c9pg7"
- recipient: "test@example.com"
notifier: "c-wdcsr:n-lkrhd"
# Select which statuses you want the notification to be sent
condition: ["Failed", "Success", "Changed"]
# Ability to override the default message (Optional)
message: "my-message"
```
@@ -1,103 +0,0 @@
---
title: Configuring Persistent Data for Pipeline Components
weight: 600
---
The internal [Docker registry](#how-pipelines-work) and the [Minio](#how-pipelines-work) workloads use ephemeral volumes by default. This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes.
This section assumes that you understand how persistent storage works in Kubernetes. For more information, refer to the section on [how storage works.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/how-storage-works/)
>**Prerequisites (for both parts A and B):**
>
>[Persistent volumes]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/) must be available for the cluster.
### A. Configuring Persistent Data for Docker Registry
1. From the project that you're configuring a pipeline for, click **Resources > Workloads.** In versions prior to v2.3.0, select the **Workloads** tab.
1. Find the `docker-registry` workload and select **&#8942; > Edit**.
1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:
- **Add Volume > Add a new persistent volume (claim)**
- **Add Volume > Use an existing persistent volume (claim)**
1. Complete the form that displays to choose a persistent volume for the internal Docker registry.
{{% tabs %}}
{{% tab "Add a new persistent volume" %}}
<br/>
1. Enter a **Name** for the volume claim.
1. Select a volume claim **Source**:
- If you select **Use a Storage Class to provision a new persistent volume**, select a storage class and enter a **Capacity**.
- If you select **Use an existing persistent volume**, choose a **Persistent Volume** from the drop-down.
1. From the **Customize** section, choose the read/write access for the volume.
1. Click **Define**.
{{% /tab %}}
{{% tab "Use an existing persistent volume" %}}
<br/>
1. Enter a **Name** for the volume claim.
1. Choose a **Persistent Volume Claim** from the drop-down.
1. From the **Customize** section, choose the read/write access for the volume.
1. Click **Define**.
{{% /tab %}}
{{% /tabs %}}
1. From the **Mount Point** field, enter `/var/lib/registry`, which is the data storage path inside the Docker registry container.
1. Click **Upgrade**.
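Under the hood, the form above results in an ordinary persistent volume claim that is mounted into the registry workload at `/var/lib/registry`. A minimal sketch of such a claim, assuming a storage class named `my-storage-class` exists in the cluster (the claim name and size are illustrative):
```yaml
# illustrative only: a claim comparable to what the UI form creates
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-storage-class
  resources:
    requests:
      storage: 10Gi
```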
### B. Configuring Persistent Data for Minio
1. From the project view, click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) Find the `minio` workload and select **&#8942; > Edit**.
1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:
- **Add Volume > Add a new persistent volume (claim)**
- **Add Volume > Use an existing persistent volume (claim)**
1. Complete the form that displays to choose a persistent volume for Minio.
{{% tabs %}}
{{% tab "Add a new persistent volume" %}}
<br/>
1. Enter a **Name** for the volume claim.
1. Select a volume claim **Source**:
- If you select **Use a Storage Class to provision a new persistent volume**, select a storage class and enter a **Capacity**.
- If you select **Use an existing persistent volume**, choose a **Persistent Volume** from the drop-down.
1. From the **Customize** section, choose the read/write access for the volume.
1. Click **Define**.
{{% /tab %}}
{{% tab "Use an existing persistent volume" %}}
<br/>
1. Enter a **Name** for the volume claim.
1. Choose a **Persistent Volume Claim** from the drop-down.
1. From the **Customize** section, choose the read/write access for the volume.
1. Click **Define**.
{{% /tab %}}
{{% /tabs %}}
1. From the **Mount Point** field, enter `/data`, which is the data storage path inside the Minio container.
1. Click **Upgrade**.
**Result:** Persistent storage is configured for your pipeline components.
@@ -1,30 +0,0 @@
---
title: Adding a Pod Security Policy
weight: 80
---
> **Prerequisite:** The options below are available only for clusters that are [launched using RKE.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/)
When your cluster is running pods with security-sensitive configurations, assign it a [pod security policy]({{<baseurl>}}/rancher/v2.x/en/admin-settings/pod-security-policies/), which is a set of rules that monitors the conditions and settings in your pods. If a pod doesn't meet the rules specified in your policy, the policy stops it from running.
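For illustration, a pod security policy is itself a Kubernetes resource. A minimal, restrictive sketch might look like the following (the name and rules shown are illustrative and are not one of Rancher's built-in policies):
```yaml
# illustrative PodSecurityPolicy: disallows privileged pods and
# requires containers to run as a non-root user
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example-restricted
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```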
You can assign a pod security policy when you provision a cluster. However, if you need to relax or restrict security for your pods later, you can update the policy while editing your cluster.
1. From the **Global** view, find the cluster to which you want to apply a pod security policy. Select **&#8942; > Edit**.
2. Expand **Cluster Options**.
3. From **Pod Security Policy Support**, select **Enabled**.
>**Note:** This option is only available for clusters [provisioned by RKE]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/).
4. From the **Default Pod Security Policy** drop-down, select the policy you want to apply to the cluster.
Rancher ships with [policies]({{<baseurl>}}/rancher/v2.x/en/admin-settings/pod-security-policies/#default-pod-security-policies) of `restricted` and `unrestricted`, although you can [create custom policies]({{<baseurl>}}/rancher/v2.x/en/admin-settings/pod-security-policies/#default-pod-security-policies) as well.
5. Click **Save**.
**Result:** The pod security policy is applied to the cluster and any projects within the cluster.
>**Note:** Workloads already running before assignment of a pod security policy are grandfathered in. Even if they don't meet your pod security policy, workloads running before assignment of the policy continue to run.
>
>To check if a running workload passes your pod security policy, clone or upgrade it.
@@ -1,41 +0,0 @@
---
title: Projects
weight: 2500
---
_Projects_ are objects introduced in Rancher that help organize namespaces in your Kubernetes cluster. You can use projects to create multi-tenant clusters, which allows a group of users to share the same underlying resources without interacting with each other's applications.
In terms of hierarchy:
- Clusters contain projects
- Projects contain namespaces
Within Rancher, projects allow you to manage multiple namespaces as a single entity. In native Kubernetes, which does not include projects, features like role-based access rights or cluster resources are assigned to individual namespaces. In clusters where multiple namespaces require the same set of access rights, assigning these rights to each individual namespace can become tedious. Even though all namespaces require the same rights, there's no way to apply those rights to all of your namespaces in a single action. You'd have to repetitively assign these rights to each namespace!
Rancher projects resolve this issue by allowing you to apply resources and access rights at the project level. Each namespace in the project then inherits these resources and policies, so you only have to assign them to the project once, rather than assigning them to each individual namespace.
You can use projects to perform actions like:
- [Assign users access to a group of namespaces]({{<baseurl>}}/rancher/v2.x/en/project-admin/project-members)
- Assign users [specific roles in a project]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles). A role can be owner, member, read-only, or [custom]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/)
- [Set resource quotas]({{<baseurl>}}/rancher/v2.x/en/project-admin/resource-quotas/)
- [Manage namespaces]({{<baseurl>}}/rancher/v2.x/en/project-admin/namespaces/)
- [Configure tools]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/)
- [Set up pipelines for continuous integration and deployment]({{<baseurl>}}/rancher/v2.x/en/project-admin/pipelines)
- [Configure pod security policies]({{<baseurl>}}/rancher/v2.x/en/project-admin/pod-security-policies)
### Authorization
Non-administrative users are only authorized for project access after an [administrator]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owner or member]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owner]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) adds them to the project's **Members** tab.
Whoever creates the project automatically becomes a [project owner]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles).
## Switching between Projects
To switch between projects, use the drop-down available in the navigation bar. Alternatively, you can switch between projects directly from the **Projects/Namespaces** view:
1. From the **Global** view, navigate to the cluster that contains the project you want to open.
1. Select **Projects/Namespaces** from the navigation bar.
1. Select the link for the project that you want to open.
@@ -1,68 +0,0 @@
---
title: Namespaces
weight: 2520
---
Within Rancher, you can further divide projects into different [namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), which are virtual clusters within a project backed by a physical cluster. Should you require another level of organization beyond projects and the `default` namespace, you can use multiple namespaces to isolate applications and resources.
Although you assign resources at the project level so that each namespace in the project can use them, you can override this inheritance by assigning resources explicitly to a namespace.
Resources that you can assign directly to namespaces include:
- [Workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/)
- [Load Balancers/Ingress]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/)
- [Service Discovery Records]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/service-discovery/)
- [Persistent Volume Claims]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/persistent-volume-claims/)
- [Certificates]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/certificates/)
- [ConfigMaps]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/configmaps/)
- [Registries]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/registries/)
- [Secrets]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/secrets/)
To manage permissions in a vanilla Kubernetes cluster, cluster admins configure role-based access policies for each namespace. With Rancher, user permissions are assigned on the project level instead, and permissions are automatically inherited by any namespace owned by the particular project.
> **Note:** If you create a namespace with `kubectl`, it may be unusable because `kubectl` doesn't require your new namespace to be scoped within a project that you have access to. If your permissions are restricted to the project level, it is better to [create a namespace through Rancher]({{<baseurl>}}/rancher/v2.x/en/project-admin/namespaces/#creating-namespaces) to ensure that you will have permission to access the namespace.
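For context, when Rancher creates a namespace in a project, it associates the namespace with the owning project. In Rancher 2.x this is typically expressed as an annotation on the namespace, roughly as in the sketch below (the cluster and project IDs are illustrative); a namespace created directly with `kubectl` lacks this association, so it is not governed by any project's access rights.
```yaml
# illustrative only: how a Rancher-managed namespace is tied to a project
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  annotations:
    field.cattle.io/projectId: c-abcde:p-vwxyz
```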
### Creating Namespaces
Create a new namespace to isolate apps and resources in a project.
>**Tip:** When working with project resources that you can assign to a namespace (i.e., [workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/), [certificates]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/certificates/), [ConfigMaps]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/configmaps), etc.) you can create a namespace on the fly.
1. From the **Global** view, open the project where you want to create a namespace.
>**Tip:** As a best practice, we recommend creating namespaces from the project level. However, cluster owners and members can create them from the cluster level as well.
1. From the main menu, select **Namespaces**. Then click **Add Namespace**.
1. **Optional:** If your project has [Resource Quotas]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas) in effect, you can override the default resource **Limits** (which places a cap on the resources that the namespace can consume).
1. Enter a **Name** and then click **Create**.
**Result:** Your namespace is added to the project. You can begin assigning cluster resources to the namespace.
### Moving Namespaces to Another Project
Cluster admins and members may occasionally need to move a namespace to another project, such as when you want a different team to start using the application.
1. From the **Global** view, open the cluster that contains the namespace you want to move.
1. From the main menu, select **Projects/Namespaces**.
1. Select the namespace(s) that you want to move to a different project. Then click **Move**. You can move multiple namespaces at once.
>**Notes:**
>
>- Don't move the namespaces in the `System` project. Moving these namespaces can adversely affect cluster networking.
>- You cannot move a namespace into a project that already has a [resource quota]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas/) configured.
>- If you move a namespace from a project that has a quota set to a project with no quota set, the quota is removed from the namespace.
1. Choose a new project for the new namespace and then click **Move**. Alternatively, you can remove the namespace from all projects by selecting **None**.
**Result:** Your namespace is moved to a different project (or is unattached from all projects). If any project resources are attached to the namespace, the namespace releases them and then attaches resources from the new project.
### Editing Namespace Resource Quotas
You can always override the namespace default limit to provide a specific namespace with access to more (or less) project resources.
For more information, see how to [edit namespace resource quotas]({{<baseurl>}}/rancher/v2.x/en/project-admin//resource-quotas/override-namespace-default/#editing-namespace-resource-quotas).
@@ -1,41 +0,0 @@
---
title: Project Applications
weight: 2525
---
> This section is under construction.
Rancher contains a variety of tools that aren't included in Kubernetes to assist in your DevOps operations. Rancher can integrate with external services to help your clusters run more efficiently. Tools are divided into the following categories:
<!-- TOC -->
- [Notifiers and Alerts](#notifiers-and-alerts)
- [Logging](#logging)
- [Monitoring](#monitoring)
<!-- /TOC -->
## Notifiers and Alerts
Notifiers and alerts are two features that work together to inform you of events in the Rancher system.
[Notifiers]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/notifiers) are services that inform you of alert events. You can configure notifiers to send alert notifications to staff best suited to take corrective action. Notifications can be sent with Slack, email, PagerDuty, WeChat, and webhooks.
[Alerts]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/alerts) are rules that trigger those notifications. Before you can receive alerts, you must configure one or more notifiers in Rancher. The scope for alerts can be set at either the cluster or project level.
## Logging
Logging is helpful because it allows you to:
- Capture and analyze the state of your cluster
- Look for trends in your environment
- Save your logs to a safe location outside of your cluster
- Stay informed of events like a container crashing, a pod eviction, or a node dying
- More easily debug and troubleshoot problems
Rancher can integrate with Elasticsearch, Splunk, Kafka, syslog, and Fluentd.
For details, refer to the [logging section.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging)
## Monitoring
Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with [Prometheus](https://prometheus.io/), a leading open-source monitoring solution. For details, refer to the [monitoring section.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring)
@@ -1,184 +0,0 @@
---
title: Project Alerts
weight: 2
---
To keep your clusters and applications healthy and driving your organizational productivity forward, you need to stay informed of events occurring in your clusters and projects, both planned and unplanned. When an event occurs, your alert is triggered, and you are sent a notification. You can then, if necessary, follow up with corrective actions.
Notifiers and alerts are built on top of the [Prometheus Alertmanager](https://prometheus.io/docs/alerting/alertmanager/). Leveraging these tools, Rancher can notify [cluster owners]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) and [project owners]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) of events they need to address.
Before you can receive alerts, one or more [notifiers]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/notifiers) must be configured at the cluster level.
Only [administrators]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owners or members]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owners]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can manage project alerts.
This section covers the following topics:
- [Alerts scope](#alerts-scope)
- [Default project-level alerts](#default-project-level-alerts)
- [Adding project alerts](#adding-project-alerts)
- [Managing project alerts](#managing-project-alerts)
## Alerts Scope
The scope for alerts can be set at either the [cluster level]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/alerts/) or project level.
At the project level, Rancher monitors specific deployments and sends alerts for:
* Deployment availability
* Workloads status
* Pod status
* A Prometheus expression crossing a configured threshold
## Default Project-level Alerts
When you enable monitoring for the project, some project-level alerts are provided. You can receive these alerts if a [notifier]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/notifiers) for them is configured at the cluster level.
| Alert | Explanation |
|-------|-------------|
| Less than half workload available | A critical alert is triggered if less than half of a workload is available, based on workloads where the key is `app` and the value is `workload`. |
| Memory usage close to the quota | A warning alert is triggered if the workload's memory usage exceeds the memory resource quota that is set for the workload. You can see the memory limit in the Rancher UI if you go to the workload under the **Security & Host Config** tab. |
For information on other default alerts, refer to the section on [cluster-level alerts.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/alerts/default-alerts)
## Adding Project Alerts
>**Prerequisite:** Before you can receive project alerts, you must add a notifier.
1. From the **Global** view, navigate to the project that you want to configure project alerts for. Select **Tools > Alerts**. In versions prior to v2.2.0, you can choose **Resources > Alerts**.
1. Click **Add Alert Group**.
1. Enter a **Name** for the alert group that describes its purpose. You can group alert rules that serve a common purpose.
1. Based on the type of alert you want to create, complete one of the instruction subsets below.
{{% accordion id="pod" label="Pod Alerts" %}}
This alert type monitors for the status of a specific pod.
1. Select the **Pod** option, and then select a pod from the drop-down.
1. Select a pod status that triggers an alert:
- **Not Running**
- **Not Scheduled**
- **Restarted `<x>` times within the last `<x>` Minutes**
1. Select the urgency level of the alert. The options are:
- **Critical**: Most urgent
- **Warning**: Normal urgency
- **Info**: Least urgent
Select the urgency level of the alert based on pod state. For example, select **Info** for a Job pod that stops running after the job finishes. However, if an important pod isn't scheduled, it may affect operations, so choose **Critical**.
1. Configure advanced options. By default, the below options will apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule.
- **Group Wait Time**: How long to wait to buffer alerts of the same group before sending the initial notification. Defaults to 30 seconds.
- **Group Interval Time**: How long to wait before sending a notification about new alerts added to a group that contains already fired alerts. Defaults to 30 seconds.
- **Repeat Wait Time**: How long to wait before re-sending a notification that has already been sent. Defaults to 1 hour.
{{% /accordion %}}
{{% accordion id="workload" label="Workload Alerts" %}}
This alert type monitors for the availability of a workload.
1. Choose the **Workload** option. Then choose a workload from the drop-down.
1. Choose an availability percentage using the slider. The alert is triggered when the workload's availability on your cluster nodes drops below the set percentage.
1. Select the urgency level of the alert.
- **Critical**: Most urgent
- **Warning**: Normal urgency
- **Info**: Least urgent
Select the urgency level of the alert based on the percentage you choose and the importance of the workload.
1. Configure advanced options. By default, the below options will apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule.
- **Group Wait Time**: How long to wait to buffer alerts of the same group before sending the initial notification. Defaults to 30 seconds.
- **Group Interval Time**: How long to wait before sending a notification about new alerts added to a group that contains already fired alerts. Defaults to 30 seconds.
- **Repeat Wait Time**: How long to wait before re-sending a notification that has already been sent. Defaults to 1 hour.
{{% /accordion %}}
{{% accordion id="workload-selector" label="Workload Selector Alerts" %}}
This alert type monitors for the availability of all workloads marked with tags that you've specified.
1. Select the **Workload Selector** option, and then click **Add Selector** to enter the key-value pair for a label. This label should be applied to one or more of your workloads. If the availability of a matching workload drops below your specifications, an alert is triggered.
1. Select the urgency level of the alert.
- **Critical**: Most urgent
- **Warning**: Normal urgency
- **Info**: Least urgent
Select the urgency level of the alert based on the percentage you choose and the importance of the workload.
1. Configure advanced options. By default, the below options will apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule.
- **Group Wait Time**: How long to wait to buffer alerts of the same group before sending the initial notification. Defaults to 30 seconds.
- **Group Interval Time**: How long to wait before sending a notification about new alerts added to a group that contains already fired alerts. Defaults to 30 seconds.
- **Repeat Wait Time**: How long to wait before re-sending a notification that has already been sent. Defaults to 1 hour.
{{% /accordion %}}
{{% accordion id="project-expression" label="Metric Expression Alerts" %}}
<br>
If you enable [project monitoring]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/#monitoring), this alert type monitors the result of a Prometheus expression query and alerts when it crosses a threshold.
1. Enter or select an **Expression**. The drop-down shows the metrics available from Prometheus, including:
- [**Container**](https://github.com/google/cadvisor)
- [**Kubernetes Resources**](https://github.com/kubernetes/kube-state-metrics)
- [**Customize**]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/#project-metrics)
- [**Project Level Grafana**](http://docs.grafana.org/administration/metrics/)
- **Project Level Prometheus**
1. Choose a comparison.
- **Equal**: Trigger the alert when the expression value is equal to the threshold.
- **Not Equal**: Trigger the alert when the expression value is not equal to the threshold.
- **Greater Than**: Trigger the alert when the expression value is greater than the threshold.
- **Less Than**: Trigger the alert when the expression value is less than the threshold.
- **Greater or Equal**: Trigger the alert when the expression value is greater than or equal to the threshold.
- **Less or Equal**: Trigger the alert when the expression value is less than or equal to the threshold.
1. Enter a **Threshold**. The alert is triggered when the value of the expression crosses the threshold.
1. Choose a **Comparison**.
1. Select a **Duration**. The alert is triggered when the expression value stays across the threshold for longer than the configured duration.
1. Select the urgency level of the alert.
- **Critical**: Most urgent
- **Warning**: Normal urgency
- **Info**: Least urgent
<br/>
<br/>
Select the urgency level of the alert based on its impact on operations. For example, an expression for container memory usage approaching the limit might warrant an urgency of **Info** when it rises above 60%, but an urgency of **Critical** when it rises above 95%.
1. Configure advanced options. By default, the below options will apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule.
- **Group Wait Time**: How long to wait to buffer alerts of the same group before sending the initial notification. Defaults to 30 seconds.
- **Group Interval Time**: How long to wait before sending a notification about new alerts added to a group that contains already fired alerts. Defaults to 30 seconds.
- **Repeat Wait Time**: How long to wait before re-sending a notification that has already been sent. Defaults to 1 hour.
<br>
{{% /accordion %}}
1. Continue adding more **Alert Rules** to the group.
1. Finally, choose the [notifiers]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) that send you alerts.
- You can set up multiple notifiers.
- You can change notifier recipients on the fly.
**Result:** Your alert is configured. A notification is sent when the alert is triggered.
## Managing Project Alerts
To manage project alerts, browse to the project whose alerts you want to manage. Then select **Tools > Alerts**. In versions prior to v2.2.0, you can choose **Resources > Alerts**. You can:
- Deactivate/Reactivate alerts
- Edit alert settings
- Delete unnecessary alerts
- Mute firing alerts
- Unmute muted alerts
@@ -1,19 +0,0 @@
---
title: Istio in Projects
weight: 1
---
Using Rancher, you can connect, secure, control, and observe services through integration with [Istio](https://istio.io/), a leading open-source service mesh solution. Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications.
This service mesh provides features that include but are not limited to the following:
- Traffic management features
- Enhanced monitoring and tracing
- Service discovery and routing
- Secure connections and service-to-service authentication with mutual TLS
- Load balancing
- Automatic retries, backoff, and circuit breaking
Istio needs to be set up by a Rancher administrator or cluster administrator before it can be used in a project for [comprehensive data visualizations,]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/#accessing-visualizations) traffic management, or any of its other features.
For information on how Istio is integrated with Rancher and how to set it up, refer to the [section about Istio.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio)
@@ -1,108 +0,0 @@
---
title: Project Logging
weight: 3
---
Rancher can integrate with a variety of popular logging services and tools that exist outside of your Kubernetes clusters.
For background information about how logging integrations work, refer to the [cluster administration section.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/#how-logging-integrations-work)
Rancher supports the following services:
- Elasticsearch
- Splunk
- Kafka
- Syslog
- Fluentd
>**Note:** You can only configure one logging service per cluster or per project.
Only [administrators]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owners or members]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owners]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can configure Rancher to send Kubernetes logs to a logging service.
## Requirements
The Docker daemon on each node in the cluster should be [configured](https://docs.docker.com/config/containers/logging/configure/) with the (default) log-driver: `json-file`. You can check the log-driver by running the following command:
```
$ docker info | grep 'Logging Driver'
Logging Driver: json-file
```
## Advantages
Setting up a logging service to collect logs from your cluster/project has several advantages:
- Logs errors and warnings in your Kubernetes infrastructure to a stream. The stream informs you of events like a container crashing, a pod eviction, or a node dying.
- Allows you to capture and analyze the state of your cluster and look for trends in your environment using the log stream.
- Helps you when troubleshooting or debugging.
- Saves your logs to a safe location outside of your cluster, so that you can still access them even if your cluster encounters issues.
## Logging Scope
You can configure logging at either cluster level or project level.
- [Cluster logging]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/) writes logs for every pod in the cluster, i.e. in all the projects. For [RKE clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters), it also writes logs for all the Kubernetes system components.
- Project logging writes logs for every pod in that particular project.
Logs that are sent to your logging service are from the following locations:
- Pod logs stored at `/var/log/containers`.
- Kubernetes system components logs stored at `/var/lib/rancher/rke/logs/`.
## Enabling Project Logging
1. From the **Global** view, navigate to the project that you want to configure project logging.
1. Select **Tools > Logging** in the navigation bar. In versions prior to v2.2.0, you can choose **Resources > Logging**.
1. Select a logging service and enter the configuration. Refer to the specific service for detailed configuration. Rancher supports the following services:
- [Elasticsearch]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/elasticsearch/)
- [Splunk]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/splunk/)
- [Kafka]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/kafka/)
- [Syslog]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/syslog/)
- [Fluentd]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/fluentd/)
1. (Optional) Instead of using the UI to configure the logging services, you can enter custom advanced configurations by clicking on **Edit as File**, which is located above the logging targets. This link is only visible after you select a logging service.
- With the file editor, enter the raw fluentd configuration for any logging service. Refer to the documentation for each logging service on how to set up the output configuration.
- [Elasticsearch Documentation](https://github.com/uken/fluent-plugin-elasticsearch)
- [Splunk Documentation](https://github.com/fluent/fluent-plugin-splunk)
- [Kafka Documentation](https://github.com/fluent/fluent-plugin-kafka)
- [Syslog Documentation](https://github.com/dlackty/fluent-plugin-remote_syslog)
- [Fluentd Documentation](https://docs.fluentd.org/v1.0/articles/out_forward)
- If the logging service is using TLS, you also need to complete the **SSL Configuration** form.
1. Provide the **Client Private Key** and **Client Certificate**. You can either copy and paste them or upload them by using the **Read from a file** button.
- You can use either a self-signed certificate or one provided by a certificate authority.
- You can generate a self-signed certificate using an openssl command. For example:
```
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com"
```
2. If you are using a self-signed certificate, provide the **CA Certificate PEM**.
1. (Optional) Complete the **Additional Logging Configuration** form.
1. **Optional:** Use the **Add Field** button to add custom log fields to your logging configuration. These fields are key value pairs (such as `foo=bar`) that you can use to filter the logs from another system.
1. Enter a **Flush Interval**. This value determines how often [Fluentd](https://www.fluentd.org/) flushes data to the logging server. Intervals are measured in seconds.
1. **Include System Log**. The logs from pods in the system project and from RKE components will be sent to the target. Uncheck this option to exclude the system logs.
1. Click **Test**. Rancher sends a test log to the service.
> **Note:** This button is replaced with _Dry Run_ if you are using the custom configuration editor. In this case, Rancher calls the fluentd dry run command to validate the configuration.
1. Click **Save**.
**Result:** Rancher is now configured to send logs to the selected service. Log into the logging service so that you can start viewing the logs.
## Related Links
[Logging Architecture](https://kubernetes.io/docs/concepts/cluster-administration/logging/)
@@ -1,81 +0,0 @@
---
title: Project Monitoring
weight: 4
---
Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with [Prometheus](https://prometheus.io/), a leading open-source monitoring solution.
> For more information about how Prometheus works, refer to the [cluster administration section.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#about-prometheus)
This section covers the following topics:
- [Monitoring scope](#monitoring-scope)
- [Permissions to configure project monitoring](#permissions-to-configure-project-monitoring)
- [Enabling project monitoring](#enabling-project-monitoring)
- [Project-level monitoring resource requirements](#project-level-monitoring-resource-requirements)
- [Project metrics](#project-metrics)
### Monitoring Scope
Using Prometheus, you can monitor Rancher at both the [cluster level]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) and project level. For each cluster and project that is enabled for monitoring, Rancher deploys a Prometheus server.
- [Cluster monitoring]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) allows you to view the health of your Kubernetes cluster. Prometheus collects metrics from the cluster components below, which you can view in graphs and charts.
- [Kubernetes control plane]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#kubernetes-components-metrics)
- [etcd database]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#etcd-metrics)
- [All nodes (including workers)]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#cluster-metrics)
- Project monitoring allows you to view the state of pods running in a given project. Prometheus collects metrics from the project's deployed HTTP and TCP/UDP workloads.
### Permissions to Configure Project Monitoring
Only [administrators]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owners or members]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owners]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can configure project level monitoring. Project members can only view monitoring metrics.
### Enabling Project Monitoring
> **Prerequisite:** Cluster monitoring must be [enabled.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/)
1. Go to the project where monitoring should be enabled. Note: When cluster monitoring is enabled, monitoring is also enabled by default in the **System** project.
1. Select **Tools > Monitoring** in the navigation bar.
1. Select **Enable** to show the [Prometheus configuration options]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/). Enter in your desired configuration options.
1. Click **Save**.
### Project-Level Monitoring Resource Requirements
Container  | CPU - Request | Mem - Request | CPU - Limit | Mem - Limit | Configurable
-----------|---------------|---------------|-------------|-------------|-------------
Prometheus | 750m          | 750Mi         | 1000m       | 1000Mi      | Yes
Grafana    | 100m          | 100Mi         | 200m        | 200Mi       | No
**Result:** A single application, `project-monitoring`, is added as an [application]({{<baseurl>}}/rancher/v2.x/en/catalog/apps/) to the project. After the application is `active`, you can start viewing [project metrics](#project-metrics) through the [Rancher dashboard]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#rancher-dashboard) or directly from [Grafana]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#grafana).
> The default username and password for the Grafana instance will be `admin/admin`. However, Grafana dashboards are served via the Rancher authentication proxy, so only users who are currently authenticated into the Rancher server have access to the Grafana dashboard.
### Project Metrics
[Workload metrics]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#workload-metrics) are available for the project if monitoring is enabled at the [cluster level]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) and at the [project level.](#enabling-project-monitoring)
You can monitor custom metrics from any [exporters.](https://prometheus.io/docs/instrumenting/exporters/) You can also expose some custom endpoints on deployments without needing to configure Prometheus for your project.
> **Example:**
> A [Redis](https://redis.io/) application is deployed in the namespace `redis-app` in the project `Datacenter`. It is monitored via [Redis exporter](https://github.com/oliver006/redis_exporter). After enabling project monitoring, you can edit the application to configure the **Advanced Options > Custom Metrics** section. Enter the `Container Port` and `Path` and select the `Protocol`.
To access a project-level Grafana instance,
1. From the **Global** view, navigate to a cluster that has monitoring enabled.
1. Go to a project that has monitoring enabled.
1. From the project view, click **Apps.** In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar.
1. Go to the `project-monitoring` application.
1. In the `project-monitoring` application, there are two `/index.html` links: one that leads to a Grafana instance and one that leads to a Prometheus instance. When you click the Grafana link, it will redirect you to a new webpage for Grafana, which shows metrics for the project.
1. You will be signed in to the Grafana instance automatically. The default username is `admin` and the default password is `admin`. For security, we recommend that you log out of Grafana, log back in with the `admin` password, and change your password.
**Result:** You are logged into Grafana. After logging in, you can view the preset Grafana dashboards, which are imported via the [Grafana provisioning mechanism](http://docs.grafana.org/administration/provisioning/#dashboards), so you cannot modify them directly. For now, if you want to configure your own dashboards, clone the original and modify the new copy.
@@ -1,42 +0,0 @@
---
title: Project Resource Quotas
weight: 3
---
In situations where several teams share a cluster, one team may overconsume the resources available: CPU, memory, storage, services, Kubernetes objects like pods or secrets, and so on. To prevent this overconsumption, you can apply a _resource quota_, which is a Rancher feature that limits the resources available to a project or namespace.
This page is a how-to guide for creating resource quotas in existing projects.
Resource quotas can also be set when a new project is created. For details, refer to the section on [creating new projects.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/projects-and-namespaces/#creating-projects)
> Resource quotas in Rancher include the same functionality as the [native version of Kubernetes](https://kubernetes.io/docs/concepts/policy/resource-quotas/). However, in Rancher, resource quotas have been extended so that you can apply them to [projects]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects). For details on how resource quotas work with projects in Rancher, refer to [this page.](./quotas-for-projects)
### Applying Resource Quotas to Existing Projects
Edit [resource quotas]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas) when:
- You want to limit the resources that a project and its namespaces can use.
- You want to scale the resources available to a project up or down when a resource quota is already in effect.
1. From the **Global** view, open the cluster containing the project to which you want to apply a resource quota.
1. From the main menu, select **Projects/Namespaces**.
1. Find the project that you want to add a resource quota to. From that project, select **&#8942; > Edit**.
1. Expand **Resource Quotas** and click **Add Quota**. Alternatively, you can edit existing quotas.
1. Select a [Resource Type]({{<baseurl>}}/rancher/v2.x/en/project-admin/resource-quotas/#resource-quota-types).
1. Enter values for the **Project Limit** and the **Namespace Default Limit**.
| Field | Description |
| ----------------------- | -------------------------------------------------------------------------------------------------------- |
| Project Limit | The overall resource limit for the project. |
| Namespace Default Limit | The default resource limit available for each namespace. This limit is propagated to each namespace in the project. The combined limit of all project namespaces shouldn't exceed the project limit. |
1. **Optional:** Add more quotas.
1. Click **Create**.
**Result:** The resource quota is applied to your project and namespaces. When you add more namespaces in the future, Rancher validates that the project can accommodate the namespace. If the project can't allocate the resources, Rancher won't let you save your changes.
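As the document notes elsewhere, the namespace default limit is enforced through native Kubernetes `ResourceQuota` objects that Rancher propagates to each namespace in the project. A minimal sketch of such a generated quota, where the name, namespace, and values are illustrative:
```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-quota        # illustrative name
  namespace: my-namespace    # one namespace in the project
spec:
  hard:
    limits.cpu: 500m
    limits.memory: 512Mi
    requests.cpu: 250m
    requests.memory: 256Mi
```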
@@ -1,39 +0,0 @@
---
title: Setting Container Default Resource Limits
weight: 3
---
When setting resource quotas, if you set anything related to CPU or Memory (i.e. limits or reservations) on a project / namespace, all containers will require a respective CPU or Memory field set during creation. See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/policy/resource-quotas/#requests-vs-limits) for more details on why this is required.
To avoid setting these limits on each and every container during workload creation, a default container resource limit can be specified on the namespace.
### Editing the Container Default Resource Limit
Edit [container default resource limit]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas/#setting-container-default-resource-limit) when:
- You have a CPU or Memory resource quota set on a project, and want to supply the corresponding default values for a container.
- You want to edit the default container resource limit.
1. From the **Global** view, open the cluster containing the project for which you want to edit the container default resource limit.
1. From the main menu, select **Projects/Namespaces**.
1. Find the project for which you want to edit the container default resource limit. From that project, select **&#8942; > Edit**.
1. Expand **Container Default Resource Limit** and edit the values.
### Resource Limit Propagation
When the default container resource limit is set at a project level, the parameter will be propagated to any namespace created in the project after the limit has been set. For any existing namespace in a project, this limit will not be automatically propagated. You will need to manually set the default container resource limit for any existing namespaces in the project in order for it to be used when creating any containers.
> **Note:** Prior to v2.2.0, you could not launch catalog applications that did not have any limits set. With v2.2.0, you can set a default container resource limit on a project and launch any catalog applications.
Once a container default resource limit is configured on a namespace, the default will be pre-populated for any containers created in that namespace. These limits/reservations can always be overridden during workload creation.
### Container Resource Quota Types
The following resource limits can be configured:
| Resource Type | Description |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| CPU Limit | The maximum amount of CPU (in [millicores](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu)) allocated to the container.|
| CPU Reservation | The minimum amount of CPU (in millicores) guaranteed to the container. |
| Memory Limit | The maximum amount of memory (in bytes) allocated to the container. |
| Memory Reservation | The minimum amount of memory (in bytes) guaranteed to the container. |
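Kubernetes applies default container requests and limits through `LimitRange` objects, so a namespace-level default like the one described above can be expressed roughly as the following sketch (the name, namespace, and values are illustrative):
```
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults    # illustrative name
  namespace: my-namespace
spec:
  limits:
    - type: Container
      default:                # default limits applied to containers
        cpu: 200m
        memory: 256Mi
      defaultRequest:         # default requests (reservations)
        cpu: 100m
        memory: 128Mi
```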
@@ -1,34 +0,0 @@
---
title: Overriding the Default Limit for a Namespace
weight: 2
---
Although the **Namespace Default Limit** propagates from the project to each namespace when created, in some cases, you may need to increase (or decrease) the quotas for a specific namespace. In this situation, you can override the default limits by editing the namespace.
In the diagram below, the Rancher administrator has a resource quota in effect for their project. However, the administrator wants to override the namespace limits for `Namespace 3` so that it has more resources available. Therefore, the administrator [raises the namespace limits]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#editing-namespace-resource-quotas) for `Namespace 3` so that the namespace can access more resources.
<sup>Namespace Default Limit Override</sup>
![Namespace Default Limit Override]({{<baseurl>}}/img/rancher/rancher-resource-quota-override.svg)
How to: [Editing Namespace Resource Quotas]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#editing-namespace-resource-quotas)
### Editing Namespace Resource Quotas
If there is a [resource quota]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas) configured for a project, you can override the namespace default limit to provide a specific namespace with access to more (or less) project resources.
1. From the **Global** view, open the cluster that contains the namespace for which you want to edit the resource quota.
1. From the main menu, select **Projects/Namespaces**.
1. Find the namespace for which you want to edit the resource quota. Select **&#8942; > Edit**.
1. Edit the Resource Quota **Limits**. These limits determine the resources available to the namespace. The limits must be set within the configured project limits.
For more information about each **Resource Type**, see [Resource Quota Types]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas/#resource-quota-types).
>**Note:**
>
>- If a resource quota is not configured for the project, these options will not be available.
>- If you enter limits that exceed the configured project limits, Rancher will not let you save your edits.
**Result:** Your override is applied to the namespace's resource quota.
@@ -1,24 +0,0 @@
---
title: Resource Quota Type Reference
weight: 4
---
When you create a resource quota, you are configuring the pool of resources available to the project. You can set the following resource limits for the following resource types.
| Resource Type | Description |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| CPU Limit* | The maximum amount of CPU (in [millicores](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu)) allocated to the project/namespace.<sup>1</sup> |
| CPU Reservation* | The minimum amount of CPU (in millicores) guaranteed to the project/namespace.<sup>1</sup> |
| Memory Limit* | The maximum amount of memory (in bytes) allocated to the project/namespace.<sup>1</sup> |
| Memory Reservation* | The minimum amount of memory (in bytes) guaranteed to the project/namespace.<sup>1</sup> |
| Storage Reservation | The minimum amount of storage (in gigabytes) guaranteed to the project/namespace. |
| Services Load Balancers | The maximum number of load balancer services that can exist in the project/namespace. |
| Services Node Ports | The maximum number of node port services that can exist in the project/namespace. |
| Pods | The maximum number of pods that can exist in the project/namespace in a non-terminal state (i.e., pods for which `.status.phase in (Failed, Succeeded)` is not true). |
| Services | The maximum number of services that can exist in the project/namespace. |
| ConfigMaps | The maximum number of ConfigMaps that can exist in the project/namespace. |
| Persistent Volume Claims | The maximum number of persistent volume claims that can exist in the project/namespace. |
| Replication Controllers | The maximum number of replication controllers that can exist in the project/namespace. |
| Secrets | The maximum number of secrets that can exist in the project/namespace. |
>**<sup>*</sup>** When setting resource quotas, if you set anything related to CPU or Memory (i.e. limits or reservations) on a project / namespace, all containers will require a respective CPU or Memory field set during creation. As of v2.2.0, a [container default resource limit](#setting-container-default-resource-limit) can be set at the same time to avoid the need to explicitly set these limits for every workload. See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/policy/resource-quotas/#requests-vs-limits) for more details on why this is required.
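The resource types above map to fields of the native Kubernetes `ResourceQuota` object. As an illustration, a quota restricting object counts might look like the following sketch (the name, namespace, and values are illustrative):
```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts         # illustrative name
  namespace: my-namespace
spec:
  hard:
    pods: "20"
    services: "10"
    services.loadbalancers: "2"
    services.nodeports: "5"
    configmaps: "30"
    secrets: "30"
    persistentvolumeclaims: "15"
    replicationcontrollers: "10"
```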
@@ -1,41 +0,0 @@
---
title: How Resource Quotas Work in Rancher Projects
weight: 1
---
Resource quotas in Rancher include the same functionality as the [native version of Kubernetes](https://kubernetes.io/docs/concepts/policy/resource-quotas/). However, in Rancher, resource quotas have been extended so that you can apply them to [projects]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects).
In a standard Kubernetes deployment, resource quotas are applied to individual namespaces. However, you cannot apply the quota to your namespaces simultaneously with a single action. Instead, the resource quota must be applied multiple times.
In the following diagram, a Kubernetes administrator is trying to enforce a resource quota without Rancher. The administrator wants to apply a resource quota that sets the same CPU and memory limit to every namespace in the cluster (`Namespace 1-4`). However, in the base version of Kubernetes, each namespace requires a unique resource quota. The administrator has to create four different resource quotas that have the same specs configured (`Resource Quota 1-4`) and apply them individually.
<sup>Base Kubernetes: Unique Resource Quotas Being Applied to Each Namespace</sup>
![Native Kubernetes Resource Quota Implementation]({{<baseurl>}}/img/rancher/kubernetes-resource-quota.svg)
Resource quotas are a little different in Rancher. In Rancher, you apply a resource quota to the [project]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects), and then the quota propagates to each namespace, whereafter Kubernetes enforces your limits using the native version of resource quotas. If you want to change the quota for a specific namespace, you can [override it](#overriding-the-default-limit-for-a-namespace).
The resource quota includes two limits, which you set while creating or editing a project:
<a id="project-limits"></a>
- **Project Limits:**
This set of values configures an overall resource limit for the project. If you try to add a new namespace to the project, Rancher uses the limits you've set to validate that the project has enough resources to accommodate the namespace. In other words, if you try to move a namespace into a project near its resource quota, Rancher blocks you from moving the namespace.
- **Namespace Default Limits:**
This value is the default resource limit available for each namespace. When the resource quota is created at the project level, this limit is automatically propagated to each namespace in the project. Each namespace is bound to this default limit unless you [override it](#namespace-default-limit-overrides).
In the following diagram, a Rancher administrator wants to apply a resource quota that sets the same CPU and memory limit for every namespace in their project (`Namespace 1-4`). However, in Rancher, the administrator can set a resource quota for the project (`Project Resource Quota`) rather than individual namespaces. This quota includes resource limits for both the entire project (`Project Limit`) and individual namespaces (`Namespace Default Limit`). Rancher then propagates the `Namespace Default Limit` quotas to each namespace (`Namespace Resource Quota`) when created.
<sup>Rancher: Resource Quotas Propagating to Each Namespace</sup>
![Rancher Resource Quota Implementation]({{<baseurl>}}/img/rancher/rancher-resource-quota.svg)
Let's highlight some more nuanced functionality. If a quota is deleted at the project level, it will also be removed from all namespaces contained within that project, despite any overrides that may exist. Further, updating an existing namespace default limit for a quota at the project level will not result in that value being propagated to existing namespaces in the project; the updated value will only be applied to newly created namespaces in that project. To update a namespace default limit for existing namespaces you can delete and subsequently recreate the quota at the project level with the new default value. This will result in the new default value being applied to all existing namespaces in the project.
The following table explains the key differences between the two quota types.
| Rancher Resource Quotas | Kubernetes Resource Quotas |
| ---------------------------------------------------------- | -------------------------------------------------------- |
| Applies to projects and namespaces. | Applies to namespaces only. |
| Creates resource pool for all namespaces in project. | Applies static resource limits to individual namespaces. |
| Applies resource quotas to namespaces through propagation. | Applies only to the assigned namespace. |
@@ -1,116 +0,0 @@
---
title: Kubernetes Registry and Docker Registry
description: Learn about the Docker registry and Kubernetes registry, their use cases and how to use a private registry with the Rancher UI
weight: 6
---
Registries are Kubernetes secrets containing credentials used to authenticate with [private Docker registries](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/).
The word "registry" can mean two things, depending on whether it is used to refer to a Docker or Kubernetes registry:
- A **Docker registry** contains Docker images that you can pull in order to use them in your deployment. The registry is a stateless, scalable server side application that stores and lets you distribute Docker images.
- The **Kubernetes registry** is an image pull secret that your deployment uses to authenticate with a Docker registry.
Deployments use the Kubernetes registry secret to authenticate with a private Docker registry and then pull a Docker image hosted on it.
Currently, deployments pull the private registry credentials automatically only if the workload is created in the Rancher UI and not when it is created via kubectl.
# Creating a Registry
>**Prerequisites:** You must have a [private registry](https://docs.docker.com/registry/deploying/) available to use.
1. From the **Global** view, select the project containing the namespace(s) where you want to add a registry.
1. From the main menu, click **Resources > Secrets > Registry Credentials**. (For Rancher prior to v2.3, click **Resources > Registries**.)
1. Click **Add Registry.**
1. Enter a **Name** for the registry.
>**Note:** Kubernetes classifies secrets, certificates, and registries all as [secrets](https://kubernetes.io/docs/concepts/configuration/secret/), and no two secrets in a project or namespace can have duplicate names. Therefore, to prevent conflicts, your registry must have a unique name among all secrets within your workspace.
1. Select a **Scope** for the registry. You can either make the registry available for the entire project or a single [namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces).
1. Select the website that hosts your private registry. Then enter credentials that authenticate with the registry. For example, if you use DockerHub, provide your DockerHub username and password.
1. Click **Save**.
**Result:**
- Your secret is added to the project or namespace, depending on the scope you chose.
- You can view the secret in the Rancher UI from the **Resources > Registries** view.
- Any workload that you create in the Rancher UI will have the credentials to access the registry if the workload is within the registry's scope.
# Using a Private Registry
You can deploy a workload with an image from a private registry through the Rancher UI, or with `kubectl`.
### Using the Private Registry with the Rancher UI
To deploy a workload with an image from your private registry,
1. Go to the project view.
1. Click **Resources > Workloads.** In versions prior to v2.3.0, go to the **Workloads** tab.
1. Click **Deploy.**
1. Enter a unique name for the workload and choose a namespace.
1. In the **Docker Image** field, enter the URL of the path to the Docker image in your private registry. For example, if your private registry is on Quay.io, you could use `quay.io/<Quay profile name>/<Image name>`.
1. Click **Launch.**
**Result:** Your deployment should launch, authenticate using the private registry credentials you added in the Rancher UI, and pull the Docker image that you specified.
### Using the Private Registry with kubectl
When you create the workload using `kubectl`, you need to configure the pod so that its YAML has the path to the image in the private registry. You also have to create and reference the registry secret because the pod only automatically gets access to the private registry credentials if it is created in the Rancher UI.
The secret has to be created in the same namespace where the workload gets deployed.
Below is an example `pod.yml` for a workload that uses an image from a private registry. In this example, the pod uses an image from Quay.io, and the .yml specifies the path to the image. The pod authenticates with the registry using credentials stored in a Kubernetes secret called `testquay`, which is specified in `spec.imagePullSecrets` in the `name` field:
```
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: quay.io/<Quay profile name>/<image name>
  imagePullSecrets:
  - name: testquay
```
In this example, the secret named `testquay` is in the default namespace.
You can use `kubectl` to create the secret with the private registry credentials. This command creates the secret named `testquay`:
```
kubectl create secret docker-registry testquay \
--docker-server=quay.io \
--docker-username=<Profile name> \
--docker-password=<password>
```
To see how the secret is stored in Kubernetes, you can use this command:
```
kubectl get secret testquay --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
```
The result looks like this:
```
{"auths":{"quay.io":{"username":"<Profile name>","password":"<password>","auth":"c291bXlhbGo6dGVzdGFiYzEyMw=="}}}
```
After the workload is deployed, you can check if the image was pulled successfully:
```
kubectl get events
```
The result should look like this:
```
14s Normal Scheduled Pod Successfully assigned default/private-reg2 to minikube
11s Normal Pulling Pod pulling image "quay.io/<Profile name>/<image name>"
10s Normal Pulled Pod Successfully pulled image "quay.io/<Profile name>/<image name>"
```
For more information, refer to the Kubernetes documentation on [creating a pod that uses your secret.](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret)
@@ -1,44 +0,0 @@
---
title: Secrets
weight: 4
---
[Secrets](https://kubernetes.io/docs/concepts/configuration/secret/#overview-of-secrets) store sensitive data like passwords, tokens, or keys. They may contain one or more key value pairs.
> This page is about secrets in general. For details on setting up a private registry, refer to the section on [registries.]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/registries)
When configuring a workload, you'll be able to choose which secrets to include. Like config maps, secrets can be referenced by workloads as either an environment variable or a volume mount.
Mounted secrets will be updated automatically unless they are mounted as subpath volumes. For details on how updated secrets are propagated, refer to the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/configuration/secret/#mounted-secrets-are-updated-automatically)
# Creating Secrets
When creating a secret, you can make it available for any deployment within a project, or you can limit it to a single namespace.
1. From the **Global** view, select the project containing the namespace(s) where you want to add a secret.
2. From the main menu, select **Resources > Secrets**. Click **Add Secret**.
3. Enter a **Name** for the secret.
>**Note:** Kubernetes classifies secrets, certificates, and registries all as [secrets](https://kubernetes.io/docs/concepts/configuration/secret/), and no two secrets in a project or namespace can have duplicate names. Therefore, to prevent conflicts, your secret must have a unique name among all secrets within your workspace.
4. Select a **Scope** for the secret. You can either make the secret available for the entire project or a single [namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces).
5. From **Secret Values**, click **Add Secret Value** to add a key value pair. Add as many values as you need.
>**Tip:** You can add multiple key value pairs to the secret by copying and pasting.
>
> {{< img "/img/rancher/bulk-key-values.gif" "Bulk Key Value Pair Copy/Paste">}}
6. Click **Save**.
**Result:** Your secret is added to the project or namespace, depending on the scope you chose. You can view the secret in the Rancher UI from the **Resources > Secrets** view.
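For reference, a secret created through this form is equivalent to a standard Kubernetes `Opaque` secret. A minimal sketch of such a manifest, where the name, namespace, and values are illustrative:
```
apiVersion: v1
kind: Secret
metadata:
  name: my-credentials        # illustrative name
  namespace: my-namespace
type: Opaque
stringData:                   # plain-text values; Kubernetes stores them base64-encoded
  username: admin
  password: s3cr3t
```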
# What's Next?
Now that you have a secret added to the project or namespace, you can add it to a workload that you deploy.
For more information on adding a secret to a workload, see [Deploying Workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/).
@@ -1,50 +0,0 @@
---
title: Service Discovery
weight: 2
---
For every workload created, a complementing Service Discovery entry is created. This Service Discovery entry enables DNS resolution for the workload's pods using the following naming convention:
`<workload>.<namespace>.svc.cluster.local`.
However, you also have the option of creating additional Service Discovery records. You can use these additional records so that a given [namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) resolves with one or more external IP addresses, an external hostname, an alias to another DNS record, other workloads, or a set of pods that match a selector that you create.
1. From the **Global** view, open the project that you want to add a DNS record to.
1. Click **Resources** in the main navigation bar. Click the **Service Discovery** tab. (In versions prior to v2.3.0, just click the **Service Discovery** tab.) Then click **Add Record**.
1. Enter a **Name** for the DNS record. This name is used for DNS resolution.
1. Select a **Namespace** from the drop-down list. Alternatively, you can create a new namespace on the fly by clicking **Add to a new namespace**.
1. Select one of the **Resolves To** options to route requests to the DNS record.
1. **One or more external IP addresses**
Enter an IP address in the **Target IP Addresses** field. Add more IP addresses by clicking **Add Target IP**.
1. **An external hostname**
Enter a **Target Hostname**.
1. **Alias of another DNS record's value**
Click **Add Target Record** and select another DNS record from the **Value** drop-down.
1. **One or more workloads**
Click **Add Target Workload** and select another workload from the **Value** drop-down.
1. **The set of pods which match a selector**
Enter key value pairs of [label selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors) to create a record for all pods that match your parameters.
1. Click **Create**.
**Result:** A new DNS record is created.
- You can view the record from the project's **Service Discovery** tab.
- When you visit the new DNS name for the record that you created (`<recordname>.<namespace>.svc.cluster.local`), it resolves to the target that you chose.
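Under the hood, these DNS records are standard Kubernetes Service objects. For example, a record that resolves to an external hostname corresponds, in Kubernetes terms, to an `ExternalName` service, sketched below (the names and hostname are illustrative):
```
apiVersion: v1
kind: Service
metadata:
  name: my-record             # the DNS name used for resolution
  namespace: my-namespace
spec:
  type: ExternalName
  externalName: db.example.com
```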
## Related Links
- [Adding entries to Pod /etc/hosts with HostAliases](https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/)
@@ -1,58 +0,0 @@
---
title: "Storage"
description: "Learn about the two ways with which you can create persistent storage in Kubernetes: persistent volumes and storage classes"
weight: 17
---
When deploying an application that needs to retain data, you'll need to create persistent storage. Persistent storage allows you to store application data external from the pod running your application. This storage practice allows you to maintain application data, even if the application's pod fails.
The documents in this section assume that you understand the Kubernetes concepts of persistent volumes, persistent volume claims, and storage classes. For more information, refer to the section on [how storage works.](./how-storage-works)
### Prerequisites
To set up persistent storage, the `Manage Volumes` [role]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-role-reference) is required.
If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider.
For provisioning new storage with Rancher, the cloud provider must be enabled. For details on enabling cloud providers, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/)
For attaching existing persistent storage to a cluster, the cloud provider does not need to be enabled.
### Setting up Existing Storage
The overall workflow for setting up existing storage is as follows:
1. Set up persistent storage in an infrastructure provider.
2. Add a persistent volume (PV) that refers to the persistent storage.
3. Add a persistent volume claim (PVC) that refers to the PV.
4. Mount the PVC as a volume in your workload.
For details and prerequisites, refer to [this page.](./attaching-existing-storage)
### Dynamically Provisioning New Storage in Rancher
The overall workflow for provisioning new storage is as follows:
1. Add a storage class and configure it to use your storage provider.
2. Add a persistent volume claim (PVC) that refers to the storage class.
3. Mount the PVC as a volume for your workload.
For details and prerequisites, refer to [this page.](./provisioning-new-storage)
### Provisioning Storage Examples
We provide examples of how to provision storage with [NFS,](./examples/nfs) [vSphere,](./examples/vsphere) and [Amazon's EBS.](./examples/ebs)
### GlusterFS Volumes
In clusters that store data on GlusterFS volumes, you may experience an issue where pods fail to mount volumes after restarting the `kubelet`. For details on preventing this from happening, refer to [this page.](./glusterfs-volumes)
### iSCSI Volumes
In [Rancher Launched Kubernetes clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) that store data on iSCSI volumes, you may experience an issue where kubelets fail to automatically connect with iSCSI volumes. For details on resolving this issue, refer to [this page.](./iscsi-volumes)
### hostPath Volumes
Before you create a hostPath volume, you need to set up an [extra_bind]({{<baseurl>}}/rke/latest/en/config-options/services/services-extras/#extra-binds/) in your cluster configuration. This will mount the path as a volume in your kubelets, which can then be used for hostPath volumes in your workloads.
### Related Links
- [Kubernetes Documentation: Storage](https://kubernetes.io/docs/concepts/storage/)
@@ -1,102 +0,0 @@
---
title: Setting up Existing Storage
weight: 3
---
This section describes how to set up existing persistent storage for workloads in Rancher.
> This section assumes that you understand the Kubernetes concepts of persistent volumes and persistent volume claims. For more information, refer to the section on [how storage works.](../how-storage-works)
To set up storage, follow these steps:
1. [Set up persistent storage in an infrastructure provider.](#1-set-up-persistent-storage-in-an-infrastructure-provider)
2. [Add a persistent volume that refers to the persistent storage.](#2-add-a-persistent-volume-that-refers-to-the-persistent-storage)
3. [Add a persistent volume claim that refers to the persistent volume.](#3-add-a-persistent-volume-claim-that-refers-to-the-persistent-volume)
4. [Mount the persistent volume claim as a volume in your workload.](#4-mount-the-persistent-volume-claim-as-a-volume-in-your-workload)
### Prerequisites
- To create a persistent volume as a Kubernetes resource, you must have the `Manage Volumes` [role.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-role-reference)
- If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider.
### 1. Set up persistent storage in an infrastructure provider
Creating a persistent volume in Rancher will not create a storage volume. It only creates a Kubernetes resource that maps to an existing volume. Therefore, before you can create a persistent volume as a Kubernetes resource, you must have storage provisioned.
The steps to set up a persistent storage device will differ based on your infrastructure. We provide examples of how to set up storage using [vSphere,](../examples/vsphere) [NFS,](../examples/nfs) or Amazon's [EBS.](../examples/ebs)
### 2. Add a persistent volume that refers to the persistent storage
These steps describe how to set up a persistent volume at the cluster level in Kubernetes.
1. From the cluster view, select **Storage > Persistent Volumes**.
1. Click **Add Volume**.
1. Enter a **Name** for the persistent volume.
1. Select the **Volume Plugin** for the disk type or service that you're using. When adding storage to a cluster that's hosted by a cloud provider, use the cloud provider's plug-in for cloud storage. For example, if you have an Amazon EC2 cluster and you want to use cloud storage for it, you must use the `Amazon EBS Disk` volume plugin.
1. Enter the **Capacity** of your volume in gigabytes.
1. Complete the **Plugin Configuration** form. Each plugin type requires information specific to the vendor or disk type. For help regarding each plugin's form and the information that's required, refer to the plugin vendor's documentation.
1. Optional: In the **Customize** form, configure the [access modes.](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) This option sets how many nodes can access the volume, along with the node read/write permissions. The [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) includes a table that lists which access modes are supported by the available plugins.
1. Optional: In the **Customize** form, configure the [mount options.](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options) Each volume plugin allows you to specify additional command line options during the mounting process. Consult each plugin's vendor documentation for the mount options available.
1. Click **Save**.
**Result:** Your new persistent volume is created.
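The same persistent volume can also be described as a Kubernetes manifest. The sketch below assumes an Amazon EBS-backed volume; the name, capacity, and volume ID are illustrative:
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-existing-pv        # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  awsElasticBlockStore:       # refers to an existing EBS volume
    volumeID: <EBS volume ID>
    fsType: ext4
```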
### 3. Add a persistent volume claim that refers to the persistent volume
These steps describe how to set up a PVC in the namespace where your stateful workload will be deployed.
1. Go to the project containing a workload that you want to add a persistent volume claim to.
1. Then click the **Volumes** tab and click **Add Volume**. (In versions prior to v2.3.0, click **Workloads** on the main navigation bar, then **Volumes.**)
1. Enter a **Name** for the volume claim.
1. Select the [Namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) of the workload that you want to add the persistent storage to.
1. In the section called **Use an existing persistent volume,** go to the **Persistent Volume** drop-down and choose the persistent volume that you created.
1. **Optional:** From **Customize**, select the [Access Modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) that you want to use.
1. Click **Create.**
**Result:** Your PVC is created. You can now attach it to any workload in the project.
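Expressed as a manifest, a claim that binds to a specific existing persistent volume looks roughly like the sketch below. The names and size are illustrative; setting `volumeName` together with an empty `storageClassName` pins the claim to that particular volume:
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim              # illustrative name
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: my-existing-pv  # bind to the specific existing PV
  storageClassName: ""        # disable dynamic provisioning for this claim
```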
### 4. Mount the persistent volume claim as a volume in your workload
Mount PVCs to stateful workloads so that your applications can store their data.
You can mount PVCs during the deployment of a workload, or following workload creation.
The following steps describe how to assign existing storage to a new workload that is a stateful set:
1. From the **Project** view, go to the **Workloads** tab.
1. Click **Deploy.**
1. Enter a name for the workload.
1. Next to the **Workload Type** field, click **More Options.**
1. Click **Stateful set of 1 pod.** Optionally, configure the number of pods.
1. Choose the namespace where the workload will be deployed.
1. Expand the **Volumes** section and click **Add Volume > Use an existing persistent volume (claim)**.
1. In the **Persistent Volume Claim** field, select the PVC that you created.
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Launch.**
**Result:** When the workload is deployed, it will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC.
The following steps describe how to assign persistent storage to an existing workload:
1. From the **Project** view, go to the **Workloads** tab.
1. Go to the workload that you want to add the persistent storage to. The workload type should be a stateful set. Click **&#8942; > Edit.**
1. Expand the **Volumes** section and click **Add Volume > Use an existing persistent volume (claim)**.
1. In the **Persistent Volume Claim** field, select the PVC that you created.
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Save.**
**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC.
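In manifest form, a workload that mounts the claim declares a volume backed by the PVC and a corresponding volume mount, roughly as in the sketch below (the pod name, image, and mount path are illustrative):
```
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage      # illustrative name
  namespace: my-namespace
spec:
  containers:
    - name: app
      image: nginx            # illustrative image
      volumeMounts:
        - name: data
          mountPath: /persistent
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-claim   # the PVC created earlier
```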
@@ -1,11 +0,0 @@
---
title: Provisioning Storage Examples
weight: 4
---
Rancher supports persistent storage with a variety of volume plugins. However, before you use any of these plugins to bind persistent storage to your workloads, you have to configure the storage itself, whether it's a cloud-based solution from a service provider or an on-prem solution that you manage yourself.
For your convenience, Rancher offers documentation on how to configure some of the popular storage methods:
- [NFS]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/examples/nfs/)
- [vSphere]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/examples/vsphere/)
- [Amazon EBS]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/examples/ebs/)
@@ -1,16 +0,0 @@
---
title: Creating Persistent Storage in Amazon's EBS
weight: 3053
---
This section describes how to set up Amazon's Elastic Block Store in EC2.
1. From the EC2 console, go to the **ELASTIC BLOCK STORE** section in the left panel and click **Volumes.**
1. Click **Create Volume.**
1. Optional: Configure the size of the volume or other options. The volume should be created in the same availability zone as the instance it will be attached to.
1. Click **Create Volume.**
1. Click **Close.**
**Result:** Persistent storage has been created.
For details on how to set up the newly created storage in Rancher, refer to the section on [setting up existing storage.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/)
@@ -1,66 +0,0 @@
---
title: NFS Storage
weight: 3054
---
Before you can use the NFS storage volume plug-in with Rancher deployments, you need to provision an NFS server.
>**Note:**
>
>- If you already have an NFS share, you don't need to provision a new NFS server to use the NFS volume plugin within Rancher. Instead, skip the rest of this procedure and complete [adding storage]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/).
>
>- This procedure demonstrates how to set up an NFS server using Ubuntu, although you should be able to use these instructions for other Linux distros (e.g. Debian, RHEL, Arch Linux, etc.). For official instruction on how to create an NFS server using another Linux distro, consult the distro's documentation.
>**Recommended:** To simplify the process of managing firewall rules, use NFSv4.
1. Using a remote Terminal connection, log into the Ubuntu server that you intend to use for NFS storage.
1. Enter the following command:
```
sudo apt-get install nfs-kernel-server
```
1. Enter the command below, which sets the directory used for storage, along with user access rights. Modify the command if you'd like to keep storage at a different directory.
```
mkdir -p /nfs && chown nobody:nogroup /nfs
```
- The `-p /nfs` parameter creates a directory named `nfs` at root.
- The `chown nobody:nogroup /nfs` parameter allows all access to the storage directory.
1. Create an NFS exports table. This table sets the directory paths on your NFS server that are exposed to the nodes that will use the server for storage.
1. Open `/etc/exports` using your text editor of choice.
1. Add the path of the `/nfs` folder that you created in step 3, along with the IP addresses of your cluster nodes. Add an entry for each IP address in your cluster. Separate each address and its accompanying parameters from the next with a single space.
```
/nfs <IP_ADDRESS1>(rw,sync,no_subtree_check) <IP_ADDRESS2>(rw,sync,no_subtree_check) <IP_ADDRESS3>(rw,sync,no_subtree_check)
```
**Tip:** You can replace the IP addresses with a subnet. For example: `10.212.50.12/24`
1. Update the NFS table by entering the following command:
```
exportfs -ra
```
1. Open the ports used by NFS.
1. To find out what ports NFS is using, enter the following command:
```
rpcinfo -p | grep nfs
```
2. [Open the ports](https://help.ubuntu.com/lts/serverguide/firewall.html.en) that the previous command outputs. For example, the following command opens port 2049:
```
sudo ufw allow 2049
```
**Result:** Your NFS server is configured to be used for storage with your Rancher nodes.
## What's Next?
Within Rancher, add the NFS server as a [storage volume]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#adding-a-persistent-volume) and/or [storage class]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#adding-storage-classes). After adding the server, you can use it for storage for your deployments.
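As an illustration, a persistent volume backed by the NFS share set up above might be described with a manifest like the following sketch (the name, size, and server address are illustrative):
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-volume            # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany           # NFS supports shared access from multiple nodes
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <NFS_SERVER_IP>
    path: /nfs                # the export created earlier
```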
@@ -1,68 +0,0 @@
---
title: vSphere Storage
weight: 3055
---
To provide stateful workloads with vSphere storage, we recommend creating a vSphereVolume [storage class]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#storage-classes). This practice dynamically provisions vSphere storage when workloads request volumes through a [persistent volume claim]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/persistent-volume-claims/).
### Prerequisites
In order to provision vSphere volumes in a cluster created with the [Rancher Kubernetes Engine (RKE)]({{< baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), the [vSphere cloud provider]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/vsphere) must be explicitly enabled in the [cluster options]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/).
### Creating A Storage Class
> **Note:**
>
> The following steps can also be performed using the `kubectl` command line tool. See [Kubernetes documentation on persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) for details.
1. From the Global view, open the cluster where you want to provide vSphere storage.
2. From the main menu, select **Storage > Storage Classes**. Then click **Add Class**.
3. Enter a **Name** for the class.
4. Under **Provisioner**, select **VMWare vSphere Volume**.
{{< img "/img/rancher/vsphere-storage-class.png" "vsphere-storage-class">}}
5. Optionally, specify additional properties for this storage class under **Parameters**. Refer to the [vSphere storage documentation](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/storageclass.html) for details.
6. Click **Save**.
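The equivalent storage class can also be defined as a manifest; a minimal sketch using the in-tree vSphere provisioner follows (the class name and parameters are illustrative):
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-storage       # illustrative name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin            # illustrative parameter; see the vSphere storage docs
```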
### Creating a Workload with a vSphere Volume
1. From the cluster where you configured vSphere storage, begin creating a workload as you would in [Deploying Workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/).
2. For **Workload Type**, select **Stateful set of 1 pod**.
3. Expand the **Volumes** section and click **Add Volume**.
4. Choose **Add a new persistent volume (claim)**. This option will implicitly create the claim once you deploy the workload.
5. Assign a **Name** for the claim, e.g. `test-volume`, and select the vSphere storage class created in the previous step.
6. Enter the required **Capacity** for the volume. Then click **Define**.
{{< img "/img/rancher/workload-add-volume.png" "workload-add-volume">}}
7. Assign a path in the **Mount Point** field. This is the full path where the volume will be mounted in the container file system, e.g. `/persistent`.
8. Click **Launch** to create the workload.
### Verifying Persistence of the Volume
1. From the context menu of the workload you just created, click **Execute Shell**.
2. Note the directory at root where the volume has been mounted to (in this case `/persistent`).
3. Create a file in the volume by executing the command `touch /<volumeMountPoint>/data.txt`.
4. **Close** the shell window.
5. Click on the name of the workload to reveal detail information.
6. Open the context menu next to the Pod in the *Running* state.
7. Delete the Pod by selecting **Delete**.
8. Observe that the pod is deleted. Then a new pod is scheduled to replace it so that the workload maintains its configured scale of a single stateful pod.
9. Once the replacement pod is running, click **Execute Shell**.
10. Inspect the contents of the directory where the volume is mounted by entering `ls -l /<volumeMountPoint>`. Note that the file you created earlier is still present.
![workload-persistent-data]({{<baseurl>}}/img/rancher/workload-persistent-data.png)
## Why Use StatefulSets Instead of Deployments
You should always use [StatefulSets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) for workloads consuming vSphere storage, as this resource type is designed to address a VMDK block storage caveat.
Since vSphere volumes are backed by VMDK block storage, they only support an [access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) of `ReadWriteOnce`. This setting restricts the volume so that it can only be mounted to a single pod at a time, unless all pods consuming that volume are co-located on the same node. This behavior makes a deployment resource unusable for scaling beyond a single replica if it consumes vSphere volumes.
Even using a deployment resource with just a single replica may result in a deadlock situation while updating the deployment. If the updated pod is scheduled to a node different from where the existing pod lives, it will fail to start because the VMDK is still attached to the other node.
## Related Links
- [vSphere Storage for Kubernetes](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/)
- [Kubernetes Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
@@ -1,32 +0,0 @@
---
title: GlusterFS Volumes
weight: 5
---
> This section only applies to [RKE clusters.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/)
In clusters that store data on GlusterFS volumes, you may experience an issue where pods fail to mount volumes after restarting the `kubelet`. The logging of the `kubelet` will show: `transport endpoint is not connected`. To prevent this from happening, you can configure your cluster to mount the `systemd-run` binary in the `kubelet` container. There are two requirements before you can change the cluster configuration:
- The node needs to have the `systemd-run` binary installed (this can be checked by using the command `which systemd-run` on each cluster node)
- The `systemd-run` binary needs to be compatible with Debian OS on which the hyperkube image is based (this can be checked using the following command on each cluster node, replacing the image tag with the Kubernetes version you want to use)
```
docker run -v /usr/bin/systemd-run:/usr/bin/systemd-run --entrypoint /usr/bin/systemd-run rancher/hyperkube:v1.16.2-rancher1 --version
```
>**Note:**
>
>Before updating your Kubernetes YAML to mount the `systemd-run` binary, make sure the `systemd` package is installed on your cluster nodes. If this package isn't installed _before_ the bind mounts are created in your Kubernetes YAML, Docker will automatically create the directories and files on each node and will not allow the package install to succeed.
To mount the binary, add the following extra bind to the `kubelet` service in your cluster configuration YAML:
```
services:
  kubelet:
    extra_binds:
      - "/usr/bin/systemd-run:/usr/bin/systemd-run"
```
After the cluster has finished provisioning, you can check the `kubelet` container logs to see if the functionality is activated by looking for the following log line:
```
Detected OS with systemd
```
@@ -1,76 +0,0 @@
---
title: How Persistent Storage Works
weight: 1
---
A persistent volume (PV) is a piece of storage in the Kubernetes cluster, while a persistent volume claim (PVC) is a request for storage.
There are two ways to use persistent storage in Kubernetes:
- Use an existing persistent volume
- Dynamically provision new persistent volumes
To use an existing PV, your application will need to use a PVC that is bound to a PV, and the PV should include the minimum resources that the PVC requires.
For dynamic storage provisioning, your application will need to use a PVC that is bound to a storage class. The storage class contains the authorization to provision new persistent volumes.
![Setting Up New and Existing Persistent Storage]({{<baseurl>}}/img/rancher/rancher-storage.svg)
For more information, refer to the [official Kubernetes documentation on storage](https://kubernetes.io/docs/concepts/storage/volumes/)
This section covers the following topics:
- [About persistent volume claims](#about-persistent-volume-claims)
- [PVCs are required for both new and existing persistent storage](#pvcs-are-required-for-both-new-and-existing-persistent-storage)
- [Setting up existing storage with a PVC and PV](#setting-up-existing-storage-with-a-pvc-and-pv)
- [Binding PVs to PVCs](#binding-pvs-to-pvcs)
- [Provisioning new storage with a PVC and storage class](#provisioning-new-storage-with-a-pvc-and-storage-class)
# About Persistent Volume Claims
Persistent volume claims (PVCs) are objects that request storage resources from your cluster. They're similar to a voucher that your deployment can redeem for storage access. A PVC is mounted into a workload as a volume so that the workload can claim its specified share of the persistent storage.
To access persistent storage, a pod must have a PVC mounted as a volume. This PVC lets your deployment application store its data in an external location, so that if a pod fails, it can be replaced with a new pod and continue accessing its data stored externally, as though an outage never occurred.
Each Rancher project contains a list of PVCs that you've created, available from **Resources > Workloads > Volumes.** (In versions prior to v2.3.0, the PVCs are in the **Volumes** tab.) You can reuse these PVCs when creating deployments in the future.
### PVCs are Required for Both New and Existing Persistent Storage
A PVC is required for pods to use any persistent storage, regardless of whether the workload is intended to use storage that already exists, or the workload will need to dynamically provision new storage on demand.
If you are setting up existing storage for a workload, the workload mounts a PVC, which refers to a PV, which corresponds to existing storage infrastructure.
If a workload should request new storage, the workload mounts a PVC, which refers to a storage class, which has the capability to create a new PV along with its underlying storage infrastructure.
Rancher lets you create as many PVCs within a project as you'd like.
You can mount PVCs to a deployment as you create it, or later, after the deployment is running.
# Setting up Existing Storage with a PVC and PV
Your pods can store data in [volumes,](https://kubernetes.io/docs/concepts/storage/volumes/) but if the pod fails, that data is lost. To solve this issue, Kubernetes offers persistent volumes (PVs), which are Kubernetes resources that correspond to external storage disks or file systems that your pods can access. If a pod crashes, its replacement pod can access the data in persistent storage without any data loss.
PVs can represent a physical disk or file system that you host on premise, or a vendor-hosted storage resource, such as Amazon EBS or Azure Disk.
Creating a persistent volume in Rancher will not create a storage volume. It only creates a Kubernetes resource that maps to an existing volume. Therefore, before you can create a persistent volume as a Kubernetes resource, you must have storage provisioned.
> **Important:** PVs are created at the cluster level, which means that in a multi-tenant cluster, teams with access to separate namespaces could have access to the same PV.
### Binding PVs to PVCs
When pods are set up to use persistent storage, they mount a persistent volume claim (PVC) in the same way they would mount any other Kubernetes volume. When each PVC is created, the Kubernetes master considers it to be a request for storage and binds it to a PV that matches the minimum resource requirements of the PVC. Not every PVC is guaranteed to be bound to a PV. According to the Kubernetes [documentation,](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
> Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.
In other words, you can create unlimited PVCs, but they will only be bound to PVs if the Kubernetes master can find a PV that has at least the amount of disk space required by the PVC.
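For illustration only, a minimal sketch of a pre-provisioned PV and a PVC that can bind to it might look like the following (the NFS backend, names, and sizes are placeholders, not values from this documentation):
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-pv                    # maps to storage that already exists
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:                                 # example backend; any supported volume type works here
    server: nfs.example.com
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-pvc
  namespace: default
spec:
  storageClassName: ""                 # request a PV with no storage class instead of provisioning a new one
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                    # must fit within the PV's capacity for the claim to bind
```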
To dynamically provision new storage, the PVC mounted in the pod would have to correspond to a storage class instead of a persistent volume.
# Provisioning New Storage with a PVC and Storage Class
Storage Classes allow you to create PVs dynamically without having to create persistent storage in an infrastructure provider first.
For example, if a workload is bound to a PVC and the PVC refers to an Amazon EBS Storage Class, the storage class can dynamically create an EBS volume and a corresponding PV.
The Kubernetes master will then bind the newly created PV to your workload's PVC, allowing your workload to use the persistent storage.
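As a rough sketch (the provisioner, names, and parameters are illustrative), a storage class and a PVC that consumes it might look like this:
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2                        # hypothetical name
provisioner: kubernetes.io/aws-ebs     # example provisioner; dynamically creates EBS volumes
parameters:
  type: gp2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc
  namespace: default
spec:
  storageClassName: ebs-gp2            # refers to the storage class above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```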
@@ -1,30 +0,0 @@
---
title: iSCSI Volumes
weight: 6
---
In [Rancher Launched Kubernetes clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) that store data on iSCSI volumes, you may experience an issue where kubelets fail to automatically connect with iSCSI volumes. This failure is likely due to an incompatibility issue involving the iSCSI initiator tool. You can resolve this issue by installing the iSCSI initiator tool on each of your cluster nodes.
Rancher Launched Kubernetes clusters storing data on iSCSI volumes leverage the [iSCSI initiator tool](http://www.open-iscsi.com/), which is embedded in the kubelet's `rancher/hyperkube` Docker image. From each kubelet (i.e., the _initiator_), the tool discovers and launches sessions with an iSCSI volume (i.e., the _target_). However, in some instances, the versions of the iSCSI initiator tool installed on the initiator and the target may not match, resulting in a connection failure.
If you encounter this issue, you can work around it by installing the initiator tool on each node in your cluster. You can install the iSCSI initiator tool by logging into your cluster nodes and entering one of the following commands:
| Platform | Package Name | Install Command |
| ------------- | ----------------------- | -------------------------------------- |
| Ubuntu/Debian | `open-iscsi` | `sudo apt install open-iscsi` |
| RHEL | `iscsi-initiator-utils` | `yum install iscsi-initiator-utils -y` |
After installing the initiator tool on your nodes, edit your cluster YAML, updating the kubelet configuration to mount the iSCSI binary and configuration, as shown in the sample below.
>**Note:**
>
>Before updating your Kubernetes YAML to mount the iSCSI binary and configuration, make sure either the `open-iscsi` (deb) or `iscsi-initiator-utils` (yum) package is installed on your cluster nodes. If this package isn't installed _before_ the bind mounts are created in your Kubernetes YAML, Docker will automatically create the directories and files on each node and will not allow the package install to succeed.
```
services:
kubelet:
extra_binds:
- "/etc/iscsi:/etc/iscsi"
- "/sbin/iscsiadm:/sbin/iscsiadm"
```
@@ -1,109 +0,0 @@
---
title: Dynamically Provisioning New Storage in Rancher
weight: 2
---
This section describes how to provision new persistent storage for workloads in Rancher.
> This section assumes that you understand the Kubernetes concepts of storage classes and persistent volume claims. For more information, refer to the section on [how storage works.](../how-storage-works)
To provision new storage for your workloads, follow these steps:
1. [Add a storage class and configure it to use your storage provider.](#1-add-a-storage-class-and-configure-it-to-use-your-storage-provider)
2. [Add a persistent volume claim that refers to the storage class.](#2-add-a-persistent-volume-claim-that-refers-to-the-storage-class)
3. [Mount the persistent volume claim as a volume for your workload.](#3-mount-the-persistent-volume-claim-as-a-volume-for-your-workload)
### Prerequisites
- To set up persistent storage, the `Manage Volumes` [role]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-role-reference) is required.
- If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider.
- The cloud provider must be enabled. For details on enabling cloud providers, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/)
- Make sure your storage provisioner is available to be enabled.
The following storage provisioners are enabled by default:
Name | Plugin
--------|----------
Amazon EBS Disk | `aws-ebs`
AzureFile | `azure-file`
AzureDisk | `azure-disk`
Google Persistent Disk | `gce-pd`
Longhorn | `flex-volume-longhorn`
VMware vSphere Volume | `vsphere-volume`
Local | `local`
Network File System | `nfs`
hostPath | `host-path`
To use a storage provisioner that is not on the above list, you will need to use a [feature flag to enable unsupported storage drivers.]({{<baseurl>}}/rancher/v2.x/en/installation/options/feature-flags/enable-not-default-storage-drivers/)
### 1. Add a storage class and configure it to use your storage provider
These steps describe how to set up a storage class at the cluster level.
1. Go to the cluster for which you want to dynamically provision persistent storage volumes.
1. From the cluster view, select `Storage > Storage Classes`. Click `Add Class`.
1. Enter a `Name` for your storage class.
1. From the `Provisioner` drop-down, select the service that you want to use to dynamically provision storage volumes. For example, if you have an Amazon EC2 cluster and you want to use cloud storage for it, use the `Amazon EBS Disk` provisioner.
1. From the `Parameters` section, fill out the information required for the service to dynamically provision storage volumes. Each provisioner requires different information to dynamically provision storage volumes. Consult the service's documentation for help on how to obtain this information.
1. Click `Save`.
**Result:** The storage class is available to be consumed by a PVC.
For full information about the storage class parameters, refer to the official [Kubernetes documentation.](https://kubernetes.io/docs/concepts/storage/storage-classes/#parameters)
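For reference, a minimal sketch of the kind of object this step creates, assuming the `Amazon EBS Disk` provisioner mentioned above (the name and parameters are illustrative):
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-storage                    # the Name entered in the UI
provisioner: kubernetes.io/aws-ebs     # the selected provisioner
parameters:
  type: gp2                            # provisioner-specific parameter
```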
### 2. Add a persistent volume claim that refers to the storage class
These steps describe how to set up a PVC in the namespace where your stateful workload will be deployed.
1. Go to the project containing a workload that you want to add a PVC to.
1. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Then select the **Volumes** tab. Click **Add Volume**.
1. Enter a **Name** for the volume claim.
1. Select the [Namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) of the volume claim.
1. In the **Source** field, click **Use a Storage Class to provision a new persistent volume.**
1. Go to the **Storage Class** drop-down and select the storage class that you created.
1. Enter a volume **Capacity**.
1. Optional: Expand the **Customize** section and select the [Access Modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) that you want to use.
1. Click **Create.**
**Result:** Your PVC is created. You can now attach it to any workload in the project.
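A rough sketch of the PVC this step creates (names and sizes are placeholders):
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc                         # the Name entered in the UI
  namespace: default                   # the selected namespace
spec:
  storageClassName: ebs-storage        # the storage class created in step 1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                    # the Capacity entered in the UI
```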
### 3. Mount the persistent volume claim as a volume for your workload
Mount PVCs to workloads so that your applications can store their data.
You can mount PVCs during the deployment of a workload, or following workload creation.
To attach the PVC to a new workload,
1. Create a workload as you would in [Deploying Workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/).
1. For **Workload Type**, select **Stateful set of 1 pod**.
1. Expand the **Volumes** section and click **Add Volume > Add a New Persistent Volume (Claim).**
1. In the **Persistent Volume Claim** section, select the newly created persistent volume claim that is attached to the storage class.
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Launch.**
**Result:** When the workload is deployed, it will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC.
To attach the PVC to an existing workload,
1. Go to the project that has the workload that will have the PVC attached.
1. Go to the workload that will have persistent storage and click **&#8942; > Edit.**
1. Expand the **Volumes** section and click **Add Volume > Add a New Persistent Volume (Claim).**
1. In the **Persistent Volume Claim** section, select the newly created persistent volume claim that is attached to the storage class.
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Save.**
**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. If not, Rancher will provision new persistent storage.
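Behind the scenes, mounting the PVC corresponds to adding a volume and a volume mount to the workload's pod spec. A minimal sketch, assuming a stateful set with hypothetical names and paths:
```
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
  namespace: default
spec:
  serviceName: my-app
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx                   # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/data     # the Mount Point entered in the UI
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-pvc            # the PVC created in step 2
```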
@@ -1,79 +0,0 @@
---
title: "Kubernetes Workloads and Pods"
description: "Learn about the two constructs with which you can build any complex containerized application in Kubernetes: Kubernetes workloads and pods"
weight: 7
---
You can build any complex containerized application in Kubernetes using two basic constructs: pods and workloads. Once you build an application, you can expose it for access either within the same cluster or on the Internet using a third construct: services.
### Pods
[_Pods_](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/) are one or more containers that share network namespaces and storage volumes. Most pods have only one container. Therefore when we discuss _pods_, the term is often synonymous with _containers_. You scale pods the same way you scale containers—by having multiple instances of the same pod that implement a service. Usually pods get scaled and managed by the workload.
### Workloads
_Workloads_ are objects that set deployment rules for pods. Based on these rules, Kubernetes performs the deployment and updates the workload with the current state of the application.
Workloads let you define the rules for application scheduling, scaling, and upgrade.
#### Workload Types
Kubernetes divides workloads into different types. The most popular types supported by Kubernetes are:
- [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
_Deployments_ are best used for stateless applications (i.e., when you don't have to maintain the workload's state). Pods managed by deployment workloads are treated as independent and disposable. If a pod encounters disruption, Kubernetes removes it and then recreates it. An example application would be an Nginx web server. A minimal manifest sketch appears after this list.
- [StatefulSets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
_StatefulSets_, in contrast to deployments, are best used when your application needs to maintain its identity and store data. An example would be ZooKeeper, an application that requires persistent storage to maintain its state.
- [DaemonSets](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/)
_DaemonSets_ ensure that every node in the cluster runs a copy of the pod. For use cases such as collecting logs or monitoring node performance, this daemon-like workload works best.
- [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/)
_Jobs_ launch one or more pods and ensure that a specified number of them successfully terminate. Jobs are best used to run a finite task to completion as opposed to managing an ongoing desired application state.
- [CronJobs](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/)
_CronJobs_ are similar to jobs. CronJobs, however, run to completion on a cron-based schedule.
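As a concrete illustration of the first type above, a minimal Deployment manifest might look like the following (a sketch only; the names and image are placeholders):
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # scheduling and scaling rules for the pods
  selector:
    matchLabels:
      app: web
  template:                    # the pod template the Deployment manages
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.19      # placeholder stateless application
        ports:
        - containerPort: 80
```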
### Services
In many use cases, a workload has to be either:
- Accessed by other workloads in the cluster.
- Exposed to the outside world.
You can achieve these goals by creating a _Service_. Services are mapped to the underlying workload's pods using a [selector/label approach (view the code samples)](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#service-and-replicationcontroller). Rancher UI simplifies this mapping process by automatically creating a service along with the workload, using the service port and type that you select.
#### Service Types
There are several types of services available in Rancher. The descriptions below are sourced from the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types). A minimal manifest sketch follows the list.
- **ClusterIP**
>Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default `ServiceType`.
- **NodePort**
>Exposes the service on each Node's IP at a static port (the `NodePort`). A `ClusterIP` service, to which the `NodePort` service will route, is automatically created. You'll be able to contact the `NodePort` service, from outside the cluster, by requesting `<NodeIP>:<NodePort>`.
- **LoadBalancer**
>Exposes the service externally using a cloud provider's load balancer. `NodePort` and `ClusterIP` services, to which the external load balancer will route, are automatically created.
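As a sketch of the selector/label mapping described above (the labels and ports are illustrative and match the Deployment sketch earlier on this page), a `NodePort` service might look like this:
```
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort               # exposes the service on every node's IP at a static port
  selector:
    app: web                   # matches the labels on the workload's pods
  ports:
  - port: 80                   # cluster-internal port
    targetPort: 80             # container port receiving the traffic
```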
## Workload Options
This section of the documentation contains instructions for deploying workloads and using workload options.
- [Deploy Workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/)
- [Upgrade Workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads/)
- [Rollback Workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/rollback-workloads/)
## Related Links
### External Links
- [Services](https://kubernetes.io/docs/concepts/services-networking/service/)
@@ -1,35 +0,0 @@
---
title: Adding a Sidecar
weight: 4
---
A _sidecar_ is a container that extends or enhances the main container in a pod. The main container and the sidecar share a pod, and therefore share the same network space and storage. You can add sidecars to existing workloads by using the **Add a Sidecar** option.
1. From the **Global** view, open the project running the workload you want to add a sidecar to.
1. Click **Resources > Workloads.** In versions prior to v2.3.0, select the **Workloads** tab.
1. Find the workload that you want to extend. Select **&#8942; icon (...) > Add a Sidecar**.
1. Enter a **Name** for the sidecar.
1. Select a **Sidecar Type**. This option determines if the sidecar container is deployed before or after the main container is deployed.
- **Standard Container:**
The sidecar container is deployed after the main container.
- **Init Container:**
The sidecar container is deployed before the main container.
1. From the **Docker Image** field, enter the name of the Docker image that you want to deploy in support of the main container. During deployment, Rancher pulls this image from [Docker Hub](https://hub.docker.com/explore/). Enter the name exactly as it appears on Docker Hub.
1. Set the remaining options. You can read about them in [Deploying Workloads](../deploy-workloads).
1. Click **Launch**.
**Result:** The sidecar is deployed according to your parameters. Following its deployment, you can view the sidecar by selecting **&#8942; icon (...) > Edit** for the main deployment.
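In plain Kubernetes terms, adding a sidecar amounts to adding a second container (or an init container) to the workload's pod spec. A minimal sketch with hypothetical names and images, not the exact manifest Rancher generates:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web                      # main container
        image: nginx:1.19
      - name: log-shipper              # "Standard Container" sidecar sharing the pod's network and volumes
        image: busybox
        command: ["sh", "-c", "while true; do sleep 3600; done"]   # placeholder sidecar process
      # for the "Init Container" sidecar type, the container would be listed under initContainers instead
```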
## Related Links
- [The Distributed System ToolKit: Patterns for Composite Containers](https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns/)
@@ -1,57 +0,0 @@
---
title: Deploying Workloads
description: Read this step by step guide for deploying workloads. Deploy a workload to run an application in one or more containers.
weight: 1
---
Deploy a workload to run an application in one or more containers.
1. From the **Global** view, open the project that you want to deploy a workload to.
1. Click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) From the **Workloads** view, click **Deploy**.
1. Enter a **Name** for the workload.
1. Select a [workload type]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/). The workload defaults to a scalable deployment, but you can change the workload type by clicking **More options.**
1. From the **Docker Image** field, enter the name of the Docker image that you want to deploy to the project, optionally prefacing it with the registry host (e.g. `quay.io`, `registry.gitlab.com`, etc.). During deployment, Rancher pulls this image from the specified public or private registry. If no registry host is provided, Rancher will pull the image from [Docker Hub](https://hub.docker.com/explore/). Enter the name exactly as it appears in the registry server, including any required path, and optionally including the desired tag (e.g. `registry.gitlab.com/user/path/image:tag`). If no tag is provided, the `latest` tag will be automatically used.
1. Either select an existing [namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces), or click **Add to a new namespace** and enter a new namespace.
1. Click **Add Port** to enter a port mapping, which enables access to the application inside and outside of the cluster. For more information, see [Services]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/#services).
1. Configure the remaining options:
- **Environment Variables**
Use this section to either specify environment variables for your workload to consume on the fly, or to pull them from another source, such as a secret or [ConfigMap]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/configmaps/).
- **Node Scheduling**
- **Health Check**
- **Volumes**
Use this section to add storage for your workload. You can manually specify the volume that you want to add, use a persistent volume claim to dynamically create a volume for the workload, or read data for a volume to use from a file such as a [ConfigMap]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/configmaps/).
When you are deploying a Stateful Set, you should use a Volume Claim Template when using Persistent Volumes. This will ensure that Persistent Volumes are created dynamically when you scale your Stateful Set. This option is available in the UI as of Rancher v2.2.0.
- **Scaling/Upgrade Policy**
>**Amazon Note for Volumes:**
>
> To mount an Amazon EBS volume:
>
>- In [Amazon AWS](https://aws.amazon.com/), the nodes must be in the same Availability Zone and possess IAM permissions to attach/detach volumes.
>
>- The cluster must be using the [AWS cloud provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws) option. For more information on enabling this option see [Creating an Amazon EC2 Cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/) or [Creating a Custom Cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes).
1. Click **Show Advanced Options** and configure:
- **Command**
- **Networking**
- **Labels & Annotations**
- **Security and Host Config**
1. Click **Launch**.
**Result:** The workload is deployed to the chosen namespace. You can view the workload's status from the project's **Workloads** view.
@@ -1,35 +0,0 @@
---
title: The Horizontal Pod Autoscaler
description: Learn about the horizontal pod autoscaler (HPA). How to manage HPAs and how to test them with a service deployment
weight: 5
---
The [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) is a Kubernetes feature that allows you to configure your cluster to automatically scale the services it's running up or down.
Rancher provides some additional features to help manage HPAs, depending on the version of Rancher.
In Rancher v2.3.0-alpha4 and higher, you can create, manage, and delete HPAs using the Rancher UI. The UI only supports HPAs that use the `autoscaling/v2beta2` API.
## Managing HPAs
The way that you manage HPAs is different based on your version of the Kubernetes API:
- **For Kubernetes API version autoscaling/v2beta1:** This version of the Kubernetes API lets you autoscale your pods based on the CPU and memory utilization of your application.
- **For Kubernetes API version autoscaling/v2beta2:** This version of the Kubernetes API lets you autoscale your pods based on CPU and memory utilization, in addition to custom metrics.
HPAs are also managed differently based on your version of Rancher:
- **For Rancher v2.3.0+**: You can create, manage, and delete HPAs using the Rancher UI. From the Rancher UI you can configure the HPA to scale based on CPU and memory utilization. For more information, refer to [Managing HPAs with the Rancher UI]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). To scale the HPA based on custom metrics, you still need to use `kubectl`. For more information, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus).
- **For Rancher Prior to v2.3.0:** To manage and configure HPAs, you need to use `kubectl`. For instructions on how to create, manage, and scale HPAs, refer to [Managing HPAs with kubectl]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl).
You might have additional HPA installation steps if you are using an older version of Rancher:
- **For Rancher v2.0.7+:** Clusters created in Rancher v2.0.7 and higher automatically have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use HPA.
- **For Rancher Prior to v2.0.7:** Clusters created in Rancher prior to v2.0.7 don't automatically have the requirements needed to use HPA. For instructions on installing HPA for these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7).
## Testing HPAs with a Service Deployment
In Rancher v2.3.x+, you can see your HPA's current number of replicas by going to your project and clicking **Resources > HPA.** For more information, refer to [Get HPA Metrics and Status]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/).
You can also use `kubectl` to get the status of HPAs that you test with your load testing tool. For more information, refer to [Testing HPAs with kubectl]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/).
@@ -1,40 +0,0 @@
---
title: Background Information on HPAs
weight: 1
---
The [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) is a Kubernetes feature that allows you to configure your cluster to automatically scale the services it's running up or down. This section provides explanation on how HPA works with Kubernetes.
## Why Use Horizontal Pod Autoscaler?
Using HPA, you can automatically scale the number of pods within a replication controller, deployment, or replica set up or down. HPA automatically scales the number of pods that are running for maximum efficiency. Factors that affect the number of pods include:
- A minimum and maximum number of pods allowed to run, as defined by the user.
- Observed CPU/memory use, as reported in resource metrics.
- Custom metrics provided by third-party metrics applications like Prometheus, Datadog, etc.
HPA improves your services by:
- Releasing hardware resources that would otherwise be wasted by an excessive number of pods.
- Increasing or decreasing performance as needed to meet service level agreements.
## How HPA Works
![HPA Schema]({{<baseurl>}}/img/rancher/horizontal-pod-autoscaler.jpg)
HPA is implemented as a control loop, with a period controlled by the `kube-controller-manager` flags below:
Flag | Default | Description |
---------|----------|----------|
`--horizontal-pod-autoscaler-sync-period` | `30s` | How often HPA audits resource/custom metrics in a deployment.
`--horizontal-pod-autoscaler-downscale-delay` | `5m0s` | Following completion of a downscale operation, how long HPA must wait before launching another downscale operation.
`--horizontal-pod-autoscaler-upscale-delay` | `3m0s` | Following completion of an upscale operation, how long HPA must wait before launching another upscale operation.
For full documentation on HPA, refer to the [Kubernetes Documentation](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
## Horizontal Pod Autoscaler API Objects
HPA is an API resource in the Kubernetes `autoscaling` API group. The current stable version is `autoscaling/v1`, which only includes support for CPU autoscaling. To get additional support for scaling based on memory and custom metrics, use the beta version instead: `autoscaling/v2beta1`.
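For example, a minimal sketch of an `autoscaling/v1` HPA, which can only scale on CPU utilization (the target deployment name is illustrative):
```
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hello-world
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-world
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50   # CPU is the only metric supported in autoscaling/v1
```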
For more information about the HPA API object, see the [HPA GitHub Readme](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
@@ -1,200 +0,0 @@
---
title: Managing HPAs with kubectl
weight: 3
---
This section describes HPA management with `kubectl`. This document has instructions for how to:
- Create an HPA
- Get information on HPAs
- Delete an HPA
- Configure your HPAs to scale with CPU or memory utilization
- Configure your HPAs to scale using custom metrics, if you use a third-party tool such as Prometheus for metrics
### Note For Rancher v2.3.x
In Rancher v2.3.x, you can create, view, and delete HPAs from the Rancher UI. You can also configure them to scale based on CPU or memory usage from the Rancher UI. For more information, refer to [Managing HPAs with the Rancher UI]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). For scaling HPAs based on metrics other than CPU or memory, you still need `kubectl`.
### Note For Rancher Prior to v2.0.7
Clusters created with older versions of Rancher don't automatically have all the requirements to create an HPA. To install an HPA on these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7).
##### Basic kubectl Command for Managing HPAs
If you have an HPA manifest file, you can create, manage, and delete HPAs using `kubectl`:
- Creating HPA
- With manifest: `kubectl create -f <HPA_MANIFEST>`
- Without manifest (CPU support only): `kubectl autoscale deployment hello-world --min=2 --max=5 --cpu-percent=50`
- Getting HPA info
- Basic: `kubectl get hpa hello-world`
- Detailed description: `kubectl describe hpa hello-world`
- Deleting HPA
- `kubectl delete hpa hello-world`
##### HPA Manifest Definition Example
The HPA manifest is the config file used for managing an HPA with `kubectl`.
The following snippet demonstrates use of different directives in an HPA manifest. See the list below the sample to understand the purpose of each directive.
```yml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: hello-world
spec:
scaleTargetRef:
apiVersion: extensions/v1beta1
kind: Deployment
name: hello-world
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 50
- type: Resource
resource:
name: memory
targetAverageValue: 100Mi
```
Directive | Description
---------|----------|
`apiVersion: autoscaling/v2beta1` | The version of the Kubernetes `autoscaling` API group in use. This example manifest uses the beta version, so scaling by CPU and memory is enabled. |
`name: hello-world` | Indicates that HPA is performing autoscaling for the `hello-world` deployment. |
`minReplicas: 1` | Indicates that the minimum number of replicas running can't go below 1. |
`maxReplicas: 10` | Indicates the maximum number of replicas in the deployment can't go above 10.
`targetAverageUtilization: 50` | Indicates the deployment will scale pods up when the average running pod uses more than 50% of its requested CPU.
`targetAverageValue: 100Mi` | Indicates the deployment will scale pods up when the average running pod uses more than 100Mi of memory.
<br/>
##### Configuring HPA to Scale Using Resource Metrics (CPU and Memory)
Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use Horizontal Pod Autoscaler. Run the following commands to check if metrics are available in your installation:
```
$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
node-controlplane 196m 9% 1623Mi 42%
node-etcd 80m 4% 1090Mi 28%
node-worker 64m 3% 1146Mi 29%
$ kubectl -n kube-system top pods
NAME CPU(cores) MEMORY(bytes)
canal-pgldr 18m 46Mi
canal-vhkgr 20m 45Mi
canal-x5q5v 17m 37Mi
canal-xknnz 20m 37Mi
kube-dns-7588d5b5f5-298j2 0m 22Mi
kube-dns-autoscaler-5db9bbb766-t24hw 0m 5Mi
metrics-server-97bc649d5-jxrlt 0m 12Mi
$ kubectl -n kube-system logs -l k8s-app=metrics-server
I1002 12:55:32.172841 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:https://kubernetes.default.svc?kubeletHttps=true&kubeletPort=10250&useServiceAccount=true&insecure=true
I1002 12:55:32.172994 1 heapster.go:72] Metrics Server version v0.2.1
I1002 12:55:32.173378 1 configs.go:61] Using Kubernetes client with master "https://kubernetes.default.svc" and version
I1002 12:55:32.173401 1 configs.go:62] Using kubelet port 10250
I1002 12:55:32.173946 1 heapster.go:128] Starting with Metric Sink
I1002 12:55:32.592703 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
I1002 12:55:32.925630 1 heapster.go:101] Starting Heapster API server...
[restful] 2018/10/02 12:55:32 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
[restful] 2018/10/02 12:55:32 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
I1002 12:55:32.928597 1 serve.go:85] Serving securely on 0.0.0.0:443
```
If you have created your cluster in Rancher v2.0.6 or before, please refer to [Manual installation](#manual-installation).
##### Configuring HPA to Scale Using Custom Metrics with Prometheus
You can configure HPA to autoscale based on custom metrics provided by third-party software. The most common use case for autoscaling using third-party software is based on application-level metrics (i.e., HTTP requests per second). HPA uses the `custom.metrics.k8s.io` API to consume these metrics. This API is enabled by deploying a custom metrics adapter for the metrics collection solution.
For this example, we are going to use [Prometheus](https://prometheus.io/). We are beginning with the following assumptions:
- Prometheus is deployed in the cluster.
- Prometheus is configured correctly and collecting proper metrics from pods, nodes, namespaces, etc.
- Prometheus is exposed at the following URL and port: `http://prometheus.mycompany.io:80`
Prometheus is available for deployment in the Rancher v2.0 catalog. Deploy it from the Rancher catalog if it isn't already running in your cluster.
For HPA to use custom metrics from Prometheus, package [k8s-prometheus-adapter](https://github.com/DirectXMan12/k8s-prometheus-adapter) is required in the `kube-system` namespace of your cluster. To install `k8s-prometheus-adapter`, we are using the Helm chart available at [banzai-charts](https://github.com/banzaicloud/banzai-charts).
1. Initialize Helm in your cluster.
```
# kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
```
1. Clone the `banzai-charts` repo from GitHub:
```
# git clone https://github.com/banzaicloud/banzai-charts
```
1. Install the `prometheus-adapter` chart, specifying the Prometheus URL and port number.
```
# helm install --name prometheus-adapter banzai-charts/prometheus-adapter --set prometheus.url="http://prometheus.mycompany.io",prometheus.port="80" --namespace kube-system
```
1. Check that `prometheus-adapter` is running properly. Check the service pod and logs in the `kube-system` namespace.
1. Check that the service pod is `Running`. Enter the following command.
```
# kubectl get pods -n kube-system
```
From the resulting output, look for a status of `Running`.
```
NAME READY STATUS RESTARTS AGE
...
prometheus-adapter-prometheus-adapter-568674d97f-hbzfx 1/1 Running 0 7h
...
```
1. Check the service logs to make sure the service is running correctly by entering the command that follows.
```
# kubectl logs prometheus-adapter-prometheus-adapter-568674d97f-hbzfx -n kube-system
```
Then review the log output to confirm the service is running.
{{% accordion id="prometheus-logs" label="Prometheus Adaptor Logs" %}}
...
I0724 10:18:45.696679 1 round_trippers.go:436] GET https://10.43.0.1:443/api/v1/namespaces/default/pods?labelSelector=app%3Dhello-world 200 OK in 2 milliseconds
I0724 10:18:45.696695 1 round_trippers.go:442] Response Headers:
I0724 10:18:45.696699 1 round_trippers.go:445] Date: Tue, 24 Jul 2018 10:18:45 GMT
I0724 10:18:45.696703 1 round_trippers.go:445] Content-Type: application/json
I0724 10:18:45.696706 1 round_trippers.go:445] Content-Length: 2581
I0724 10:18:45.696766 1 request.go:836] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/default/pods","resourceVersion":"6237"},"items":[{"metadata":{"name":"hello-world-54764dfbf8-q6l82","generateName":"hello-world-54764dfbf8-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/hello-world-54764dfbf8-q6l82","uid":"484cb929-8f29-11e8-99d2-067cac34e79c","resourceVersion":"4066","creationTimestamp":"2018-07-24T10:06:50Z","labels":{"app":"hello-world","pod-template-hash":"1032089694"},"annotations":{"cni.projectcalico.org/podIP":"10.42.0.7/32"},"ownerReferences":[{"apiVersion":"extensions/v1beta1","kind":"ReplicaSet","name":"hello-world-54764dfbf8","uid":"4849b9b1-8f29-11e8-99d2-067cac34e79c","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-ncvts","secret":{"secretName":"default-token-ncvts","defaultMode":420}}],"containers":[{"name":"hello-world","image":"rancher/hello-world","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{"requests":{"cpu":"500m","memory":"64Mi"}},"volumeMounts":[{"name":"default-token-ncvts","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"34.220.18.140","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:54Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"}],"hostIP":"34.220.18.140","podIP":"10.42.0.7","startTime":"2018-07-24T10:06:50Z","containerStatuses":[{"name":"hello-world","state":{"running":{"startedAt":"2018-07-24T10:06:54Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"rancher/hello-world:latest","imageID":"docker-pullable://rancher/hello-world@sha256:4b1559cb4b57ca36fa2b313a3c7dde774801aa3a2047930d94e11a45168bc053","containerID":"docker://cce4df5fc0408f03d4adf82c90de222f64c302bf7a04be1c82d584ec31530773"}],"qosClass":"Burstable"}}]}
I0724 10:18:45.699525 1 api.go:74] GET http://prometheus-server.prometheus.34.220.18.140.xip.io/api/v1/query?query=sum%28rate%28container_fs_read_seconds_total%7Bpod_name%3D%22hello-world-54764dfbf8-q6l82%22%2Ccontainer_name%21%3D%22POD%22%2Cnamespace%3D%22default%22%7D%5B5m%5D%29%29+by+%28pod_name%29&time=1532427525.697 200 OK
I0724 10:18:45.699620 1 api.go:93] Response Body: {"status":"success","data":{"resultType":"vector","result":[{"metric":{"pod_name":"hello-world-54764dfbf8-q6l82"},"value":[1532427525.697,"0"]}]}}
I0724 10:18:45.699939 1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/fs_read?labelSelector=app%3Dhello-world: (12.431262ms) 200 [[kube-controller-manager/v1.10.1 (linux/amd64) kubernetes/d4ab475/system:serviceaccount:kube-system:horizontal-pod-autoscaler] 10.42.0.0:24268]
I0724 10:18:51.727845 1 request.go:836] Request Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":false}}
...
{{% /accordion %}}
1. Check that the metrics API is accessible from kubectl.
- If you are accessing the cluster directly, enter your Server URL in the kubectl config in the following format: `https://<Kubernetes_URL>:6443`.
```
# kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
```
If the API is accessible, you should receive output that's similar to what follows.
{{% accordion id="custom-metrics-api-response" label="API Response" %}}
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[{"name":"pods/fs_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_rss","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_period","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_read","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_user","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/last_seen","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/tasks_state","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_quota","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/start_time_seconds","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_write","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_cache","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_working_set_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_udp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes_free","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time_weighted","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failures","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_swap","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_shares","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_swap_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_current","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failcnt","singularName":"","namespaced":true,"kind":"MetricValueList","verbs"
:["get"]},{"name":"pods/fs_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_tcp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_max_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_reservation_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_load_average_10s","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_system","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]}]}
{{% /accordion %}}
- If you are accessing the cluster through Rancher, enter your Server URL in the kubectl config in the following format: `https://<RANCHER_URL>/k8s/clusters/<CLUSTER_ID>`. Add the suffix `/k8s/clusters/<CLUSTER_ID>` to the API path.
```
# kubectl get --raw /k8s/clusters/<CLUSTER_ID>/apis/custom.metrics.k8s.io/v1beta1
```
If the API is accessible, you should receive output that's similar to what follows.
{{% accordion id="custom-metrics-api-response-rancher" label="API Response" %}}
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[{"name":"pods/fs_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_rss","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_period","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_read","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_user","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/last_seen","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/tasks_state","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_quota","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/start_time_seconds","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_write","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_cache","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_working_set_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_udp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes_free","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time_weighted","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failures","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_swap","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_shares","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_swap_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_current","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failcnt","singularName":"","namespaced":true,"kind":"MetricValueList","verbs"
:["get"]},{"name":"pods/fs_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_tcp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_max_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_reservation_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_load_average_10s","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_system","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]}]}
{{% /accordion %}}
@@ -1,53 +0,0 @@
---
title: Managing HPAs with the Rancher UI
weight: 2
---
The Rancher UI supports creating, managing, and deleting HPAs. You can configure CPU or memory usage as the metric that the HPA uses to scale.
If you want to create HPAs that scale based on metrics other than CPU and memory, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus).
## Creating an HPA
1. From the **Global** view, open the project that you want to deploy a HPA to.
1. Click **Resources > HPA.**
1. Click **Add HPA.**
1. Enter a **Name** for the HPA.
1. Select a **Namespace** for the HPA.
1. Select a **Deployment** as the scale target for the HPA.
1. Specify the **Minimum Scale** and **Maximum Scale** for the HPA.
1. Configure the metrics for the HPA. You can choose memory or CPU usage as the metric that will cause the HPA to scale the service up or down. In the **Quantity** field, enter the percentage of the workload's memory or CPU usage that will cause the HPA to scale the service. To configure other HPA metrics, including metrics available from Prometheus, you need to [manage HPAs using kubectl]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus).
1. Click **Create** to create the HPA.
> **Result:** The HPA is deployed to the chosen namespace. You can view the HPA's status from the project's Resources > HPA view.
## Get HPA Metrics and Status
1. From the **Global** view, open the project with the HPAs you want to look at.
1. Click **Resources > HPA.** The **HPA** tab shows the number of current replicas.
1. For more detailed metrics and status of a specific HPA, click the name of the HPA. This leads to the HPA detail page.
## Deleting an HPA
1. From the **Global** view, open the project that you want to delete an HPA from.
1. Click **Resources > HPA.**
1. Find the HPA which you would like to delete.
1. Click **&#8942; > Delete**.
1. Click **Delete** to confirm.
> **Result:** The HPA is deleted from the current cluster.
@@ -1,491 +0,0 @@
---
title: Testing HPAs with kubectl
weight: 4
---
This document describes how to check the status of your HPAs after scaling them up or down with your load testing tool. For information on how to check the status from the Rancher UI (Rancher v2.3.x and later), refer to [Managing HPAs with the Rancher UI]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/).
For HPA to work correctly, service deployments should have resource request definitions for their containers. Follow this hello-world example to test if HPA is working correctly.
1. Configure `kubectl` to connect to your Kubernetes cluster.
2. Copy the `hello-world` deployment manifest below.
{{% accordion id="hello-world" label="Hello World Manifest" %}}
```
apiVersion: apps/v1beta2
kind: Deployment
metadata:
labels:
app: hello-world
name: hello-world
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: hello-world
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: hello-world
spec:
containers:
- image: rancher/hello-world
imagePullPolicy: Always
name: hello-world
resources:
requests:
cpu: 500m
memory: 64Mi
ports:
- containerPort: 80
protocol: TCP
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: hello-world
namespace: default
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hello-world
```
{{% /accordion %}}
1. Deploy it to your cluster.
```
# kubectl create -f <HELLO_WORLD_MANIFEST>
```
1. Copy one of the HPAs below based on the metric type you're using:
{{% accordion id="service-deployment-resource-metrics" label="Hello World HPA: Resource Metrics" %}}
```
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: hello-world
namespace: default
spec:
scaleTargetRef:
apiVersion: extensions/v1beta1
kind: Deployment
name: hello-world
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 50
- type: Resource
resource:
name: memory
targetAverageValue: 1000Mi
```
{{% /accordion %}}
{{% accordion id="service-deployment-custom-metrics" label="Hello World HPA: Custom Metrics" %}}
```
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: hello-world
namespace: default
spec:
scaleTargetRef:
apiVersion: extensions/v1beta1
kind: Deployment
name: hello-world
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 50
- type: Resource
resource:
name: memory
targetAverageValue: 100Mi
- type: Pods
pods:
metricName: cpu_system
targetAverageValue: 20m
```
{{% /accordion %}}
1. View the HPA info and description. Confirm that metric data is shown.
{{% accordion id="hpa-info-resource-metrics" label="Resource Metrics" %}}
1. Enter the following commands.
```
# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
hello-world Deployment/hello-world 1253376 / 100Mi, 0% / 50% 1 10 1 6m
# kubectl describe hpa
Name: hello-world
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Mon, 23 Jul 2018 20:21:16 +0200
Reference: Deployment/hello-world
Metrics: ( current / target )
resource memory on pods: 1253376 / 100Mi
resource cpu on pods (as a percentage of request): 0% (0) / 50%
Min replicas: 1
Max replicas: 10
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events: <none>
```
{{% /accordion %}}
{{% accordion id="hpa-info-custom-metrics" label="Custom Metrics" %}}
1. Enter the following command.
```
# kubectl describe hpa
```
You should receive the output that follows.
```
Name: hello-world
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Tue, 24 Jul 2018 18:36:28 +0200
Reference: Deployment/hello-world
Metrics: ( current / target )
resource memory on pods: 3514368 / 100Mi
"cpu_system" on pods: 0 / 20m
resource cpu on pods (as a percentage of request): 0% (0) / 50%
Min replicas: 1
Max replicas: 10
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events: <none>
```
{{% /accordion %}}
1. Generate a load for the service to test that your pods autoscale as intended. You can use any load-testing tool (Hey, Gatling, etc.), but we're using [Hey](https://github.com/rakyll/hey).
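As a rough sketch with Hey, assuming the `hello-world` service is reachable at `<SERVICE_IP>`, you could generate sustained load like this (the concurrency and duration values are arbitrary examples):
```
# Send requests from 50 concurrent workers for 3 minutes (example values)
hey -z 3m -c 50 http://<SERVICE_IP>:80
```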
1. Test that pod autoscaling works as intended.<br/><br/>
**To Test Autoscaling Using Resource Metrics:**
{{% accordion id="observe-upscale-2-pods-cpu" label="Upscale to 2 Pods: CPU Usage Up to Target" %}}
Use your load testing tool to scale up to two pods based on CPU Usage.
1. View your HPA.
```
# kubectl describe hpa
```
You should receive output similar to what follows.
```
Name: hello-world
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Mon, 23 Jul 2018 22:22:04 +0200
Reference: Deployment/hello-world
Metrics: ( current / target )
resource memory on pods: 10928128 / 100Mi
resource cpu on pods (as a percentage of request): 56% (280m) / 50%
Min replicas: 1
Max replicas: 10
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 2
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 13s horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target
```
1. Enter the following command to confirm you've scaled to two pods.
```
# kubectl get pods
```
You should receive output similar to what follows:
```
NAME READY STATUS RESTARTS AGE
hello-world-54764dfbf8-k8ph2 1/1 Running 0 1m
hello-world-54764dfbf8-q6l4v 1/1 Running 0 3h
```
{{% /accordion %}}
{{% accordion id="observe-upscale-3-pods-cpu-cooldown" label="Upscale to 3 pods: CPU Usage Up to Target" %}}
Use your load testing tool to upscale to 3 pods based on CPU usage with `horizontal-pod-autoscaler-upscale-delay` set to 3 minutes.
1. Enter the following command.
```
# kubectl describe hpa
```
You should receive output similar to what follows
```
Name: hello-world
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Mon, 23 Jul 2018 22:22:04 +0200
Reference: Deployment/hello-world
Metrics: ( current / target )
resource memory on pods: 9424896 / 100Mi
resource cpu on pods (as a percentage of request): 66% (333m) / 50%
Min replicas: 1
Max replicas: 10
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 3
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 4m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 16s horizontal-pod-autoscaler New size: 3; reason: cpu resource utilization (percentage of request) above target
```
2. Enter the following command to confirm three pods are running.
```
# kubectl get pods
```
You should receive output similar to what follows.
```
NAME READY STATUS RESTARTS AGE
hello-world-54764dfbf8-f46kh 0/1 Running 0 1m
hello-world-54764dfbf8-k8ph2 1/1 Running 0 5m
hello-world-54764dfbf8-q6l4v 1/1 Running 0 3h
```
{{% /accordion %}}
{{% accordion id="observe-downscale-1-pod" label="Downscale to 1 Pod: All Metrics Below Target" %}}
Use your load testing tool to scale down to 1 pod when all metrics have been below target for `horizontal-pod-autoscaler-downscale-delay` (5 minutes by default).
1. Enter the following command.
```
# kubectl describe hpa
```
You should receive output similar to what follows.
```
Name: hello-world
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Mon, 23 Jul 2018 22:22:04 +0200
Reference: Deployment/hello-world
Metrics: ( current / target )
resource memory on pods: 10070016 / 100Mi
resource cpu on pods (as a percentage of request): 0% (0) / 50%
Min replicas: 1
Max replicas: 10
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 1
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 10m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 6m horizontal-pod-autoscaler New size: 3; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 1s horizontal-pod-autoscaler New size: 1; reason: All metrics below target
```
{{% /accordion %}}
<br/>
**To Test Autoscaling Using Custom Metrics:**
{{% accordion id="custom-observe-upscale-2-pods-cpu" label="Upscale to 2 Pods: CPU Usage Up to Target" %}}
Use your load testing tool to upscale to two pods based on CPU usage.
1. Enter the following command.
```
# kubectl describe hpa
```
You should receive output similar to what follows.
```
Name: hello-world
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200
Reference: Deployment/hello-world
Metrics: ( current / target )
resource memory on pods: 8159232 / 100Mi
"cpu_system" on pods: 7m / 20m
resource cpu on pods (as a percentage of request): 64% (321m) / 50%
Min replicas: 1
Max replicas: 10
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 2
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 16s horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target
```
1. Enter the following command to confirm two pods are running.
```
# kubectl get pods
```
You should receive output similar to what follows.
```
NAME READY STATUS RESTARTS AGE
hello-world-54764dfbf8-5pfdr 1/1 Running 0 3s
hello-world-54764dfbf8-q6l82 1/1 Running 0 6h
```
{{% /accordion %}}
{{% accordion id="observe-upscale-3-pods-cpu-cooldown-2" label="Upscale to 3 Pods: CPU Usage Up to Target" %}}
Use your load testing tool to scale up to three pods when `cpu_system` usage exceeds the target.
1. Enter the following command.
```
# kubectl describe hpa
```
You should receive output similar to what follows:
```
Name: hello-world
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200
Reference: Deployment/hello-world
Metrics: ( current / target )
resource memory on pods: 8374272 / 100Mi
"cpu_system" on pods: 27m / 20m
resource cpu on pods (as a percentage of request): 71% (357m) / 50%
Min replicas: 1
Max replicas: 10
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 3
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 3m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 3s horizontal-pod-autoscaler New size: 3; reason: pods metric cpu_system above target
```
1. Enter the following command to confirm three pods are running.
```
# kubectl get pods
```
You should receive output similar to what follows:
```
NAME READY STATUS RESTARTS AGE
hello-world-54764dfbf8-5pfdr 1/1 Running 0 3m
hello-world-54764dfbf8-m2hrl 1/1 Running 0 1s
hello-world-54764dfbf8-q6l82 1/1 Running 0 6h
```
{{% /accordion %}}
{{% accordion id="observe-upscale-4-pods" label="Upscale to 4 Pods: CPU Usage Up to Target" %}}
Use your load testing tool to upscale to four pods based on CPU usage. `horizontal-pod-autoscaler-upscale-delay` is set to three minutes by default.
1. Enter the following command.
```
# kubectl describe hpa
```
You should receive output similar to what follows.
```
Name: hello-world
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200
Reference: Deployment/hello-world
Metrics: ( current / target )
resource memory on pods: 8374272 / 100Mi
"cpu_system" on pods: 27m / 20m
resource cpu on pods (as a percentage of request): 71% (357m) / 50%
Min replicas: 1
Max replicas: 10
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 3
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 5m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 3m horizontal-pod-autoscaler New size: 3; reason: pods metric cpu_system above target
Normal SuccessfulRescale 4s horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target
```
1. Enter the following command to confirm four pods are running.
```
# kubectl get pods
```
You should receive output similar to what follows.
```
NAME READY STATUS RESTARTS AGE
hello-world-54764dfbf8-2p9xb 1/1 Running 0 5m
hello-world-54764dfbf8-5pfdr 1/1 Running 0 2m
hello-world-54764dfbf8-m2hrl 1/1 Running 0 1s
hello-world-54764dfbf8-q6l82 1/1 Running 0 6h
```
{{% /accordion %}}
{{% accordion id="custom-metrics-observe-downscale-1-pod" label="Downscale to 1 Pod: All Metrics Below Target" %}}
Use your load testing tool to scale down to one pod when all metrics have been below target for `horizontal-pod-autoscaler-downscale-delay`.
1. Enter the following command.
```
# kubectl describe hpa
```
You should receive similar output to what follows.
```
Name: hello-world
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200
Reference: Deployment/hello-world
Metrics: ( current / target )
resource memory on pods: 8101888 / 100Mi
"cpu_system" on pods: 8m / 20m
resource cpu on pods (as a percentage of request): 0% (0) / 50%
Min replicas: 1
Max replicas: 10
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 1
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulRescale 10m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 8m horizontal-pod-autoscaler New size: 3; reason: pods metric cpu_system above target
Normal SuccessfulRescale 5m horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 13s horizontal-pod-autoscaler New size: 1; reason: All metrics below target
```
1. Enter the following command to confirm a single pod is running.
```
# kubectl get pods
```
You should receive output similar to what follows.
```
NAME READY STATUS RESTARTS AGE
hello-world-54764dfbf8-q6l82 1/1 Running 0 6h
```
{{% /accordion %}}
@@ -1,14 +0,0 @@
---
title: Rolling Back Workloads
weight: 3
---
Sometimes you need to roll back to a previous version of an application, either for debugging purposes or because an upgrade did not go as planned.
1. From the **Global** view, open the project running the workload you want to rollback.
1. Find the workload that you want to roll back and select **&#8942; > Rollback**.
1. Choose the revision that you want to roll back to. Click **Rollback**.
**Result:** Your workload reverts to the previous version that you chose. Wait a few minutes for the action to complete.
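The same rollback can also be performed with `kubectl`. This is a sketch using a hypothetical deployment named `hello-world` in the `default` namespace; the revision number is only an example:
```
# List the revisions recorded for the deployment
kubectl rollout history deployment/hello-world --namespace default
# Roll back to a specific revision (example revision number)
kubectl rollout undo deployment/hello-world --namespace default --to-revision=2
```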
@@ -1,21 +0,0 @@
---
title: Upgrading Workloads
weight: 2
---
When a new version of an application image is released on Docker Hub, you can upgrade any workloads running a previous version of the application to the new one.
1. From the **Global** view, open the project running the workload you want to upgrade.
1. Find the workload that you want to upgrade and select **&#8942; > Edit**.
1. Update the **Docker Image** to the updated version of the application image on Docker Hub.
1. Update any other options that you want to change.
1. Review and edit the workload's **Scaling/Upgrade** policy.
These options control how the upgrade rolls out to containers that are currently running. For example, for scalable deployments, you can choose whether you want to stop old pods before deploying new ones, or vice versa, as well as the upgrade batch size.
1. Click **Upgrade**.
**Result:** The workload begins upgrading its containers, per your specifications. Note that scaling up the deployment or updating the upgrade/scaling policy won't cause the pods to be recreated.
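An image upgrade can also be triggered with `kubectl`. A minimal sketch, assuming a deployment and container both named `hello-world` and an example image tag:
```
# Point the container at the new image tag (example deployment, container, and tag)
kubectl set image deployment/hello-world hello-world=rancher/hello-world:v0.1.2 --namespace default
# Watch the rollout until it completes
kubectl rollout status deployment/hello-world --namespace default
```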
@@ -1,120 +0,0 @@
---
title: Contributing to Rancher
weight: 22
---
This section explains the repositories used for Rancher, how to build the repositories, and what information to include when you file an issue.
For more detailed information on how to contribute to the development of Rancher projects, refer to the [Rancher Developer Wiki](https://github.com/rancher/rancher/wiki). The wiki has resources on many topics, including the following:
- How to set up the Rancher development environment and run tests
- The typical flow of an issue through the development lifecycle
- Coding guidelines and development best practices
- Debugging and troubleshooting
- Developing the Rancher API
On the Rancher Users Slack, the channel for developers is **#developer**.
# Repositories
All of the repositories are located within our main GitHub organization. There are many repositories used for Rancher, but we'll describe some of the main ones below.
Repository | URL | Description
-----------|-----|-------------
Rancher | https://github.com/rancher/rancher | This repository is the main source code for Rancher 2.x.
Types | https://github.com/rancher/types | This repository contains all the API types for Rancher 2.x.
API Framework | https://github.com/rancher/norman | This repository is an API framework for building Rancher style APIs backed by Kubernetes Custom Resources.
User Interface | https://github.com/rancher/ui | This repository is the source of the UI.
(Rancher) Docker Machine | https://github.com/rancher/machine | This repository is the source of the Docker Machine binary used when using Node Drivers. This is a fork of the `docker/machine` repository.
machine-package | https://github.com/rancher/machine-package | This repository is used to build the Rancher Docker Machine binary.
kontainer-engine | https://github.com/rancher/kontainer-engine | This repository is the source of kontainer-engine, the tool to provision hosted Kubernetes clusters.
RKE repository | https://github.com/rancher/rke | This repository is the source of Rancher Kubernetes Engine, the tool to provision Kubernetes clusters on any machine.
CLI | https://github.com/rancher/cli | This repository is the source code for the Rancher CLI used in Rancher 2.x.
(Rancher) Helm repository | https://github.com/rancher/helm | This repository is the source of the packaged Helm binary. This is a fork of the `helm/helm` repository.
Telemetry repository | https://github.com/rancher/telemetry | This repository is the source for the Telemetry binary.
loglevel repository | https://github.com/rancher/loglevel | This repository is the source of the loglevel binary, used to dynamically change log levels.
To see all libraries/projects used in Rancher, see the [`go.mod` file](https://github.com/rancher/rancher/blob/master/go.mod) in the `rancher/rancher` repository.
![Rancher diagram]({{<baseurl>}}/img/rancher/ranchercomponentsdiagram.svg)<br/>
<sup>Rancher components used for provisioning/managing Kubernetes clusters.</sup>
# Building
Every repository should have a Makefile and can be built using the `make` command. The `make` targets are based on the scripts in the `/scripts` directory in the repository, and each target will use [Dapper](https://github.com/rancher/dapper) to run the target in an isolated environment. The `Dockerfile.dapper` is used for this process and includes all the necessary build tooling.
The default target is `ci`, and will run `./scripts/validate`, `./scripts/build`, `./scripts/test` and `./scripts/package`. The resulting binaries of the build will be in `./build/bin` and are usually also packaged in a Docker image.
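For example, a typical build of the main Rancher repository looks like the following. Docker is required, since Dapper runs the build inside a container:
```
# Clone the repository and run the default (ci) target
git clone https://github.com/rancher/rancher.git
cd rancher
make
```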
# Bugs, Issues or Questions
If you find any bugs or are having any trouble, please search the [reported issues](https://github.com/rancher/rancher/issues), as someone may have experienced the same issue or we may already be working on a solution.
If you can't find anything related to your issue, contact us by [filing an issue](https://github.com/rancher/rancher/issues/new). Though we have many repositories related to Rancher, we want the bugs filed in the Rancher repository so we won't miss them! If you want to ask a question or ask fellow users about a use case, we suggest creating a post on the [Rancher Forums](https://forums.rancher.com).
### Checklist for Filing Issues
Please follow this checklist when filing an issue, as it helps us investigate and fix the issue. More info means more data we can use to determine what is causing the issue or what might be related to it.
>**Note:** For large amounts of data, please use [GitHub Gist](https://gist.github.com/) or similar and link the created resource in the issue.
>**Important:** Please remove any sensitive data as it will be publicly viewable.
- **Resources:** Provide as much detail as possible about the resources used. Because the source of an issue can be many things, including as much detail as possible helps to determine the root cause. See some examples below:
    - **Hosts:** What specifications does the host have (CPU/memory/disk)? What cloud is it running on? What Amazon Machine Image or DigitalOcean droplet are you using? What image are you provisioning that we can rebuild or use when trying to reproduce the issue?
    - **Operating System:** What operating system are you using? Providing specifics helps here, such as the output of `cat /etc/os-release` for the exact OS release and `uname -r` for the exact kernel used.
    - **Docker:** What Docker version are you using, and how did you install it? Most of the Docker details can be found by supplying the output of `docker version` and `docker info`.
    - **Environment:** Are you in a proxy environment? Are you using certificates from a recognized CA or self-signed certificates? Are you using an external load balancer?
    - **Rancher:** What version of Rancher are you using? This can be found on the bottom left of the UI or retrieved from the image tag you are running on the host.
    - **Clusters:** What kind of cluster did you create, how did you create it, and what did you specify when creating it?
- **Steps to reproduce the issue:** Provide as much detail as possible on how you got into the reported situation. This helps the person investigating reproduce the situation you are in.
- Provide manual steps or automation scripts used to get from a newly created setup to the situation you reported.
- **Logs:** Provide data/logs from the used resources.
- Rancher
- Docker install
```
docker logs \
--timestamps \
$(docker ps | grep -E "rancher/rancher:|rancher/rancher " | awk '{ print $1 }')
```
- Kubernetes install using `kubectl`
> **Note:** Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_rancher-cluster.yml` if Rancher is installed on a Kubernetes cluster) or are using the embedded kubectl via the UI.
```
kubectl -n cattle-system \
logs \
-l app=rancher \
--timestamps=true
```
- Docker install using `docker` on each of the nodes in the RKE cluster
```
docker logs \
--timestamps \
$(docker ps | grep -E "rancher/rancher@|rancher_rancher" | awk '{ print $1 }')
```
- Kubernetes Install with RKE Add-On
> **Note:** Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_rancher-cluster.yml` if the Rancher server is installed on a Kubernetes cluster) or are using the embedded kubectl via the UI.
```
kubectl -n cattle-system \
logs \
--timestamps=true \
-f $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name="cattle-server") | .metadata.name')
```
- System logging (these might not all exist, depending on operating system)
- `/var/log/messages`
- `/var/log/syslog`
- `/var/log/kern.log`
- Docker daemon logging (these might not all exist, depending on operating system)
- `/var/log/docker.log`
- **Metrics:** If you are experiencing performance issues, please provide as much metrics data (files or screenshots) as possible to help determine what is going on. If you have an issue related to a machine, it helps to supply the output of `top`, `free -m`, and `df`, which shows process/memory/disk usage.
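For example, you can capture these machine-level metrics into files to attach to the issue (a simple sketch; adjust the commands to your environment):
```
# Snapshot process, memory, and disk usage
top -b -n 1 > top.txt
free -m > memory.txt
df -h > disk.txt
```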
# Docs
If you have any updates to our documentation, please open a pull request against our docs repo.
- [Rancher 2.x Docs repository](https://github.com/rancher/docs): This repo is where all the docs for Rancher 2.x are located. They are located in the `content` folder in the repo.
- [Rancher 1.x Docs repository](https://github.com/rancher/rancher.github.io): This repo is where all the docs for Rancher 1.x are located. They are located in the `rancher` folder in the repo.
@@ -1,50 +0,0 @@
---
title: Enterprise Cluster Manager
weight: 6
---
After installation, the [system administrator]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) should configure authentication, authorization, security, default settings, security policies, drivers, and global DNS entries.
## First Log In
After you log into Rancher for the first time, Rancher will prompt you for a **Rancher Server URL**. You should set the URL to the main entry point to the Rancher Server. When a load balancer sits in front of a Rancher Server cluster, the URL should resolve to the load balancer. The system will automatically try to infer the Rancher Server URL from the IP address or host name of the host running the Rancher Server. This is only correct if you are running a single-node Rancher Server installation. In most cases, therefore, you need to set the Rancher Server URL to the correct value yourself.
>**Important!** After you set the Rancher Server URL, we do not support updating it. Set the URL with extreme care.
## Authentication
One of the key features that Rancher adds to Kubernetes is centralized user authentication. This feature allows you to set up local users and/or connect to an external authentication provider. By connecting to an external authentication provider, you can leverage that provider's users and groups.
For more information on how authentication works and how to configure each provider, see [Authentication]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/).
## Authorization
Within Rancher, each person authenticates as a _user_, which is a login that grants you access to Rancher. Once the user logs in to Rancher, their _authorization_, or their access rights within the system, is determined by the user's role. Rancher provides built-in roles to allow you to easily configure a user's permissions to resources, but Rancher also provides the ability to customize the roles for each Kubernetes resource.
For more information on how authorization works and how to customize roles, see [Role-Based Access Control (RBAC)]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/).
## Pod Security Policies
_Pod Security Policies_ (or PSPs) are objects that control security-sensitive aspects of pod specification, e.g. root privileges. If a pod does not meet the conditions specified in the PSP, Kubernetes will not allow it to start, and Rancher will display an error message.
For more information how to create and use PSPs, see [Pod Security Policies]({{<baseurl>}}/rancher/v2.x/en/admin-settings/pod-security-policies/).
## Provisioning Drivers
Drivers in Rancher allow you to manage which providers can be used to provision [hosted Kubernetes clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) or [nodes in an infrastructure provider]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/) to allow Rancher to deploy and manage Kubernetes.
For more information, see [Provisioning Drivers]({{<baseurl>}}/rancher/v2.x/en/admin-settings/drivers/).
## Adding Kubernetes Versions into Rancher
With this feature, you can upgrade to the latest version of Kubernetes as soon as it is released, without upgrading Rancher. This feature allows you to easily upgrade Kubernetes patch versions (i.e. `v1.15.X`), but it is not intended for upgrading Kubernetes minor versions (i.e. `v1.X.0`), as Kubernetes tends to deprecate or add APIs between minor versions.
The information that Rancher uses to provision [RKE clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) is located in the Rancher Kubernetes Metadata. For details on how metadata works, how to configure it, and how to change the Kubernetes version used for provisioning RKE clusters, see [Rancher Kubernetes Metadata.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/k8s-metadata/)
## Enabling Experimental Features
Rancher includes some features that are experimental and disabled by default. Feature flags were introduced to allow you to try these features. For more information, refer to the section about [feature flags.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/feature-flags/)
@@ -1,36 +0,0 @@
---
title: Access Control
weight: 1
---
> This section is under construction.
There are many ways you can interact with Kubernetes clusters that are managed by Rancher:
- **Rancher UI**
Rancher provides an intuitive user interface for interacting with your clusters. All options available in the UI use the Rancher API. Therefore any action possible in the UI is also possible in the Rancher CLI or Rancher API.
- **kubectl**
You can use the Kubernetes command-line tool, [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), to manage your clusters. You have two options for using kubectl:
- **Rancher kubectl shell**
Interact with your clusters by launching a kubectl shell available in the Rancher UI. This option requires no configuration actions on your part.
For more information, see [Accessing Clusters with kubectl Shell]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-shell).
- **Terminal remote connection**
You can also interact with your clusters by installing [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your local desktop and then copying the cluster's kubeconfig file to your local `~/.kube/config` directory.
For more information, see [Accessing Clusters with kubectl and a kubeconfig File]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-and-a-kubeconfig-file).
- **Rancher CLI**
You can control your clusters by downloading Rancher's own command-line interface, [Rancher CLI]({{<baseurl>}}/rancher/v2.x/en/cli/). This CLI tool can interact directly with different clusters and projects or pass them `kubectl` commands.
- **Rancher API**
Finally, you can interact with your clusters over the Rancher API. Before you use the API, you must obtain an [API key]({{<baseurl>}}/rancher/v2.x/en/user-settings/api-keys/). To view the different resource fields and actions for an API object, open the API UI, which can be accessed by clicking on **View in API** for any Rancher UI object.
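For example, once you have an API key, a request using HTTP basic authentication might look like the following sketch. The access key, secret key, and server URL are placeholders:
```
# List clusters visible to the API key (placeholder credentials and URL)
curl -u "token-abcde:<SECRET_KEY>" https://<RANCHER_URL>/v3/clusters
```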
@@ -1,98 +0,0 @@
---
title: Authentication Providers
weight: 1
---
> This section is under construction.
One of the key features that Rancher adds to Kubernetes is centralized user authentication. This feature allows your users to use one set of credentials to authenticate with any of your Kubernetes clusters.
This centralized user authentication is accomplished using the Rancher authentication proxy, which is installed along with the rest of Rancher. This proxy authenticates your users and forwards their requests to your Kubernetes clusters using a service account.
<!-- todomark add diagram -->
## External vs. Local Authentication
The Rancher authentication proxy integrates with the following external authentication services. The following table lists the first version of Rancher in which each service debuted.
| Auth Service | Available as of |
| ------------------------------------------------------------------------------------------------ | ---------------- |
| [Microsoft Active Directory]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/ad/) | v2.0.0 |
| [GitHub]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/github/) | v2.0.0 |
| [Microsoft Azure AD]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/azure-ad/) | v2.0.3 |
| [FreeIPA]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/freeipa/) | v2.0.5 |
| [OpenLDAP]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/openldap/) | v2.0.5 |
| [Microsoft AD FS]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/) | v2.0.7 |
| [PingIdentity]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/ping-federate/) | v2.0.7 |
| [Keycloak]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/keycloak/) | v2.1.0 |
| [Okta]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/okta/) | v2.2.0 |
| [Google OAuth]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/google/) | v2.3.0 |
| [Shibboleth]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/shibboleth) | v2.4.0 |
<br/>
However, Rancher also provides [local authentication]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/local/).
In most cases, you should use an external authentication service over local authentication, as external authentication allows user management from a central location. However, you may want a few local authentication users for managing Rancher under rare circumstances, such as if your external authentication provider is unavailable or undergoing maintenance.
## Users and Groups
Rancher relies on users and groups to determine who is allowed to log in to Rancher and which resources they can access. When authenticating with an external provider, groups are provided from the external provider based on the user. These users and groups are given specific roles to resources like clusters, projects, multi-cluster apps, and global DNS providers and entries. When you give access to a group, all users who are a member of that group in the authentication provider will be able to access the resource with the permissions that you've specified. For more information on roles and permissions, see [Role Based Access Control]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/).
> **Note:** Local authentication does not support creating or managing groups.
For more information, see [Users and Groups]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/user-groups/)
## Scope of Rancher Authorization
After you configure Rancher to allow sign on using an external authentication service, you should configure who should be allowed to log in and use Rancher. The following options are available:
| Access Level | Description |
|----------------------------------------------|-------------|
| Allow any valid Users | _Any_ user in the authorization service can access Rancher. We generally discourage use of this setting! |
| Allow members of Clusters, Projects, plus Authorized Users and Organizations | Any user in the authorization service and any group added as a **Cluster Member** or **Project Member** can log in to Rancher. Additionally, any user in the authentication service or group you add to the **Authorized Users and Organizations** list may log in to Rancher. |
| Restrict access to only Authorized Users and Organizations | Only users in the authentication service or groups added to the Authorized Users and Organizations can log in to Rancher. |
To set the Rancher access level for users in the authorization service, follow these steps:
1. From the **Global** view, click **Security > Authentication.**
1. Use the **Site Access** options to configure the scope of user authorization. The table above explains the access level for each option.
1. Optional: If you choose an option other than **Allow any valid Users,** you can add users to the list of authorized users and organizations by searching for them in the text field that appears.
1. Click **Save.**
**Result:** The Rancher access configuration settings are applied.
{{< saml_caveats >}}
## External Authentication Configuration and Principal Users
Configuration of external authentication requires:
- A local user assigned the administrator role, called hereafter the _local principal_.
- An external user that can authenticate with your external authentication service, called hereafter the _external principal_.
Configuration of external authentication affects how principal users are managed within Rancher. Follow the list below to better understand these effects.
1. Sign into Rancher as the local principal and complete configuration of external authentication.
![Sign In]({{<baseurl>}}/img/rancher/sign-in.png)
2. Rancher associates the external principal with the local principal. These two users share the local principal's user ID.
![Principal ID Sharing]({{<baseurl>}}/img/rancher/principal-ID.png)
3. After you complete configuration, Rancher automatically signs out the local principal.
![Sign Out Local Principal]({{<baseurl>}}/img/rancher/sign-out-local.png)
4. Then, Rancher automatically signs you back in as the external principal.
![Sign In External Principal]({{<baseurl>}}/img/rancher/sign-in-external.png)
5. Because the external principal and the local principal share an ID, no unique object for the external principal displays on the Users page.
![Sign In External Principal]({{<baseurl>}}/img/rancher/users-page.png)
6. The external principal and the local principal share the same access rights.
@@ -1,197 +0,0 @@
---
title: Configuring Active Directory (AD)
weight: 2
---
If your organization uses Microsoft Active Directory as its central user repository, you can configure Rancher to communicate with an Active Directory server to authenticate users. This allows Rancher admins to control access to clusters and projects based on users and groups managed externally in Active Directory, while allowing end users to authenticate with their AD credentials when logging in to the Rancher UI.
Rancher uses LDAP to communicate with the Active Directory server. The authentication flow for Active Directory is therefore the same as for the [OpenLDAP authentication]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/openldap) integration.
> **Note:**
>
> Before you start, please familiarise yourself with the concepts of [External Authentication Configuration and Principal Users]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users).
## Prerequisites
You'll need to create, or obtain from your AD administrator, a new AD user to use as a service account for Rancher. This user must have sufficient permissions to perform LDAP searches and read attributes of users and groups under your AD domain.
Usually a (non-admin) **Domain User** account should be used for this purpose, as by default such a user has read-only privileges for most objects in the domain partition.
Note, however, that in some locked-down Active Directory configurations this default behaviour may not apply. In that case you will need to ensure that the service account user has at least **Read** and **List Content** permissions granted either on the Base OU (enclosing users and groups) or globally for the domain.
> **Using TLS?**
>
> If the certificate used by the AD server is self-signed or not from a recognised certificate authority, make sure you have the CA certificate (concatenated with any intermediate certificates) in PEM format at hand. You will have to paste in this certificate during configuration so that Rancher can validate the certificate chain.
## Configuration Steps
### Open Active Directory Configuration
1. Log into the Rancher UI using the initial local `admin` account.
2. From the **Global** view, navigate to **Security** > **Authentication**
3. Select **Active Directory**. The **Configure an AD server** form will be displayed.
### Configure Active Directory Server Settings
In the section titled `1. Configure an Active Directory server`, complete the fields with the information specific to your Active Directory server. Please refer to the following table for detailed information on the required values for each parameter.
> **Note:**
>
> If you are unsure about the correct values to enter in the user/group Search Base field, please refer to [Identify Search Base and Schema using ldapsearch](#annex-identify-search-base-and-schema-using-ldapsearch).
**Table 1: AD Server parameters**
| Parameter | Description |
|:--|:--|
| Hostname | Specify the hostname or IP address of the AD server |
| Port | Specify the port at which the Active Directory server is listening for connections. Unencrypted LDAP normally uses the standard port of 389, while LDAPS uses port 636.|
| TLS | Check this box to enable LDAP over SSL/TLS (commonly known as LDAPS).|
| Server Connection Timeout | The duration in number of seconds that Rancher waits before considering the AD server unreachable. |
| Service Account Username | Enter the username of an AD account with read-only access to your domain partition (see [Prerequisites](#prerequisites)). The username can be entered in NetBIOS format (e.g. "DOMAIN\serviceaccount") or UPN format (e.g. "serviceaccount@domain.com"). |
| Service Account Password | The password for the service account. |
| Default Login Domain | When you configure this field with the NetBIOS name of your AD domain, usernames entered without a domain (e.g. "jdoe") will automatically be converted to a slashed, NetBIOS logon (e.g. "LOGIN_DOMAIN\jdoe") when binding to the AD server. If your users authenticate with the UPN (e.g. "jdoe@acme.com") as username then this field **must** be left empty. |
| User Search Base | The Distinguished Name of the node in your directory tree from which to start searching for user objects. All users must be descendents of this base DN. For example: "ou=people,dc=acme,dc=com".|
| Group Search Base | If your groups live under a different node than the one configured under `User Search Base` you will need to provide the Distinguished Name here. Otherwise leave it empty. For example: "ou=groups,dc=acme,dc=com".|
---
### Configure User/Group Schema
In the section titled `2. Customize Schema` you must provide Rancher with a correct mapping of user and group attributes corresponding to the schema used in your directory.
Rancher uses LDAP queries to search for and retrieve information about users and groups within the Active Directory. The attribute mappings configured in this section are used to construct search filters and resolve group membership. It is therefore paramount that the provided settings reflect the reality of your AD domain.
> **Note:**
>
> If you are unfamiliar with the schema used in your Active Directory domain, please refer to [Identify Search Base and Schema using ldapsearch](#annex-identify-search-base-and-schema-using-ldapsearch) to determine the correct configuration values.
#### User Schema
The table below details the parameters for the user schema section configuration.
**Table 2: User schema configuration parameters**
| Parameter | Description |
|:--|:--|
| Object Class | The name of the object class used for user objects in your domain. If defined, only specify the name of the object class - *don't* include it in an LDAP wrapper such as &(objectClass=xxxx) |
| Username Attribute | The user attribute whose value is suitable as a display name. |
| Login Attribute | The attribute whose value matches the username part of credentials entered by your users when logging in to Rancher. If your users authenticate with their UPN (e.g. "jdoe@acme.com") as username then this field must normally be set to `userPrincipalName`. Otherwise for the old, NetBIOS-style logon names (e.g. "jdoe") it's usually `sAMAccountName`. |
| User Member Attribute | The attribute containing the groups that a user is a member of. |
| Search Attribute | When a user enters text to add users or groups in the UI, Rancher queries the AD server and attempts to match users by the attributes provided in this setting. Multiple attributes can be specified by separating them with the pipe ("\|") symbol. To match UPN usernames (e.g. jdoe@acme.com) you should usually set the value of this field to `userPrincipalName`. |
| Search Filter | This filter gets applied to the list of users that is searched when Rancher attempts to add users to a site access list or tries to add members to clusters or projects. For example, a user search filter could be <code>(&#124;(memberOf=CN=group1,CN=Users,DC=testad,DC=rancher,DC=io)(memberOf=CN=group2,CN=Users,DC=testad,DC=rancher,DC=io))</code>. Note: If the search filter does not use [valid AD search syntax,](https://docs.microsoft.com/en-us/windows/win32/adsi/search-filter-syntax) the list of users will be empty. |
| User Enabled Attribute | The attribute containing an integer value representing a bitwise enumeration of user account flags. Rancher uses this to determine if a user account is disabled. You should normally leave this set to the AD standard `userAccountControl`. |
| Disabled Status Bitmask | This is the value of the `User Enabled Attribute` designating a disabled user account. You should normally leave this set to the default value of "2" as specified in the Microsoft Active Directory schema (see [here](https://docs.microsoft.com/en-us/windows/desktop/adschema/a-useraccountcontrol#remarks)). |
---
#### Group Schema
The table below details the parameters for the group schema configuration.
**Table 3: Group schema configuration parameters**
| Parameter | Description |
|:--|:--|
| Object Class | The name of the object class used for group objects in your domain. If defined, only specify the name of the object class - *don't* include it in an LDAP wrapper such as &(objectClass=xxxx) |
| Name Attribute | The group attribute whose value is suitable for a display name. |
| Group Member User Attribute | The name of the **user attribute** whose format matches the group members in the `Group Member Mapping Attribute`. |
| Group Member Mapping Attribute | The name of the group attribute containing the members of a group. |
| Search Attribute | Attribute used to construct search filters when adding groups to clusters or projects. See description of user schema `Search Attribute`. |
| Search Filter | This filter gets applied to the list of groups that is searched when Rancher attempts to add groups to a site access list or tries to add groups to clusters or projects. For example, a group search filter could be <code>(&#124;(cn=group1)(cn=group2))</code>. Note: If the search filter does not use [valid AD search syntax,](https://docs.microsoft.com/en-us/windows/win32/adsi/search-filter-syntax) the list of groups will be empty. |
| Group DN Attribute | The name of the group attribute whose format matches the values in the user attribute describing the user's memberships. See `User Member Attribute`. |
| Nested Group Membership | This setting defines whether Rancher should resolve nested group memberships. Use it only if your organisation makes use of these nested memberships (i.e. you have groups that contain other groups as members). |
---
### Test Authentication
Once you have completed the configuration, proceed by testing the connection to the AD server **using your AD admin account**. If the test is successful, authentication with the configured Active Directory will be enabled implicitly, with the account you test with set as the administrator.
> **Note:**
>
> The AD user pertaining to the credentials entered in this step will be mapped to the local principal account and assigned administrator privileges in Rancher. You should therefore make a conscious decision on which AD account you use to perform this step.
1. Enter the **username** and **password** for the AD account that should be mapped to the local principal account.
2. Click **Authenticate with Active Directory** to finalise the setup.
**Result:**
- Active Directory authentication has been enabled.
- You have been signed into Rancher as administrator using the provided AD credentials.
> **Note:**
>
> You will still be able to login using the locally configured `admin` account and password in case of a disruption of LDAP services.
## Annex: Identify Search Base and Schema using ldapsearch
In order to successfully configure AD authentication it is crucial that you provide the correct configuration pertaining to the hierarchy and schema of your AD server.
The [`ldapsearch`](http://manpages.ubuntu.com/manpages/artful/man1/ldapsearch.1.html) tool allows you to query your AD server to learn about the schema used for user and group objects.
For the purpose of the example commands provided below we will assume:
- The Active Directory server has a hostname of `ad.acme.com`
- The server is listening for unencrypted connections on port `389`
- The Active Directory domain is `acme`
- You have a valid AD account with the username `jdoe` and password `secret`
### Identify Search Base
First we will use `ldapsearch` to identify the Distinguished Name (DN) of the parent node(s) for users and groups:
```
$ ldapsearch -x -D "acme\jdoe" -w "secret" -p 389 \
-h ad.acme.com -b "dc=acme,dc=com" -s sub "sAMAccountName=jdoe"
```
This command performs an LDAP search with the search base set to the domain root (`-b "dc=acme,dc=com"`) and a filter targeting the user account (`sAMAccountName=jdoe`), returning the attributes for said user:
{{< img "/img/rancher/ldapsearch-user.png" "LDAP User">}}
Since in this case the user's DN is `CN=John Doe,CN=Users,DC=acme,DC=com` [5], we should configure the **User Search Base** with the parent node DN `CN=Users,DC=acme,DC=com`.
Similarly, based on the DN of the group referenced in the **memberOf** attribute [4], the correct value for the **Group Search Base** would be the parent node of that value, i.e. `OU=Groups,DC=acme,DC=com`.
### Identify User Schema
The output of the above `ldapsearch` query also allows us to determine the correct values to use in the user schema configuration:
- `Object Class`: **person** [1]
- `Username Attribute`: **name** [2]
- `Login Attribute`: **sAMAccountName** [3]
- `User Member Attribute`: **memberOf** [4]
> **Note:**
>
> If the AD users in our organisation were to authenticate with their UPN (e.g. jdoe@acme.com) instead of the short logon name, then we would have to set the `Login Attribute` to **userPrincipalName** instead.
We'll also set the `Search Attribute` parameter to **sAMAccountName|name**. That way users can be added to clusters/projects in the Rancher UI either by entering their username or full name.
### Identify Group Schema
Next, we'll query one of the groups associated with this user, in this case `CN=examplegroup,OU=Groups,DC=acme,DC=com`:
```
$ ldapsearch -x -D "acme\jdoe" -w "secret" -p 389 \
-h ad.acme.com -b "ou=groups,dc=acme,dc=com" \
-s sub "CN=examplegroup"
```
This command will show us the attributes used for group objects:
{{< img "/img/rancher/ldapsearch-group.png" "LDAP Group">}}
Again, this allows us to determine the correct values to enter in the group schema configuration:
- `Object Class`: **group** [1]
- `Name Attribute`: **name** [2]
- `Group Member Mapping Attribute`: **member** [3]
- `Search Attribute`: **sAMAccountName** [4]
Looking at the value of the **member** attribute, we can see that it contains the DN of the referenced user. This corresponds to the **distinguishedName** attribute in our user object. Accordingly, we will have to set the value of the `Group Member User Attribute` parameter to this attribute.
In the same way, we can observe that the value in the **memberOf** attribute in the user object corresponds to the **distinguishedName** [5] of the group. We therefore need to set the value for the `Group DN Attribute` parameter to this attribute.
## Annex: Troubleshooting
If you are experiencing issues while testing the connection to the Active Directory server, first double-check the credentials entered for the service account as well as the search base configuration. You may also inspect the Rancher logs to help pinpoint the cause of the problem. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{<baseurl>}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation.
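For example, assuming Rancher is installed on a Kubernetes cluster in the `cattle-system` namespace, you can follow its logs while retrying the connection test:
```
# Stream the Rancher server logs with timestamps
kubectl -n cattle-system logs -l app=rancher --timestamps=true -f
```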
@@ -1,204 +0,0 @@
---
title: Configuring Azure AD
weight: 5
---
If you have an instance of Active Directory (AD) hosted in Azure, you can configure Rancher to allow your users to log in using their AD accounts. Configuration of Azure AD external authentication requires you to make configurations in both Azure and Rancher.
>**Note:** Azure AD integration only supports Service Provider initiated logins.
>**Prerequisite:** Have an instance of Azure AD configured.
>**Note:** Most of this procedure takes place from the [Microsoft Azure Portal](https://portal.azure.com/).
## Azure Active Directory Configuration Outline
Configuring Rancher to allow your users to authenticate with their Azure AD accounts involves multiple procedures. Review the outline below before getting started.
<a id="tip"></a>
>**Tip:** Before you start, we recommend creating an empty text file. You can use this file to copy values from Azure that you'll paste into Rancher later.
<!-- TOC -->
- [1. Register Rancher with Azure](#1-register-rancher-with-azure)
- [2. Create a New Client Secret](#2-create-a-new-client-secret)
- [3. Set Required Permissions for Rancher](#3-set-required-permissions-for-rancher)
- [4. Add a Reply URL](#4-add-a-reply-url)
- [5. Copy Azure Application Data](#5-copy-azure-application-data)
- [6. Configure Azure AD in Rancher](#6-configure-azure-ad-in-rancher)
<!-- /TOC -->
### 1. Register Rancher with Azure
Before enabling Azure AD within Rancher, you must register Rancher with Azure.
1. Log in to [Microsoft Azure](https://portal.azure.com/) as an administrative user. Configuration in future steps requires administrative access rights.
1. Use search to open the **App registrations** service.
![Open App Registrations]({{<baseurl>}}/img/rancher/search-app-registrations.png)
1. Click **New registrations** and complete the **Create** form.
![New App Registration]({{<baseurl>}}/img/rancher/new-app-registration.png)
1. Enter a **Name** (something like `Rancher`).
1. From **Supported account types**, select "Accounts in this organizational directory only (AzureADTest only - Single tenant)." This corresponds to the legacy app registration options.
1. In the **Redirect URI** section, make sure **Web** is selected from the dropdown and enter the URL of your Rancher Server in the text box next to the dropdown. This Rancher server URL should be appended with the verification path: `<MY_RANCHER_URL>/verify-auth-azure`.
>**Tip:** You can find your personalized Azure reply URL in Rancher on the Azure AD Authentication page (Global View > Security Authentication > Azure AD).
1. Click **Register**.
>**Note:** It can take up to five minutes for this change to take effect, so don't be alarmed if you can't authenticate immediately after Azure AD configuration.
### 2. Create a New Client Secret
From the Azure portal, create a client secret. Rancher will use this key to authenticate with Azure AD.
1. Use search to open **App registrations** services. Then open the entry for Rancher that you created in the last procedure.
![Open Rancher Registration]({{<baseurl>}}/img/rancher/open-rancher-app.png)
1. From the navigation pane on left, click **Certificates and Secrets**.
1. Click **New client secret**.
![Create new client secret]({{< baseurl >}}/img/rancher/select-client-secret.png)
1. Enter a **Description** (something like `Rancher`).
1. Select duration for the key from the options under **Expires**. This drop-down sets the expiration date for the key. Shorter durations are more secure, but require you to create a new key after expiration.
1. Click **Add** (you don't need to enter a value—it will automatically populate after you save).
<a id="secret"></a>
1. Copy the key value and save it to an [empty text file](#tip).
You'll enter this key into the Rancher UI later as your **Application Secret**.
You won't be able to access the key value again within the Azure UI.
### 3. Set Required Permissions for Rancher
Next, set API permissions for Rancher within Azure.
1. From the navigation pane on left, select **API permissions**.
![Open Required Permissions]({{<baseurl>}}/img/rancher/select-required-permissions.png)
1. Click **Add a permission**.
1. From the **Azure Active Directory Graph**, select the following **Delegated Permissions**:
![Select API Permissions]({{< baseurl >}}/img/rancher/select-required-permissions-2.png)
<br/>
<br/>
- **Access the directory as the signed-in user**
- **Read directory data**
- **Read all groups**
- **Read all users' full profiles**
- **Read all users' basic profiles**
- **Sign in and read user profile**
1. Click **Add permissions**.
1. From **API permissions**, click **Grant admin consent**. Then click **Yes**.
>**Note:** You must be signed in as an Azure administrator to successfully save your permission settings.
### 4. Add a Reply URL
To use Azure AD with Rancher you must whitelist Rancher with Azure. You can complete this whitelisting by providing Azure with a reply URL for Rancher, which is your Rancher Server URL followed with a verification path.
1. From the **Settings** blade, select **Reply URLs**.
![Azure: Enter Reply URL]({{<baseurl>}}/img/rancher/enter-azure-reply-url.png)
1. From the **Reply URLs** blade, enter the URL of your Rancher Server, appended with the verification path: `<MY_RANCHER_URL>/verify-auth-azure`.
>**Tip:** You can find your personalized Azure reply URL in Rancher on the Azure AD Authentication page (Global View > Security Authentication > Azure AD).
1. Click **Save**.
**Result:** Your reply URL is saved.
>**Note:** It can take up to five minutes for this change to take effect, so don't be alarmed if you can't authenticate immediately after Azure AD configuration.
### 5. Copy Azure Application Data
As your final step in Azure, copy the data that you'll use to configure Rancher for Azure AD authentication and paste it into an empty text file.
1. Obtain your Rancher **Tenant ID**.
1. Use search to open the **Azure Active Directory** service.
![Open Azure Active Directory]({{<baseurl>}}/img/rancher/search-azure-ad.png)
1. From the left navigation pane, open **Overview**.
2. Copy the **Directory ID** and paste it into your [text file](#tip).
You'll paste this value into Rancher as your **Tenant ID**.
1. Obtain your Rancher **Application ID**.
1. Use search to open **App registrations**.
![Open App Registrations]({{<baseurl>}}/img/rancher/search-app-registrations.png)
1. Find the entry you created for Rancher.
1. Copy the **Application ID** and paste it to your [text file](#tip).
1. Obtain your Rancher **Graph Endpoint**, **Token Endpoint**, and **Auth Endpoint**.
1. From **App registrations**, click **Endpoints**.
![Click Endpoints]({{<baseurl>}}/img/rancher/click-endpoints.png)
2. Copy the following endpoints to your clipboard and paste them into your [text file](#tip) (these values will be your Rancher endpoint values).
- **Microsoft Graph API endpoint** (Graph Endpoint)
- **OAuth 2.0 token endpoint (v1)** (Token Endpoint)
- **OAuth 2.0 authorization endpoint (v1)** (Auth Endpoint)
>**Note:** Copy the v1 version of the endpoints.
### 6. Configure Azure AD in Rancher
From the Rancher UI, enter information about your AD instance hosted in Azure to complete configuration.
Enter the values that you copied to your [text file](#tip).
1. Log into Rancher. From the **Global** view, select **Security > Authentication**.
1. Select **Azure AD**.
1. Complete the **Configure Azure AD Account** form using the information you copied while completing [Copy Azure Application Data](#5-copy-azure-application-data).
>**Important:** When entering your Graph Endpoint, remove the tenant ID from the URL, like below.
>
><code>http<span>s://g</span>raph.windows.net/<del>abb5adde-bee8-4821-8b03-e63efdc7701c</del></code>
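For illustration only, hypothetical endpoint values might look like the following (the tenant ID shown is a placeholder):

```
# Hypothetical example values copied from the Azure Endpoints pane:
#   Token Endpoint (v1): https://login.microsoftonline.com/00000000-0000-0000-0000-000000000000/oauth2/token
#   Auth Endpoint (v1):  https://login.microsoftonline.com/00000000-0000-0000-0000-000000000000/oauth2/authorize
#   Graph Endpoint as copied:             https://graph.windows.net/00000000-0000-0000-0000-000000000000
#   Graph Endpoint as entered in Rancher: https://graph.windows.net/
```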
The following table maps the values you copied in the Azure portal to the fields in Rancher.
| Rancher Field | Azure Value |
| ------------------ | ------------------------------------- |
| Tenant ID | Directory ID |
| Application ID | Application ID |
| Application Secret | Key Value |
| Endpoint | https://login.microsoftonline.com/ |
| Graph Endpoint | Microsoft Azure AD Graph API Endpoint |
| Token Endpoint | OAuth 2.0 Token Endpoint |
| Auth Endpoint | OAuth 2.0 Authorization Endpoint |
1. Click **Authenticate with Azure**.
**Result:** Azure Active Directory authentication is configured.
@@ -1,52 +0,0 @@
---
title: Configuring FreeIPA
weight: 4
---
If your organization uses FreeIPA for user authentication, you can configure Rancher to allow your users to login using their FreeIPA credentials.
>**Prerequisites:**
>
>- You must have a [FreeIPA Server](https://www.freeipa.org/) configured.
>- Create a service account in FreeIPA with `read-only` access. Rancher uses this account to verify group membership when a user makes a request using an API key.
>- Read [External Authentication Configuration and Principal Users]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users).
1. Sign into Rancher using a local user assigned the `administrator` role (i.e., the _local principal_).
2. From the **Global** view, select **Security > Authentication** from the main menu.
3. Select **FreeIPA**.
4. Complete the **Configure an FreeIPA server** form.
You may need to log in to your domain controller to find the information requested in the form.
>**Using TLS?**
>If the certificate is self-signed or not from a recognized certificate authority, make sure you provide the complete chain. That chain is needed to verify the server's certificate.
<br/>
<br/>
>**User Search Base vs. Group Search Base**
>
>Search base allows Rancher to search for users and groups that are in your FreeIPA. These fields are only for search bases and not for search filters.
>
>* If your users and groups are in the same search base, complete only the User Search Base.
>* If your groups are in a different search base, you can optionally complete the Group Search Base. This field is dedicated to searching groups, but is not required.
5. If your FreeIPA directory deviates from the standard FreeIPA schema, complete the **Customize Schema** form to match it. Otherwise, skip this step.
>**Search Attribute** The Search Attribute field defaults to three specific values: `uid|sn|givenName`. After FreeIPA is configured, when a user enters text to add users or groups, Rancher automatically queries the FreeIPA server and attempts to match fields by user ID, last name, or first name. Rancher specifically searches for users/groups that begin with the text entered in the search field.
>
>The default field value is `uid|sn|givenName`, but you can configure this field to use a subset of these fields. The pipe (`|`) separates the fields.
>
> * `uid`: User ID
> * `sn`: Last Name
> * `givenName`: First Name
>
> With this search attribute, Rancher creates search filters for users and groups, but you *cannot* add your own search filters in this field.
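As an illustration of the kind of filter this setting implies, the following `ldapsearch` (with a placeholder host, bind DN, and search base) finds entries whose `uid`, `sn`, or `givenName` begins with the text `jo`; Rancher builds a roughly equivalent query internally:

```
# Sketch only: prefix match on uid, sn, and givenName, mirroring the default
# search attribute uid|sn|givenName. All connection values are placeholders.
ldapsearch -H ldaps://freeipa.example.com \
  -D "uid=svc-rancher,cn=users,cn=accounts,dc=example,dc=com" -W \
  -b "cn=users,cn=accounts,dc=example,dc=com" \
  "(|(uid=jo*)(sn=jo*)(givenName=jo*))" uid sn givenName
```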
6. Enter your FreeIPA username and password in **Authenticate with FreeIPA** to confirm that Rancher is configured to use FreeIPA authentication.
**Result:**
- FreeIPA authentication is configured.
- You are signed into Rancher with your FreeIPA account (i.e., the _external principal_).
@@ -1,51 +0,0 @@
---
title: Configuring GitHub
weight: 6
---
In environments using GitHub, you can configure Rancher to allow sign on using GitHub credentials.
>**Prerequisites:** Read [External Authentication Configuration and Principal Users]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users).
1. Sign into Rancher using a local user assigned the `administrator` role (i.e., the _local principal_).
2. From the **Global** view, select **Security > Authentication** from the main menu.
3. Select **GitHub**.
4. Follow the directions displayed to **Setup a GitHub Application**. Rancher redirects you to GitHub to complete registration.
>**What's an Authorization Callback URL?**
>
>The Authorization Callback URL is the URL where users go to begin using your application (i.e. the splash screen).
>When you use external authentication, authentication does not actually take place in your application. Instead, authentication takes place externally (in this case, GitHub). After this external authentication completes successfully, the Authorization Callback URL is the location where the user re-enters your application.
5. From GitHub, copy the **Client ID** and **Client Secret**. Paste them into Rancher.
>**Where do I find the Client ID and Client Secret?**
>
>From GitHub, select Settings > Developer Settings > OAuth Apps. The Client ID and Client Secret are displayed prominently.
6. Click **Authenticate with GitHub**.
7. Use the **Site Access** options to configure the scope of user authorization.
- **Allow any valid Users**
_Any_ GitHub user can access Rancher. We generally discourage use of this setting!
- **Allow members of Clusters, Projects, plus Authorized Users and Organizations**
Any GitHub user or group added as a **Cluster Member** or **Project Member** can log in to Rancher. Additionally, any GitHub user or group you add to the **Authorized Users and Organizations** list may log in to Rancher.
- **Restrict access to only Authorized Users and Organizations**
Only GitHub users or groups added to the Authorized Users and Organizations can log in to Rancher.
<br/>
8. Click **Save**.
**Result:**
- GitHub authentication is configured.
- You are signed into Rancher with your GitHub account (i.e., the _external principal_).
@@ -1,106 +0,0 @@
---
title: Configuring Google OAuth
weight: 12
---
If your organization uses G Suite for user authentication, you can configure Rancher to allow your users to log in using their G Suite credentials.
Only admins of the G Suite domain have access to the Admin SDK. Therefore, only G Suite admins can configure Google OAuth for Rancher.
Within Rancher, only administrators or users with the **Manage Authentication** [global role]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) can configure authentication.
# Prerequisites
- You must have a [G Suite admin account](https://admin.google.com) configured.
- G Suite requires a [top private domain FQDN](https://github.com/google/guava/wiki/InternetDomainNameExplained#public-suffixes-and-private-domains) as an authorized domain. One way to get an FQDN is by creating an A-record in Route53 for your Rancher server. You do not need to update your Rancher Server URL setting with that record, because there could be clusters using that URL.
- You must have the Admin SDK API enabled for your G Suite domain. You can enable it using the steps on [this page.](https://support.google.com/a/answer/60757?hl=en)
After the Admin SDK API is enabled, your G Suite domain's API screen should look like this:
![Enable Admin APIs]({{<baseurl>}}/img/rancher/Google-Enable-APIs-Screen.png)
# Setting up G Suite for OAuth with Rancher
Before you can set up Google OAuth in Rancher, you need to log in to your G Suite account and do the following:
1. [Add Rancher as an authorized domain in G Suite](#1-adding-rancher-as-an-authorized-domain)
1. [Generate OAuth2 credentials for the Rancher server](#2-creating-oauth2-credentials-for-the-rancher-server)
1. [Create service account credentials for the Rancher server](#3-creating-service-account-credentials)
1. [Register the service account key as an OAuth Client](#4-register-the-service-account-key-as-an-oauth-client)
### 1. Adding Rancher as an Authorized Domain
1. Click [here](https://console.developers.google.com/apis/credentials) to go to credentials page of your Google domain.
1. Select your project and click **OAuth consent screen.**
![OAuth Consent Screen]({{<baseurl>}}/img/rancher/Google-OAuth-consent-screen-tab.png)
1. Go to **Authorized Domains** and enter the top private domain of your Rancher server URL in the list. The top private domain is the rightmost superdomain. For example, www.foo.co.uk has a top private domain of foo.co.uk. For more information on top-level domains, refer to [this article.](https://github.com/google/guava/wiki/InternetDomainNameExplained#public-suffixes-and-private-domains)
1. Go to **Scopes for Google APIs** and make sure **email,** **profile** and **openid** are enabled.
**Result:** Rancher has been added as an authorized domain for the Admin SDK API.
### 2. Creating OAuth2 Credentials for the Rancher Server
1. Go to the Google API console, select your project, and go to the [credentials page.](https://console.developers.google.com/apis/credentials)
![Credentials]({{<baseurl>}}/img/rancher/Google-Credentials-tab.png)
1. On the **Create Credentials** dropdown, select **OAuth client ID.**
1. Click **Web application.**
1. Provide a name.
1. Fill out the **Authorized JavaScript origins** and **Authorized redirect URIs.** Note: The Rancher UI page for setting up Google OAuth (available from the Global view under **Security > Authentication > Google**) provides you the exact links to enter for this step.
- Under **Authorized JavaScript origins,** enter your Rancher server URL.
- Under **Authorized redirect URIs,** enter your Rancher server URL appended with the path `verify-auth`. For example, if your URI is `https://rancherServer`, you will enter `https://rancherServer/verify-auth`.
1. Click on **Create.**
1. After the credential is created, you will see a screen with a list of your credentials. Choose the credential you just created, and in that row on rightmost side, click **Download JSON.** Save the file so that you can provide these credentials to Rancher.
**Result:** Your OAuth credentials have been successfully created.
### 3. Creating Service Account Credentials
Since the Google Admin SDK is available only to admins, regular users cannot use it to retrieve profiles of other users or their groups. Regular users cannot even retrieve their own groups.
Since Rancher provides group-based membership access, we require the users to be able to get their own groups, and look up other users and groups when needed.
As a workaround to get this capability, G Suite recommends creating a service account and delegating authority of your G Suite domain to that service account.
This section describes how to:
- Create a service account
- Create a key for the service account and download the credentials as JSON
1. Click [here](https://console.developers.google.com/iam-admin/serviceaccounts) and select your project for which you generated OAuth credentials.
1. Click on **Create Service Account.**
1. Enter a name and click **Create.**
![Service account creation Step 1]({{<baseurl>}}/img/rancher/Google-svc-acc-step1.png)
1. Don't provide any roles on the **Service account permissions** page and click **Continue**
![Service account creation Step 2]({{<baseurl>}}/img/rancher/Google-svc-acc-step2.png)
1. Click on **Create Key** and select the JSON option. Download the JSON file and save it so that you can provide it as the service account credentials to Rancher.
![Service account creation Step 3]({{<baseurl>}}/img/rancher/Google-svc-acc-step3-key-creation.png)
**Result:** Your service account is created.
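The console steps above are the reference. As an optional sketch, assuming the `gcloud` CLI is installed and pointed at the same project, the service account and its JSON key can also be created from the command line (names are placeholders):

```
# Sketch only: create the service account and download a JSON key.
gcloud iam service-accounts create rancher-auth --display-name "Rancher authentication"
gcloud iam service-accounts keys create rancher-service-account.json \
  --iam-account rancher-auth@<PROJECT_ID>.iam.gserviceaccount.com
```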
### 4. Register the Service Account Key as an OAuth Client
You will need to grant some permissions to the service account you created in the last step. Rancher requires you to grant only read-only permissions for users and groups.
Using the Unique ID of the service account key, register it as an OAuth client using the following steps:
1. Get the Unique ID of the key you just created. If it's not displayed in the list of keys right next to the one you created, you will have to enable it. To enable it, click **Unique ID** and click **OK.** This will add a **Unique ID** column to the list of service account keys. Save the one listed for the service account you created. NOTE: This is a numeric key, not to be confused with the alphanumeric field **Key ID.**
![Service account Unique ID]({{<baseurl>}}/img/rancher/Google-Select-UniqueID-column.png)
1. Go to the [**Manage OAuth Client Access** page.](https://admin.google.com/AdminHome?chromeless=1#OGX:ManageOauthClients)
1. Add the Unique ID obtained in the previous step in the **Client Name** field.
1. In the **One or More API Scopes** field, add the following scopes:
```
openid,profile,email,https://www.googleapis.com/auth/admin.directory.user.readonly,https://www.googleapis.com/auth/admin.directory.group.readonly
```
1. Click **Authorize.**
**Result:** The service account is registered as an OAuth client in your G Suite account.
# Configuring Google OAuth in Rancher
1. Sign into Rancher using a local user assigned the [administrator]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions) role. This user is also called the local principal.
1. From the **Global** view, click **Security > Authentication** from the main menu.
1. Click **Google.** The instructions in the UI cover the steps to set up authentication with Google OAuth.
1. Admin Email: Provide the email of an administrator account from your G Suite setup. In order to perform user and group lookups, the Google APIs require an administrator's email in conjunction with the service account key.
1. Domain: Provide the domain on which you have configured G Suite. Provide the exact domain and not any aliases.
1. Nested Group Membership: Check this box to enable nested group memberships. Rancher admins can disable this at any time after configuring authentication.
- **Step One** is about adding Rancher as an authorized domain, which we already covered in [this section.](#1-adding-rancher-as-an-authorized-domain)
- For **Step Two,** provide the OAuth credentials JSON that you downloaded after completing [this section.](#2-creating-oauth2-credentials-for-the-rancher-server) You can upload the file or paste the contents into the **OAuth Credentials** field.
- For **Step Three,** provide the service account credentials JSON that you downloaded at the end of [this section.](#3-creating-service-account-credentials) The credentials will only work if you successfully [registered the service account key](#4-register-the-service-account-key-as-an-oauth-client) as an OAuth client in your G Suite account.
1. Click **Authenticate with Google**.
1. Click **Save**.
**Result:** Google authentication is successfully configured.
@@ -1,119 +0,0 @@
---
title: Configuring Keycloak (SAML)
description: Create a Keycloak SAML client and configure Rancher to work with Keycloak. By the end your users will be able to sign into Rancher using their Keycloak logins
weight: 7
---
If your organization uses Keycloak Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials.
## Prerequisites
- You must have a [Keycloak IdP Server](https://www.keycloak.org/docs/latest/server_installation/) configured.
- In Keycloak, create a [new SAML client](https://www.keycloak.org/docs/latest/server_admin/#saml-clients), with the settings below. See the [Keycloak documentation](https://www.keycloak.org/docs/latest/server_admin/#saml-clients) for help.
Setting | Value
------------|------------
`Sign Documents` | `ON` <sup>1</sup>
`Sign Assertions` | `ON` <sup>1</sup>
All other `ON/OFF` Settings | `OFF`
`Client ID` | `https://yourRancherHostURL/v1-saml/keycloak/saml/metadata`<sup>2</sup>
`Client Name` | <CLIENT_NAME> (e.g. `rancher`)
`Client Protocol` | `SAML`
`Valid Redirect URI` | `https://yourRancherHostURL/v1-saml/keycloak/saml/acs`
><sup>1</sup>: Optionally, you can enable either one or both of these settings.
><sup>2</sup>: Rancher SAML metadata won't be generated until a SAML provider is configured and saved.
- Export a `metadata.xml` file from your Keycloak client:
From the `Installation` tab, choose the `SAML Metadata IDPSSODescriptor` format option and download your file.
## Configuring Keycloak in Rancher
1. From the **Global** view, select **Security > Authentication** from the main menu.
1. Select **Keycloak**.
1. Complete the **Configure Keycloak Account** form. Keycloak IdP lets you specify what data store you want to use. You can either add a database or use an existing LDAP server. For example, if you select your Active Directory (AD) server, the examples below describe how you can map AD attributes to fields within Rancher.
| Field | Description |
| ------------------------- | ----------------------------------------------------------------------------- |
| Display Name Field | The AD attribute that contains the display name of users. |
| User Name Field | The AD attribute that contains the user name/given name. |
| UID Field | An AD attribute that is unique to every user. |
| Groups Field | Make entries for managing group memberships. |
| Rancher API Host | The URL for your Rancher Server. |
| Private Key / Certificate | A key/certificate pair to create a secure shell between Rancher and your IdP. |
| IDP-metadata | The `metadata.xml` file that you exported from your IdP server. |
>**Tip:** You can generate a key/certificate pair using an openssl command. For example:
>
> openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout myservice.key -out myservice.cert
1. After you complete the **Configure Keycloak Account** form, click **Authenticate with Keycloak**, which is at the bottom of the page.
Rancher redirects you to the IdP login page. Enter credentials that authenticate with Keycloak IdP to validate your Rancher Keycloak configuration.
>**Note:** You may have to disable your popup blocker to see the IdP login page.
**Result:** Rancher is configured to work with Keycloak. Your users can now sign into Rancher using their Keycloak logins.
{{< saml_caveats >}}
## Annex: Troubleshooting
If you are experiencing issues while testing the connection to the Keycloak server, first double-check the configuration of your SAML client. You may also inspect the Rancher logs to help pinpoint the cause of the problem. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{<baseurl>}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation.
### You are not redirected to Keycloak
When you click on **Authenticate with Keycloak**, you are not redirected to your IdP.
* Verify your Keycloak client configuration.
* Make sure `Force Post Binding` is set to `OFF`.
### Forbidden message displayed after IdP login
You are correctly redirected to your IdP login page and you are able to enter your credentials, however you get a `Forbidden` message afterwards.
* Check the Rancher debug log.
* If the log displays `ERROR: either the Response or Assertion must be signed`, make sure either `Sign Documents` or `Sign Assertions` is set to `ON` in your Keycloak client.
### HTTP 502 when trying to access /v1-saml/keycloak/saml/metadata
This is usually due to the metadata not being created until a SAML provider is configured.
Try configuring and saving Keycloak as your SAML provider and then accessing the metadata.
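To check whether the metadata is now being served, you can request it directly; for example (replace the host with your Rancher server URL, and note that `-k` skips TLS verification for self-signed certificates):

```
curl -sk https://yourRancherHostURL/v1-saml/keycloak/saml/metadata
```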
### Keycloak Error: "We're sorry, failed to process response"
* Check your Keycloak log.
* If the log displays `failed: org.keycloak.common.VerificationException: Client does not have a public key`, set `Encrypt Assertions` to `OFF` in your Keycloak client.
### Keycloak Error: "We're sorry, invalid requester"
* Check your Keycloak log.
* If the log displays `request validation failed: org.keycloak.common.VerificationException: SigAlg was null`, set `Client Signature Required` to `OFF` in your Keycloak client.
### Keycloak 6.0.0+: IDPSSODescriptor missing from options
Keycloak versions 6.0.0 and up no longer provide the IDP metadata under the `Installation` tab.
You can still get the XML from the following url:
`https://{KEYCLOAK-URL}/auth/realms/{REALM-NAME}/protocol/saml/descriptor`
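For example, assuming `curl` is available, you can download the descriptor before editing it:

```
curl -s "https://{KEYCLOAK-URL}/auth/realms/{REALM-NAME}/protocol/saml/descriptor" -o descriptor.xml
```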
The XML obtained from this URL contains `EntitiesDescriptor` as the root element. Rancher expects the root element to be `EntityDescriptor` rather than `EntitiesDescriptor`. So before passing this XML to Rancher, follow these steps to adjust it:
* Copy the attributes (such as the `xmlns` namespace declarations) from the `EntitiesDescriptor` element onto the `EntityDescriptor` element, if they are not already present.
* Remove the `<EntitiesDescriptor>` tag from the beginning.
* Remove the `</EntitiesDescriptor>` tag from the end of the XML.
You are left with something similar to the example below:
```
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" entityID="https://{KEYCLOAK-URL}/auth/realms/{REALM-NAME}">
....
</EntityDescriptor>
```
@@ -1,14 +0,0 @@
---
title: Local Authentication
weight: 1
---
Local authentication is the default until you configure an external authentication provider. With local authentication, Rancher itself stores the user information, i.e., the names and passwords of who can log in to Rancher. By default, the `admin` user that logs in to Rancher for the first time is a local user.
## Adding Local Users
Regardless of whether you use external authentication, you should create a few local authentication users so that you can continue using Rancher if your external authentication service encounters issues.
1. From the **Global** view, select **Users** from the navigation bar.
2. Click **Add User**. Then complete the **Add User** form. Click **Create** when you're done.
@@ -1,35 +0,0 @@
---
title: Configuring Microsoft Active Directory Federation Service (SAML)
weight: 9
---
If your organization uses Microsoft Active Directory Federation Services (AD FS) for user authentication, you can configure Rancher to allow your users to log in using their AD FS credentials.
## Prerequisites
- You must have Rancher installed.
- Obtain your Rancher Server URL. During AD FS configuration, substitute this URL for the `<RANCHER_SERVER>` placeholder.
- You must have a global administrator account on your Rancher installation.
- You must have a [Microsoft AD FS Server](https://docs.microsoft.com/en-us/windows-server/identity/active-directory-federation-services) configured.
- Obtain your AD FS Server IP/DNS name. During AD FS configuration, substitute this IP/DNS name for the `<AD_SERVER>` placeholder.
- You must have access to add [Relying Party Trusts](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/create-a-relying-party-trust) on your AD FS Server.
## Setup Outline
Setting up Microsoft AD FS with Rancher Server requires configuring AD FS on your Active Directory server, and configuring Rancher to utilize your AD FS server. The following pages serve as guides for setting up Microsoft AD FS authentication on your Rancher installation.
- [1 — Configuring Microsoft AD FS for Rancher]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup)
- [2 — Configuring Rancher for Microsoft AD FS]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup)
{{< saml_caveats >}}
### [Next: Configuring Microsoft AD FS for Rancher]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup)
@@ -1,82 +0,0 @@
---
title: 1 — Configuring Microsoft AD FS for Rancher
weight: 1205
---
Before configuring Rancher to support AD FS users, you must add Rancher as a [relying party trust](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/technical-reference/understanding-key-ad-fs-concepts) in AD FS.
1. Log into your AD server as an administrative user.
1. Open the **AD FS Management** console. Select **Add Relying Party Trust...** from the **Actions** menu and click **Start**.
{{< img "/img/rancher/adfs/adfs-overview.png" "">}}
1. Select **Enter data about the relying party manually** as the option for obtaining data about the relying party.
{{< img "/img/rancher/adfs/adfs-add-rpt-2.png" "">}}
1. Enter your desired **Display name** for your Relying Party Trust. For example, `Rancher`.
{{< img "/img/rancher/adfs/adfs-add-rpt-3.png" "">}}
1. Select **AD FS profile** as the configuration profile for your relying party trust.
{{< img "/img/rancher/adfs/adfs-add-rpt-4.png" "">}}
1. Leave the **optional token encryption certificate** empty, as Rancher AD FS will not be using one.
{{< img "/img/rancher/adfs/adfs-add-rpt-5.png" "">}}
1. Select **Enable support for the SAML 2.0 WebSSO protocol**
and enter `https://<rancher-server>/v1-saml/adfs/saml/acs` for the service URL.
{{< img "/img/rancher/adfs/adfs-add-rpt-6.png" "">}}
1. Add `https://<rancher-server>/v1-saml/adfs/saml/metadata` as the **Relying party trust identifier**.
{{< img "/img/rancher/adfs/adfs-add-rpt-7.png" "">}}
1. This tutorial will not cover multi-factor authentication; please refer to the [Microsoft documentation](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/configure-additional-authentication-methods-for-ad-fs) if you would like to configure multi-factor authentication.
{{< img "/img/rancher/adfs/adfs-add-rpt-8.png" "">}}
1. From **Choose Issuance Authorization Rules**, you may select either of the options available according to your use case. However, for the purposes of this guide, select **Permit all users to access this relying party**.
{{< img "/img/rancher/adfs/adfs-add-rpt-9.png" "">}}
1. After reviewing your settings, select **Next** to add the relying party trust.
{{< img "/img/rancher/adfs/adfs-add-rpt-10.png" "">}}
1. Select **Open the Edit Claim Rules...** and click **Close**.
{{< img "/img/rancher/adfs/adfs-add-rpt-11.png" "">}}
1. On the **Issuance Transform Rules** tab, click **Add Rule...**.
{{< img "/img/rancher/adfs/adfs-edit-cr.png" "">}}
1. Select **Send LDAP Attributes as Claims** as the **Claim rule template**.
{{< img "/img/rancher/adfs/adfs-add-tcr-1.png" "">}}
1. Set the **Claim rule name** to your desired name (for example, `Rancher Attributes`) and select **Active Directory** as the **Attribute store**. Create the following mapping to reflect the table below:
| LDAP Attribute | Outgoing Claim Type |
| -------------------------------------------- | ------------------- |
| Given-Name | Given Name |
| User-Principal-Name | UPN |
| Token-Groups - Qualified by Long Domain Name | Group |
| SAM-Account-Name | Name |
<br/>
{{< img "/img/rancher/adfs/adfs-add-tcr-2.png" "">}}
1. Download the `federationmetadata.xml` from your AD server at:
```
https://<AD_SERVER>/federationmetadata/2007-06/federationmetadata.xml
```
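For example, assuming `curl` is available on your workstation (the server name is a placeholder; `-k` may be needed if the AD FS server uses a certificate from an internal CA):

```
curl -k -o federationmetadata.xml "https://adfs.example.com/federationmetadata/2007-06/federationmetadata.xml"
```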
**Result:** You've added Rancher as a relying trust party. Now you can configure Rancher to leverage AD.
### [Next: Configuring Rancher for Microsoft AD FS]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/)
@@ -1,44 +0,0 @@
---
title: 2 — Configuring Rancher for Microsoft AD FS
weight: 1205
---
After you complete [Configuring Microsoft AD FS for Rancher]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup/), enter your AD FS information into Rancher to allow AD FS users to authenticate with Rancher.
>**Important Notes For Configuring Your AD FS Server:**
>
>- The SAML 2.0 WebSSO Protocol Service URL is: `https://<RANCHER_SERVER>/v1-saml/adfs/saml/acs`
>- The Relying Party Trust identifier URL is: `https://<RANCHER_SERVER>/v1-saml/adfs/saml/metadata`
>- You must export the `federationmetadata.xml` file from your AD FS server. This can be found at: `https://<AD_SERVER>/federationmetadata/2007-06/federationmetadata.xml`
1. From the **Global** view, select **Security > Authentication** from the main menu.
1. Select **Microsoft Active Directory Federation Services**.
1. Complete the **Configure AD FS Account** form. Microsoft AD FS lets you specify an existing Active Directory (AD) server. The examples below describe how you can map AD attributes to fields within Rancher.
| Field | Description |
| ------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Display Name Field | The AD attribute that contains the display name of users. <br/><br/>Example: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name` |
| User Name Field | The AD attribute that contains the user name/given name. <br/><br/>Example: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname` |
| UID Field | An AD attribute that is unique to every user. <br/><br/>Example: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` |
| Groups Field | Make entries for managing group memberships. <br/><br/>Example: `http://schemas.xmlsoap.org/claims/Group` |
| Rancher API Host | The URL for your Rancher Server. |
| Private Key / Certificate | This is a key-certificate pair to create a secure shell between Rancher and your AD FS. Ensure you set the Common Name (CN) to your Rancher Server URL.<br/><br/>[Certificate creation command](#cert-command) |
| Metadata XML | The `federationmetadata.xml` file exported from your AD FS server. <br/><br/>You can find this file at `https://<AD_SERVER>/federationmetadata/2007-06/federationmetadata.xml`. |
<a id="cert-command"></a>
>**Tip:** You can generate a certificate using an openssl command. For example:
>
> openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com"
1. After you complete the **Configure AD FS Account** form, click **Authenticate with AD FS**, which is at the bottom of the page.
Rancher redirects you to the AD FS login page. Enter credentials that authenticate with Microsoft AD FS to validate your Rancher AD FS configuration.
>**Note:** You may have to disable your popup blocker to see the AD FS login page.
**Result:** Rancher is configured to work with Microsoft AD FS. Your users can now sign into Rancher using their AD FS logins.
@@ -1,51 +0,0 @@
---
title: Configuring Okta (SAML)
weight: 10
---
If your organization uses Okta Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials.
>**Note:** Okta integration only supports Service Provider initiated logins.
## Prerequisites
In Okta, create a SAML Application with the settings below. See the [Okta documentation](https://developer.okta.com/standards/SAML/setting_up_a_saml_application_in_okta) for help.
Setting | Value
------------|------------
`Single Sign on URL` | `https://yourRancherHostURL/v1-saml/okta/saml/acs`
`Audience URI (SP Entity ID)` | `https://yourRancherHostURL/v1-saml/okta/saml/metadata`
## Configuring Okta in Rancher
1. From the **Global** view, select **Security > Authentication** from the main menu.
1. Select **Okta**.
1. Complete the **Configure Okta Account** form. The examples below describe how you can map Okta attributes from attribute statements to fields within Rancher.
| Field | Description |
| ------------------------- | ----------------------------------------------------------------------------- |
| Display Name Field | The attribute name from an attribute statement that contains the display name of users. |
| User Name Field | The attribute name from an attribute statement that contains the user name/given name. |
| UID Field | The attribute name from an attribute statement that is unique to every user. |
| Groups Field | The attribute name in a group attribute statement that exposes your groups. |
| Rancher API Host | The URL for your Rancher Server. |
| Private Key / Certificate | A key/certificate pair used for Assertion Encryption. |
| Metadata XML | The `Identity Provider metadata` file that you find in the application `Sign On` section. |
>**Tip:** You can generate a key/certificate pair using an openssl command. For example:
>
> openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout myservice.key -out myservice.crt
1. After you complete the **Configure Okta Account** form, click **Authenticate with Okta**, which is at the bottom of the page.
Rancher redirects you to the IdP login page. Enter credentials that authenticate with Okta IdP to validate your Rancher Okta configuration.
>**Note:** If nothing seems to happen, it's likely because your browser blocked the pop-up. Make sure you disable the pop-up blocker for your Rancher domain and whitelist it in any other extensions you might use.
**Result:** Rancher is configured to work with Okta. Your users can now sign into Rancher using their Okta logins.
{{< saml_caveats >}}
@@ -1,48 +0,0 @@
---
title: Configuring OpenLDAP
weight: 3
---
If your organization uses LDAP for user authentication, you can configure Rancher to communicate with an OpenLDAP server to authenticate users. This allows Rancher admins to control access to clusters and projects based on users and groups managed externally in the organisation's central user repository, while allowing end-users to authenticate with their LDAP credentials when logging in to the Rancher UI.
## Prerequisites
Rancher must be configured with a LDAP bind account (aka service account) to search and retrieve LDAP entries pertaining to users and groups that should have access. It is recommended to not use an administrator account or personal account for this purpose and instead create a dedicated account in OpenLDAP with read-only access to users and groups under the configured search base (see below).
> **Using TLS?**
>
> If the certificate used by the OpenLDAP server is self-signed or not from a recognised certificate authority, make sure you have the CA certificate (concatenated with any intermediate certificates) in PEM format at hand. You will have to paste in this certificate during the configuration so that Rancher is able to validate the certificate chain.
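Optionally, you can sanity-check the service account credentials and the user search base with a standalone LDAP client before configuring Rancher. The sketch below uses `ldapsearch` with placeholder values:

```
# Sketch only: verify that the bind account can search the user subtree.
ldapsearch -H ldaps://ldap.example.com:636 \
  -D "cn=rancher-bind,ou=people,dc=acme,dc=com" -W \
  -b "ou=people,dc=acme,dc=com" "(objectClass=inetOrgPerson)" dn uid
```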
## Configure OpenLDAP in Rancher
Configure the settings for the OpenLDAP server, groups and users. For help filling out each field, refer to the [configuration reference.](./openldap-config)
> Before you proceed with the configuration, please familiarise yourself with the concepts of [External Authentication Configuration and Principal Users]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users).
1. Log into the Rancher UI using the initial local `admin` account.
2. From the **Global** view, navigate to **Security** > **Authentication**
3. Select **OpenLDAP**. The **Configure an OpenLDAP server** form will be displayed.
### Test Authentication
Once you have completed the configuration, proceed by testing the connection to the OpenLDAP server. Authentication with OpenLDAP will be enabled implicitly if the test is successful.
> **Note:**
>
> The OpenLDAP user pertaining to the credentials entered in this step will be mapped to the local principal account and assigned administrator privileges in Rancher. You should therefore make a conscious decision on which LDAP account you use to perform this step.
1. Enter the **username** and **password** for the OpenLDAP account that should be mapped to the local principal account.
2. Click **Authenticate With OpenLDAP** to test the OpenLDAP connection and finalise the setup.
**Result:**
- OpenLDAP authentication is configured.
- The LDAP user pertaining to the entered credentials is mapped to the local principal (administrative) account.
> **Note:**
>
> You will still be able to login using the locally configured `admin` account and password in case of a disruption of LDAP services.
## Annex: Troubleshooting
If you are experiencing issues while testing the connection to the OpenLDAP server, first double-check the credentials entered for the service account as well as the search base configuration. You may also inspect the Rancher logs to help pinpoint the cause of the problem. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{<baseurl>}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation.
@@ -1,86 +0,0 @@
---
title: OpenLDAP Configuration Reference
weight: 2
---
This section is intended to be used as a reference when setting up an OpenLDAP authentication provider in Rancher.
For further details on configuring OpenLDAP, refer to the [official documentation.](https://www.openldap.org/doc/)
> Before you proceed with the configuration, please familiarize yourself with the concepts of [External Authentication Configuration and Principal Users]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users).
- [Background: OpenLDAP Authentication Flow](#background-openldap-authentication-flow)
- [OpenLDAP server configuration](#openldap-server-configuration)
- [User/group schema configuration](#user-group-schema-configuration)
- [User schema configuration](#user-schema-configuration)
- [Group schema configuration](#group-schema-configuration)
## Background: OpenLDAP Authentication Flow
1. When a user attempts to log in with their LDAP credentials, Rancher creates an initial bind to the LDAP server using a service account with permissions to search the directory and read user/group attributes.
2. Rancher then searches the directory for the user by using a search filter based on the provided username and configured attribute mappings.
3. Once the user has been found, they are authenticated with another LDAP bind request using the user's DN and the provided password.
4. Once authentication succeeds, Rancher resolves the group memberships both from the membership attribute in the user's object and by performing a group search based on the configured user mapping attribute.
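The commands below sketch the same two binds using standard OpenLDAP client tools, with placeholder values; Rancher performs the equivalent operations internally:

```
# 1. Bind as the service account and search for the user and their group attribute.
ldapsearch -H ldaps://ldap.example.com -D "cn=rancher-bind,dc=acme,dc=com" -W \
  -b "ou=people,dc=acme,dc=com" "(uid=jsmith)" dn memberOf
# 2. Bind again as the user's own DN with the password they entered, to verify it.
ldapwhoami -H ldaps://ldap.example.com -D "uid=jsmith,ou=people,dc=acme,dc=com" -W
```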
# OpenLDAP Server Configuration
You will need to enter the address, port, and protocol to connect to your OpenLDAP server. `389` is the standard port for insecure traffic, `636` for TLS traffic.
> **Using TLS?**
>
> If the certificate used by the OpenLDAP server is self-signed or not from a recognized certificate authority, make sure you have the CA certificate (concatenated with any intermediate certificates) in PEM format at hand. You will have to paste in this certificate during the configuration so that Rancher is able to validate the certificate chain.
If you are in doubt about the correct values to enter in the user/group Search Base configuration fields, consult your LDAP administrator or refer to the section [Identify Search Base and Schema using ldapsearch]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/ad/#annex-identify-search-base-and-schema-using-ldapsearch) in the Active Directory authentication documentation.
<figcaption>OpenLDAP Server Parameters</figcaption>
| Parameter | Description |
|:--|:--|
| Hostname | Specify the hostname or IP address of the OpenLDAP server |
| Port | Specify the port at which the OpenLDAP server is listening for connections. Unencrypted LDAP normally uses the standard port of 389, while LDAPS uses port 636.|
| TLS | Check this box to enable LDAP over SSL/TLS (commonly known as LDAPS). You will also need to paste in the CA certificate if the server uses a self-signed/enterprise-signed certificate. |
| Server Connection Timeout | The duration in number of seconds that Rancher waits before considering the server unreachable. |
| Service Account Distinguished Name | Enter the Distinguished Name (DN) of the user that should be used to bind, search and retrieve LDAP entries. (see [Prerequisites](#prerequisites)). |
| Service Account Password | The password for the service account. |
| User Search Base | Enter the Distinguished Name of the node in your directory tree from which to start searching for user objects. All users must be descendants of this base DN. For example: "ou=people,dc=acme,dc=com".|
| Group Search Base | If your groups live under a different node than the one configured under `User Search Base` you will need to provide the Distinguished Name here. Otherwise leave this field empty. For example: "ou=groups,dc=acme,dc=com".|
# User/Group Schema Configuration
If your OpenLDAP directory deviates from the standard OpenLDAP schema, you must complete the **Customize Schema** section to match it.
Note that the attribute mappings configured in this section are used by Rancher to construct search filters and resolve group membership. It is therefore always recommended to verify that the configuration here matches the schema used in your OpenLDAP.
If you are unfamiliar with the user/group schema used in the OpenLDAP server, consult your LDAP administrator or refer to the section [Identify Search Base and Schema using ldapsearch]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/ad/#annex-identify-search-base-and-schema-using-ldapsearch) in the Active Directory authentication documentation.
### User Schema Configuration
The table below details the parameters for the user schema configuration.
<figcaption>User Schema Configuration Parameters</figcaption>
| Parameter | Description |
|:--|:--|
| Object Class | The name of the object class used for user objects in your domain. If defined, only specify the name of the object class - *don't* include it in an LDAP wrapper such as &(objectClass=xxxx) |
| Username Attribute | The user attribute whose value is suitable as a display name. |
| Login Attribute | The attribute whose value matches the username part of credentials entered by your users when logging in to Rancher. This is typically `uid`. |
| User Member Attribute | The user attribute containing the Distinguished Name of groups a user is member of. Usually this is one of `memberOf` or `isMemberOf`. |
| Search Attribute | When a user enters text to add users or groups in the UI, Rancher queries the LDAP server and attempts to match users by the attributes provided in this setting. Multiple attributes can be specified by separating them with the pipe ("\|") symbol. |
| User Enabled Attribute | If the schema of your OpenLDAP server supports a user attribute whose value can be evaluated to determine if the account is disabled or locked, enter the name of that attribute. The default OpenLDAP schema does not support this and the field should usually be left empty. |
| Disabled Status Bitmask | This is the value for a disabled/locked user account. The parameter is ignored if `User Enabled Attribute` is empty. |
### Group Schema Configuration
The table below details the parameters for the group schema configuration.
<figcaption>Group Schema Configuration Parameters</figcaption>
| Parameter | Description |
|:--|:--|
| Object Class | The name of the object class used for group entries in your domain. If defined, only specify the name of the object class - *don't* include it in an LDAP wrapper such as &(objectClass=xxxx) |
| Name Attribute | The group attribute whose value is suitable for a display name. |
| Group Member User Attribute | The name of the **user attribute** whose format matches the group members in the `Group Member Mapping Attribute`. |
| Group Member Mapping Attribute | The name of the group attribute containing the members of a group. |
| Search Attribute | Attribute used to construct search filters when adding groups to clusters or projects in the UI. See description of user schema `Search Attribute`. |
| Group DN Attribute | The name of the group attribute whose format matches the values in the user's group membership attribute. See `User Member Attribute`. |
| Nested Group Membership | This setting defines whether Rancher should resolve nested group memberships. Use it only if your organization makes use of nested memberships (i.e., you have groups that contain other groups as members). This option is disabled if you are using Shibboleth. |
@@ -1,51 +0,0 @@
---
title: Configuring PingIdentity (SAML)
weight: 8
---
If your organization uses Ping Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials.
>**Prerequisites:**
>
>- You must have a [Ping IdP Server](https://www.pingidentity.com/) configured.
>- Following are the Rancher Service Provider URLs needed for configuration:
Metadata URL: `https://<rancher-server>/v1-saml/ping/saml/metadata`
Assertion Consumer Service (ACS) URL: `https://<rancher-server>/v1-saml/ping/saml/acs`
Note that these URLs will not return valid data until the authentication configuration is saved in Rancher.
>- Export a `metadata.xml` file from your IdP Server. For more information, see the [PingIdentity documentation](https://documentation.pingidentity.com/pingfederate/pf83/index.shtml#concept_exportingMetadata.html).
1. From the **Global** view, select **Security > Authentication** from the main menu.
1. Select **PingIdentity**.
1. Complete the **Configure Ping Account** form. Ping IdP lets you specify what data store you want to use. You can either add a database or use an existing LDAP server. For example, if you select your Active Directory (AD) server, the examples below describe how you can map AD attributes to fields within Rancher.
1. **Display Name Field**: Enter the AD attribute that contains the display name of users (example: `displayName`).
1. **User Name Field**: Enter the AD attribute that contains the user name/given name (example: `givenName`).
1. **UID Field**: Enter an AD attribute that is unique to every user (example: `sAMAccountName`, `distinguishedName`).
1. **Groups Field**: Make entries for managing group memberships (example: `memberOf`).
1. **Rancher API Host**: Enter the URL for your Rancher Server.
1. **Private Key** and **Certificate**: This is a key-certificate pair to create a secure shell between Rancher and your IdP.
You can generate one using an openssl command. For example:
```
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com"
```
1. **IDP-metadata**: The `metadata.xml` file that you [exported from your IdP server](https://documentation.pingidentity.com/pingfederate/pf83/index.shtml#concept_exportingMetadata.html).
1. After you complete the **Configure Ping Account** form, click **Authenticate with Ping**, which is at the bottom of the page.
Rancher redirects you to the IdP login page. Enter credentials that authenticate with Ping IdP to validate your Rancher PingIdentity configuration.
>**Note:** You may have to disable your popup blocker to see the IdP login page.
**Result:** Rancher is configured to work with PingIdentity. Your users can now sign into Rancher using their PingIdentity logins.
{{< saml_caveats >}}
@@ -1,107 +0,0 @@
---
title: Configuring Shibboleth (SAML)
weight: 11
---
If your organization uses Shibboleth Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in to Rancher using their Shibboleth credentials.
In this configuration, when Rancher users log in, they will be redirected to the Shibboleth IdP to enter their credentials. After authentication, they will be redirected back to the Rancher UI.
If you also configure OpenLDAP as the back end to Shibboleth, it will return a SAML assertion to Rancher with user attributes that include groups. Then the authenticated user will be able to access resources in Rancher that their groups have permissions for.
> The instructions in this section assume that you understand how Rancher, Shibboleth, and OpenLDAP work together. For a more detailed explanation of how it works, refer to [this page.](./about)
This section covers the following topics:
- [Setting up Shibboleth in Rancher](#setting-up-shibboleth-in-rancher)
- [Shibboleth Prerequisites](#shibboleth-prerequisites)
- [Configure Shibboleth in Rancher](#configure-shibboleth-in-rancher)
- [SAML Provider Caveats](#saml-provider-caveats)
- [Setting up OpenLDAP in Rancher](#setting-up-openldap-in-rancher)
- [OpenLDAP Prerequisites](#openldap-prerequisites)
- [Configure OpenLDAP in Rancher](#configure-openldap-in-rancher)
- [Troubleshooting](#troubleshooting)
# Setting up Shibboleth in Rancher
### Shibboleth Prerequisites
>
>- You must have a Shibboleth IdP Server configured.
>- Following are the Rancher Service Provider URLs needed for configuration:
Metadata URL: `https://<rancher-server>/v1-saml/shibboleth/saml/metadata`
Assertion Consumer Service (ACS) URL: `https://<rancher-server>/v1-saml/shibboleth/saml/acs`
>- Export a `metadata.xml` file from your IdP Server. For more information, see the [Shibboleth documentation.](https://wiki.shibboleth.net/confluence/display/SP3/Home)
### Configure Shibboleth in Rancher
If your organization uses Shibboleth for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials.
1. From the **Global** view, select **Security > Authentication** from the main menu.
1. Select **Shibboleth**.
1. Complete the **Configure Shibboleth Account** form. Shibboleth IdP lets you specify what data store you want to use. You can either add a database or use an existing LDAP server. For example, if you select your Active Directory (AD) server, the examples below describe how you can map AD attributes to fields within Rancher.
1. **Display Name Field**: Enter the AD attribute that contains the display name of users (example: `displayName`).
1. **User Name Field**: Enter the AD attribute that contains the user name/given name (example: `givenName`).
1. **UID Field**: Enter an AD attribute that is unique to every user (example: `sAMAccountName`, `distinguishedName`).
1. **Groups Field**: Make entries for managing group memberships (example: `memberOf`).
1. **Rancher API Host**: Enter the URL for your Rancher Server.
1. **Private Key** and **Certificate**: This is a key-certificate pair to create a secure shell between Rancher and your IdP.
You can generate one using an openssl command. For example:
```
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com"
```
1. **IDP-metadata**: The `metadata.xml` file that you exported from your IdP server.
1. After you complete the **Configure Shibboleth Account** form, click **Authenticate with Shibboleth**, which is at the bottom of the page.
Rancher redirects you to the IdP login page. Enter credentials that authenticate with Shibboleth IdP to validate your Rancher Shibboleth configuration.
>**Note:** You may have to disable your popup blocker to see the IdP login page.
**Result:** Rancher is configured to work with Shibboleth. Your users can now sign into Rancher using their Shibboleth logins.
### SAML Provider Caveats
If you configure Shibboleth without OpenLDAP, the following caveats apply due to the fact that SAML Protocol does not support search or lookup for users or groups.
- There is no validation on users or groups when assigning permissions to them in Rancher.
- When adding users, the exact user IDs (i.e. UID Field) must be entered correctly. As you type the user ID, there will be no search for other user IDs that may match.
- When adding groups, you must select the group from the drop-down that is next to the text box. Rancher assumes that any input from the text box is a user.
- The group drop-down shows only the groups that you are a member of. You will not be able to add groups that you are not a member of.
To enable searching for groups when assigning permissions in Rancher, you will need to configure a back end for the SAML provider that supports groups, such as OpenLDAP.
# Setting up OpenLDAP in Rancher
If you also configure OpenLDAP as the back end to Shibboleth, it will return a SAML assertion to Rancher with user attributes that include groups. Then authenticated users will be able to access resources in Rancher that their groups have permissions for.
### OpenLDAP Prerequisites
Rancher must be configured with a LDAP bind account (aka service account) to search and retrieve LDAP entries pertaining to users and groups that should have access. It is recommended to not use an administrator account or personal account for this purpose and instead create a dedicated account in OpenLDAP with read-only access to users and groups under the configured search base (see below).
> **Using TLS?**
>
> If the certificate used by the OpenLDAP server is self-signed or not from a recognized certificate authority, make sure you have the CA certificate (concatenated with any intermediate certificates) in PEM format at hand. You will have to paste in this certificate during the configuration so that Rancher is able to validate the certificate chain.
### Configure OpenLDAP in Rancher
Configure the settings for the OpenLDAP server, groups and users. For help filling out each field, refer to the [configuration reference.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/openldap/openldap-config) Note that nested group membership is not available for Shibboleth.
> Before you proceed with the configuration, please familiarise yourself with the concepts of [External Authentication Configuration and Principal Users]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users).
1. Log into the Rancher UI using the initial local `admin` account.
2. From the **Global** view, navigate to **Security** > **Authentication**
3. Select **OpenLDAP**. The **Configure an OpenLDAP server** form will be displayed.
# Troubleshooting
If you are experiencing issues while testing the connection to the OpenLDAP server, first double-check the credentials entered for the service account as well as the search base configuration. You may also inspect the Rancher logs to help pinpoint the cause of the problem. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{<baseurl>}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation.
@@ -1,32 +0,0 @@
---
title: Group Permissions with Shibboleth and OpenLDAP
weight: 1
---
This page provides background information and context for Rancher users who intend to set up the Shibboleth authentication provider in Rancher.
Because Shibboleth is a SAML provider, it does not support searching for groups. While a Shibboleth integration can validate user credentials, it can't be used to assign permissions to groups in Rancher without additional configuration.
One solution to this problem is to configure an OpenLDAP identity provider. With an OpenLDAP back end for Shibboleth, you will be able to search for groups in Rancher and assign them to resources such as clusters, projects, or namespaces from the Rancher UI.
### Terminology
- **Shibboleth** is a single sign-on log-in system for computer networks and the Internet. It allows people to sign in using just one identity to various systems. It validates user credentials, but does not, on its own, handle group memberships.
- **SAML:** Security Assertion Markup Language, an open standard for exchanging authentication and authorization data between an identity provider and a service provider.
- **OpenLDAP:** a free, open-source implementation of the Lightweight Directory Access Protocol (LDAP). It is used to manage an organization's computers and users. OpenLDAP is useful for Rancher users because it supports groups. In Rancher, it is possible to assign permissions to groups so that they can access resources such as clusters, projects, or namespaces, as long as the groups already exist in the identity provider.
- **IdP or IDP:** An identity provider. OpenLDAP is an example of an identity provider.
### Adding OpenLDAP Group Permissions to Rancher Resources
The diagram below illustrates how members of an OpenLDAP group can access resources in Rancher that the group has permissions for.
For example, a cluster owner could add an OpenLDAP group to a cluster so that its members have permission to view most cluster-level resources and create new projects. The OpenLDAP group members will then have access to the cluster as soon as they log in to Rancher.
In this scenario, OpenLDAP allows the cluster owner to search for groups when assigning permissions. Without OpenLDAP, the functionality to search for groups would not be supported.
When a member of the OpenLDAP group logs in to Rancher, she is redirected to Shibboleth and enters her username and password.
Shibboleth validates her credentials, and retrieves user attributes from OpenLDAP, including groups. Then Shibboleth sends a SAML assertion to Rancher including the user attributes. Rancher uses the group data so that she can access all of the resources and permissions that her groups have permissions for.
![Adding OpenLDAP Group Permissions to Rancher Resources]({{<baseurl>}}/img/rancher/shibboleth-with-openldap-groups.svg)
@@ -1,60 +0,0 @@
---
title: Users and Groups
weight: 1
---
Rancher relies on users and groups to determine who is allowed to log in to Rancher and which resources they can access. When you configure an external authentication provider, users from that provider will be able to log in to your Rancher server. When a user logs in, the authentication provider will supply your Rancher server with a list of groups to which the user belongs.
Access to clusters, projects, multi-cluster apps, and global DNS providers and entries can be controlled by adding either individual users or groups to these resources. When you add a group to a resource, all users who are members of that group in the authentication provider will be able to access the resource with the permissions that you've specified for the group. For more information on roles and permissions, see [Role Based Access Control]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/).
## Managing Members
When adding a user or group to a resource, you can search for users or groups by beginning to type their name. The Rancher server will query the authentication provider to find users and groups that match what you've entered. Searching is limited to the authentication provider that you are currently logged in with. For example, if you've enabled GitHub authentication but are logged in using a [local]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/local/) user account, you will not be able to search for GitHub users or groups.
All users, whether they are local users or from an authentication provider, can be viewed and managed. From the **Global** view, click on **Users**.
{{< saml_caveats >}}
## User Information
Rancher maintains information about each user that logs in through an authentication provider. This information includes whether the user is allowed to access your Rancher server and the list of groups that the user belongs to. Rancher keeps this user information so that the CLI, API, and kubectl can accurately reflect the access that the user has based on their group membership in the authentication provider.
Whenever a user logs in to the UI using an authentication provider, Rancher automatically updates this user information.
### Automatically Refreshing User Information
Rancher will periodically refresh the user information even before a user logs in through the UI. You can control how often Rancher performs this refresh. From the **Global** view, click on **Settings**. Two settings control this behavior:
- **`auth-user-info-max-age-seconds`**
This setting controls how old a user's information can be before Rancher refreshes it. If a user makes an API call (either directly or by using the Rancher CLI or kubectl) and the time since the user's last refresh is greater than this setting, then Rancher will trigger a refresh. This setting defaults to `3600` seconds, i.e. 1 hour.
- **`auth-user-info-resync-cron`**
This setting controls a recurring schedule for resyncing authentication provider information for all users. Regardless of whether a user has logged in or used the API recently, this will cause the user to be refreshed at the specified interval. This setting defaults to `0 0 * * *`, i.e. once a day at midnight. See the [Cron documentation](https://en.wikipedia.org/wiki/Cron) for more information on valid values for this setting.
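For reference, a few standard cron expressions that could be used for this setting; the non-default values are examples only.
```
0 0 * * *      # once a day at midnight (the default)
0 */6 * * *    # every six hours
*/30 * * * *   # every 30 minutes
```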
> **Note:** Since SAML does not support user lookup, SAML-based authentication providers do not support periodically refreshing user information. User information will only be refreshed when the user logs into the Rancher UI.
### Manually Refreshing User Information
If you are not sure the last time Rancher performed an automatic refresh of user information, you can perform a manual refresh of all users.
1. From the **Global** view, click on **Users** in the navigation bar.
1. Click on **Refresh Group Memberships**.
**Results:** Rancher refreshes the user information for all users. Requesting this refresh will update which users can access Rancher as well as all the groups that each user belongs to.
>**Note:** Since SAML does not support user lookup, SAML-based authentication providers do not support the ability to manually refresh user information. User information will only be refreshed when the user logs into the Rancher UI.
## Session Length
The length (TTL) of each user session is adjustable. The default session length is 16 hours.
1. From the **Global** view, click on **Settings**.
1. In the **Settings** page, find **`auth-user-session-ttl-minutes`** and click **Edit.**
1. Enter the amount of time in minutes a session length should last and click **Save.**
**Result:** Users are automatically logged out of Rancher after the set number of minutes.
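As a quick reference, the default 16-hour session corresponds to `960` minutes (16 × 60); entering `480`, for example, would log users out after 8 hours.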
@@ -1,28 +0,0 @@
---
title: Role-Based Access Control (RBAC)
weight: 2
---
> This section is under construction.
Within Rancher, each person authenticates as a _user_, which is a login that grants you access to Rancher. As mentioned in [Authentication]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/), users can either be local or external.
After you configure external authentication, the users that display on the **Users** page change.
- If you are logged in as a local user, only local users display.
- If you are logged in as an external user, both external and local users display.
## Users and Roles
Once the user logs in to Rancher, their _authorization_, or their access rights within the system, is determined by _global permissions_, and _cluster and project roles_.
- [Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/):
Define user authorization outside the scope of any particular cluster.
- [Cluster and Project Roles]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/):
Define user authorization inside the specific cluster or project where they are assigned the role.
Both global permissions and cluster and project roles are implemented on top of [Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/). Therefore, enforcement of permissions and roles is performed by Kubernetes.
@@ -1,55 +0,0 @@
---
title: Adding Users to a Cluster
weight: 1
---
> This section is under construction.
If you want to provide a user with access and permissions to _all_ projects, nodes, and resources within a cluster, assign the user a cluster membership.
>**Tip:** Want to provide a user with access to a _specific_ project within a cluster? See [Adding Project Members]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/project-members/) instead.
There are two contexts where you can add cluster members:
- Adding Members to a New Cluster
You can add members to a cluster as you create it (recommended if possible).
- [Adding Members to an Existing Cluster](#editing-cluster-membership)
You can always add members to a cluster after a cluster is provisioned.
## Editing Cluster Membership
Cluster administrators can edit the membership for a cluster, controlling which Rancher users can access the cluster and what features they can use.
1. From the **Global** view, open the cluster that you want to add members to.
2. From the main menu, select **Members**. Then click **Add Member**.
3. Search for the user or group that you want to add to the cluster.
If external authentication is configured:
- Rancher returns users from your [external authentication]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/) source as you type.
>**Using AD but can't find your users?**
>There may be an issue with your search attribute configuration. See [Configuring Active Directory Authentication: Step 5]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/ad/).
- A drop-down allows you to add groups instead of individual users. The drop-down only lists groups that you, the logged in user, are part of.
>**Note:** If you are logged in as a local user, external users do not display in your search results. For more information, see [External Authentication Configuration and Principal Users]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users).
4. Assign the user or group **Cluster** roles.
[What are Cluster Roles?]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/)
>**Tip:** For Custom Roles, you can modify the list of individual roles available for assignment.
>
> - To add roles to the list, [Add a Custom Role]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/).
> - To remove roles from the list, [Lock/Unlock Roles]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/locked-roles).
**Result:** The chosen users are added to the cluster.
- To revoke cluster membership, select the user and click **Delete**. This action deletes membership, not the user.
- To modify a user's roles in the cluster, delete them from the cluster, and then re-add them with modified roles.
@@ -1,158 +0,0 @@
---
title: Custom Roles
weight: 2
---
Within Rancher, _roles_ determine what actions a user can make within a cluster or project.
Note that _roles_ are different from _permissions_, which determine what clusters and projects you can access.
This section covers the following topics:
- [Prerequisites](#prerequisites)
- [Creating a custom role for a cluster or project](#creating-a-custom-role-for-a-cluster-or-project)
- [Creating a custom global role](#creating-a-custom-global-role)
- [Deleting a custom global role](#deleting-a-custom-global-role)
- [Assigning a custom global role to a group](#assigning-a-custom-global-role-to-a-group)
## Prerequisites
To complete the tasks on this page, one of the following permissions is required:
- [Administrator Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/).
- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Roles]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned.
## Creating A Custom Role for a Cluster or Project
While Rancher comes out-of-the-box with a set of default user roles, you can also create default custom roles to provide users with very specific permissions within Rancher.
The steps to add custom roles differ depending on the version of Rancher.
{{% tabs %}}
{{% tab "Rancher v2.0.7+" %}}
1. From the **Global** view, select **Security > Roles** from the main menu.
1. Select a tab to determine the scope of the roles you're adding. The tabs are:
- **Cluster:** The role is valid for assignment when adding/managing members to _only_ clusters.
- **Project:** The role is valid for assignment when adding/managing members to _only_ projects.
1. Click **Add Cluster/Project Role.**
1. **Name** the role.
1. Optional: Choose the **Cluster/Project Creator Default** option to assign this role to a user when they create a new cluster or project. Using this feature, you can expand or restrict the default roles for cluster/project creators.
> Out of the box, the Cluster Creator Default and the Project Creator Default roles are `Cluster Owner` and `Project Owner` respectively.
1. Use the **Grant Resources** options to assign individual [Kubernetes API endpoints](https://kubernetes.io/docs/reference/) to the role.
> When viewing the resources associated with default roles created by Rancher, if there are multiple Kubernetes API resources on one line item, the resource will have `(Custom)` appended to it. These are not custom resources but just an indication that there are multiple Kubernetes API resources as one resource.
You can also choose the individual cURL methods (`Create`, `Delete`, `Get`, etc.) available for use with each endpoint you assign.
1. Use the **Inherit from a Role** options to assign individual Rancher roles to your custom roles. Note: When a custom role inherits from a parent role, the parent role cannot be deleted until the child role is deleted.
1. Click **Create**.
{{% /tab %}}
{{% tab "Rancher prior to v2.0.7" %}}
1. From the **Global** view, select **Security > Roles** from the main menu.
1. Click **Add Role**.
1. **Name** the role.
1. Choose whether to set the role to a status of [locked]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/locked-roles/).
> **Note:** Locked roles cannot be assigned to users.
1. In the **Context** dropdown menu, choose the scope of the role assigned to the user. The contexts are:
- **All:** The user can use their assigned role regardless of context. This role is valid for assignment when adding/managing members to clusters or projects.
- **Cluster:** This role is valid for assignment when adding/managing members to _only_ clusters.
- **Project:** This role is valid for assignment when adding/managing members to _only_ projects.
1. Use the **Grant Resources** options to assign individual [Kubernetes API endpoints](https://kubernetes.io/docs/reference/) to the role.
> When viewing the resources associated with default roles created by Rancher, if there are multiple Kubernetes API resources on one line item, the resource will have `(Custom)` appended to it. These are not custom resources but just an indication that there are multiple Kubernetes API resources as one resource.
You can also choose the individual cURL methods (`Create`, `Delete`, `Get`, etc.) available for use with each endpoint you assign.
1. Use the **Inherit from a Role** options to assign individual Rancher roles to your custom roles. Note: When a custom role inherits from a parent role, the parent role cannot be deleted until the child role is deleted.
1. Click **Create**.
{{% /tab %}}
{{% /tabs %}}
## Creating a Custom Global Role
### Creating a Custom Global Role that Copies Rules from an Existing Role
If you have a group of individuals that need the same level of access in Rancher, it can save time to create a custom global role that copies all of the rules from an existing role, such as the administrator role. This allows you to configure only the differences between the existing role and the new role.
The custom global role can then be assigned to a user or group so that the custom global role takes effect the first time the user or users sign into Rancher.
To create a custom global role based on an existing role,
1. Go to the **Global** view and click **Security > Roles.**
1. On the **Global** tab, go to the role that the custom global role will be based on. Click **&#8942; (…) > Clone.**
1. Enter a name for the role.
1. Optional: To assign the custom role as the default for new users, go to the **New User Default** section and click **Yes: Default role for new users.**
1. In the **Grant Resources** section, select the Kubernetes resource operations that will be enabled for users with the custom role.
1. Click **Save.**
### Creating a Custom Global Role that Does Not Copy Rules from Another Role
Custom global roles don't have to be based on existing roles. To create a custom global role by choosing the specific Kubernetes resource operations that should be allowed for the role, follow these steps:
1. Go to the **Global** view and click **Security > Roles.**
1. On the **Global** tab, click **Add Global Role.**
1. Enter a name for the role.
1. Optional: To assign the custom role as the default for new users, go to the **New User Default** section and click **Yes: Default role for new users.**
1. In the **Grant Resources** section, select the Kubernetes resource operations that will be enabled for users with the custom role.
1. Click **Save.**
## Deleting a Custom Global Role
When deleting a custom global role, all global role bindings with this custom role are deleted.
If a user is only assigned one custom global role, and the role is deleted, the user would lose access to Rancher. For the user to regain access, an administrator would need to edit the user and apply new global permissions.
Custom global roles can be deleted, but built-in roles cannot be deleted.
To delete a custom global role,
1. Go to the **Global** view and click **Security > Roles.**
2. On the **Global** tab, go to the custom global role that should be deleted and click **&#8942; (…) > Delete.**
3. Click **Delete.**
## Assigning a Custom Global Role to a Group
If you have a group of individuals that need the same level of access in Rancher, it can save time to create a custom global role. When the role is assigned to a group, the users in the group have the appropriate level of access the first time they sign into Rancher.
When a user in the group logs in, they get the built-in Standard User global role by default. They will also get the permissions assigned to their groups.
If a user is removed from the external authentication provider group, they would lose their permissions from the custom global role that was assigned to the group. They would continue to have their individual Standard User role.
> **Prerequisites:** You can only assign a global role to a group if:
>
> * You have set up an [external authentication provider]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/#external-vs-local-authentication)
> * The external authentication provider supports [user groups]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/user-groups/)
> * You have already set up at least one user group with the authentication provider
To assign a custom global role to a group, follow these steps:
1. From the **Global** view, go to **Security > Groups.**
1. Click **Assign Global Role.**
1. In the **Select Group To Add** field, choose the existing group that will be assigned the custom global role.
1. In the **Custom** section, choose any custom global role that will be assigned to the group.
1. Optional: In the **Global Permissions** or **Built-in** sections, select any additional permissions that the group should have.
1. Click **Create.**
**Result:** The custom global role will take effect when the users in the group log into Rancher.
@@ -1,37 +0,0 @@
---
title: Locked Roles
weight: 3
---
You can set roles to a status of `locked`. Locking a role prevents it from being assigned to users in the future.
Locked roles:
- Cannot be assigned to users that don't already have it assigned.
- Are not listed in the **Member Roles** drop-down when you are adding a user to a cluster or project.
- Do not affect users assigned the role before you lock the role. These users retain access that the role provides.
**Example:** Let's say your organization creates an internal policy that prohibits users assigned to a cluster from creating new projects. It's your job to enforce this policy.
To enforce it, before you add new users to the cluster, you should lock the following roles: `Cluster Owner`, `Cluster Member`, and `Create Projects`. You could then create a new custom role that includes the same permissions as a __Cluster Member__, except the ability to create projects, and use this new custom role when adding users to a cluster.
Roles can be locked by the following users:
- Any user assigned the `Administrator` global permission.
- Any user assigned the `Custom Users` permission, along with the `Manage Roles` role.
## Locking/Unlocking Roles
If you want to prevent a role from being assigned to users, you can set it to a status of `locked`.
You can lock roles in two contexts:
- When you're [adding a custom role]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/).
- When you're editing an existing role (see below).
1. From the **Global** view, select **Security** > **Roles**.
2. From the role that you want to lock (or unlock), select **&#8942;** > **Edit**.
3. From the **Locked** option, choose the **Yes** or **No** radio button. Then click **Save**.
@@ -1,7 +0,0 @@
---
title: Access Control for the Enterprise Cluster Manager and Projects
shortTitle: Enterprise Cluster Manager
weight: 4
---
> This section is under construction.
@@ -1,43 +0,0 @@
---
title: How the Authorized Cluster Endpoint Works
weight: 7
---
This section describes how the kubectl CLI, the kubeconfig file, and the authorized cluster endpoint work together to allow you to access a downstream Kubernetes cluster directly, without authenticating through the Rancher server. It is intended to provide background information and context to the instructions for [how to set up kubectl to directly access a cluster.](../kubectl/#authenticating-directly-with-a-downstream-cluster)
### About the kubeconfig File
The _kubeconfig file_ is used to configure access to Kubernetes in conjunction with the kubectl command-line tool (or other clients).
This kubeconfig file and its contents are specific to the cluster you are viewing. It can be downloaded from the cluster view in Rancher. You will need a separate kubeconfig file for each cluster that you have access to in Rancher.
After you download the kubeconfig file, you will be able to use the kubeconfig file and its Kubernetes [contexts](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration) to access your downstream cluster.
### Two Authentication Methods for RKE Clusters
If the cluster is not an [RKE cluster,]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) the kubeconfig file allows you to access the cluster in only one way: it lets you be authenticated with the Rancher server, then Rancher allows you to run kubectl commands on the cluster.
For RKE clusters, the kubeconfig file allows you to be authenticated in two ways:
- **Through the Rancher server authentication proxy:** Rancher's authentication proxy validates your identity, then connects you to the downstream cluster that you want to access.
- **Directly with the downstream cluster's API server:** RKE clusters have an authorized cluster endpoint enabled by default. This endpoint allows you to access your downstream Kubernetes cluster with the kubectl CLI and a kubeconfig file. In this scenario, the downstream cluster's Kubernetes API server authenticates you by calling a webhook (the `kube-api-auth` microservice) that Rancher set up.
This second method, the capability to connect directly to the cluster's Kubernetes API server, is important because it lets you access your downstream cluster if you can't connect to Rancher.
To use the authorized cluster endpoint, you will need to configure kubectl to use the extra kubectl context in the kubeconfig file that Rancher generates for you when the RKE cluster is created. This file can be downloaded from the cluster view in the Rancher UI, and the instructions for configuring kubectl are on [this page.](../kubectl/#authenticating-directly-with-a-downstream-cluster)
These methods of communicating with downstream Kubernetes clusters are also explained in the [architecture page]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/#communicating-with-downstream-user-clusters) in the larger context of explaining how Rancher works and how Rancher communicates with downstream clusters.
### About the kube-api-auth Authentication Webhook
The `kube-api-auth` microservice is deployed to provide the user authentication functionality for the [authorized cluster endpoint,]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/#4-authorized-cluster-endpoint) which is only available for [RKE clusters.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) When you access the user cluster using `kubectl`, the cluster's Kubernetes API server authenticates you by using the `kube-api-auth` service as a webhook.
During cluster provisioning, the file `/etc/kubernetes/kube-api-authn-webhook.yaml` is deployed and `kube-apiserver` is configured with `--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml`. This configures the `kube-apiserver` to query `http://127.0.0.1:6440/v1/authenticate` to determine authentication for bearer tokens.
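For context, Kubernetes authentication webhook config files use the kubeconfig format. The snippet below is only an illustrative sketch of that general shape, pointing at the `kube-api-auth` endpoint mentioned above; the file Rancher actually generates during provisioning may use different names and contain additional fields.
```
apiVersion: v1
kind: Config
clusters:
- name: rancher-kube-api-auth
  cluster:
    server: http://127.0.0.1:6440/v1/authenticate
users:
- name: kube-apiserver
  user: {}
contexts:
- name: webhook
  context:
    cluster: rancher-kube-api-auth
    user: kube-apiserver
current-context: webhook
```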
The scheduling rules for `kube-api-auth` are listed below:
_Applies to v2.3.0 and higher_
| Component | nodeAffinity nodeSelectorTerms | nodeSelector | Tolerations |
| -------------------- | ------------------------------------------ | ------------ | ------------------------------------------------------------------------------ |
| kube-api-auth | `beta.kubernetes.io/os:NotIn:windows`<br/>`node-role.kubernetes.io/controlplane:In:"true"` | none | `operator:Exists` |
@@ -1,183 +0,0 @@
---
title: Cluster and Project Roles
weight: 5
---
Cluster and project roles define user authorization inside a cluster or project. You can manage these roles from the **Global > Security > Roles** page.
### Membership and Role Assignment
The projects and clusters accessible to non-administrative users are determined by _membership_. Membership is a list of users who have access to a specific cluster or project based on the roles they were assigned in that cluster or project. Each cluster and project includes a tab that a user with the appropriate permissions can use to manage membership.
When you create a cluster or project, Rancher automatically assigns you as the `Owner` for it. Users assigned the `Owner` role can assign other users roles in the cluster or project.
> **Note:** Non-administrative users cannot access any existing projects/clusters by default. A user with appropriate permissions (typically the owner) must explicitly assign the project and cluster membership.
### Cluster Roles
_Cluster roles_ are roles that you can assign to users, granting them access to a cluster. There are two primary cluster roles: `Owner` and `Member`.
- **Cluster Owner:**
These users have full control over the cluster and all resources in it.
- **Cluster Member:**
These users can view most cluster level resources and create new projects.
#### Custom Cluster Roles
Rancher lets you assign _custom cluster roles_ to a standard user instead of the typical `Owner` or `Member` roles. These roles can be either a built-in custom cluster role or one defined by a Rancher administrator. They are convenient for defining narrow or specialized access for a standard user within a cluster. See the table below for a list of built-in custom cluster roles.
#### Cluster Role Reference
The following table lists each built-in custom cluster role available and whether that level of access is included in the default cluster-level permissions, `Cluster Owner` and `Cluster Member`.
| Built-in Cluster Role | Owner | Member <a id="clus-roles"></a> |
| ---------------------------------- | ------------- | --------------------------------- |
| Create Projects | ✓ | ✓ |
| Manage Cluster Backups | ✓ | |
| Manage Cluster Catalogs | ✓ | |
| Manage Cluster Members | ✓ | |
| Manage Nodes | ✓ | |
| Manage Storage | ✓ | |
| View All Projects | ✓ | |
| View Cluster Catalogs | ✓ | ✓ |
| View Cluster Members | ✓ | ✓ |
| View Nodes | ✓ | ✓ |
For details on how each cluster role can access Kubernetes resources, you can go to the **Global** view in the Rancher UI. Then click **Security > Roles** and go to the **Clusters** tab. If you click an individual role, you can refer to the **Grant Resources** table to see all of the operations and resources that are permitted by the role.
> **Note:**
>When viewing the resources associated with default roles created by Rancher, if there are multiple Kubernetes API resources on one line item, the resource will have `(Custom)` appended to it. These are not custom resources but just an indication that there are multiple Kubernetes API resources as one resource.
### Giving a Custom Cluster Role to a Cluster Member
After an administrator [sets up a custom cluster role,]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/#adding-a-custom-role) cluster owners and admins can then assign those roles to cluster members.
To assign a custom role to a new cluster member, you can use the Rancher UI. To modify the permissions of an existing member, you will need to use the Rancher API view.
To assign the role to a new cluster member,
1. Go to the **Cluster** view, then go to the **Members** tab.
1. Click **Add Member.** Then in the **Cluster Permissions** section, choose the custom cluster role that should be assigned to the member.
1. Click **Create.**
**Result:** The member has the assigned role.
To assign any custom role to an existing cluster member,
1. Go to the member you want to give the role to. Click the **&#8942; > View in API.**
1. In the **roleTemplateId** field, go to the drop-down menu and choose the role you want to assign to the member. Click **Show Request** and **Send Request.**
**Result:** The member has the assigned role.
### Project Roles
_Project roles_ are roles that can be used to grant users access to a project. There are three primary project roles: `Owner`, `Member`, and `Read Only`.
- **Project Owner:**
These users have full control over the project and all resources in it.
- **Project Member:**
These users can manage project-scoped resources like namespaces and workloads, but cannot manage other project members.
- **Read Only:**
These users can view everything in the project but cannot create, update, or delete anything.
>**Caveat:**
>
>Users assigned the `Owner` or `Member` role for a project automatically inherit the `namespace creation` role. However, this role is a [Kubernetes ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole), meaning its scope extends to all projects in the cluster. Therefore, users explicitly assigned the `owner` or `member` role for a project can create namespaces in other projects they're assigned to, even with only the `Read Only` role assigned.
#### Custom Project Roles
Rancher lets you assign _custom project roles_ to a standard user instead of the typical `Owner`, `Member`, or `Read Only` roles. These roles can be either a built-in custom project role or one defined by a Rancher administrator. They are convenient for defining narrow or specialized access for a standard user within a project. See the table below for a list of built-in custom project roles.
#### Project Role Reference
The following table lists each built-in custom project role available in Rancher and whether it is also granted by the `Owner`, `Member`, or `Read Only` role.
| Built-in Project Role | Owner | Member<a id="proj-roles"></a> | Read Only |
| ---------------------------------- | ------------- | ----------------------------- | ------------- |
| Manage Project Members | ✓ | | |
| Create Namespaces | ✓ | ✓ | |
| Manage Config Maps | ✓ | ✓ | |
| Manage Ingress | ✓ | ✓ | |
| Manage Project Catalogs | ✓ | | |
| Manage Secrets | ✓ | ✓ | |
| Manage Service Accounts | ✓ | ✓ | |
| Manage Services | ✓ | ✓ | |
| Manage Volumes | ✓ | ✓ | |
| Manage Workloads | ✓ | ✓ | |
| View Config Maps | ✓ | ✓ | ✓ |
| View Ingress | ✓ | ✓ | ✓ |
| View Project Members | ✓ | ✓ | ✓ |
| View Project Catalogs | ✓ | ✓ | ✓ |
| View Secrets | ✓ | ✓ | ✓ |
| View Service Accounts | ✓ | ✓ | ✓ |
| View Services | ✓ | ✓ | ✓ |
| View Volumes | ✓ | ✓ | ✓ |
| View Workloads | ✓ | ✓ | ✓ |
> **Notes:**
>
>- Each project role listed above, including `Owner`, `Member`, and `Read Only`, consists of multiple rules granting access to various resources. You can view the roles and their rules on the Global > Security > Roles page.
>- When viewing the resources associated with default roles created by Rancher, if there are multiple Kubernetes API resources on one line item, the resource will have `(Custom)` appended to it. These are not custom resources but just an indication that there are multiple Kubernetes API resources as one resource.
>- The `Manage Project Members` role allows the project owner to manage any members of the project **and** grant them any project scoped role regardless of their access to the project resources. Be cautious when assigning this role out individually.
### Defining Custom Roles
As previously mentioned, custom roles can be defined for use at the cluster or project level. The context field defines whether the role will appear on the cluster member page, project member page, or both.
When defining a custom role, you can grant access to specific resources or specify roles from which the custom role should inherit. A custom role can be made up of a combination of specific grants and inherited roles. All grants are additive. This means that defining a narrower grant for a specific resource **will not** override a broader grant defined in a role that the custom role is inheriting from.
### Default Cluster and Project Roles
By default, when a standard user creates a new cluster or project, they are automatically assigned an ownership role: either [cluster owner](#cluster-roles) or [project owner](#project-roles). However, in some organizations, these roles may overextend administrative access. In this use case, you can change the default role to something more restrictive, such as a set of individual roles or a custom role.
There are two methods for changing default cluster/project roles:
- **Assign Custom Roles**: Create a [custom role]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles) for either your [cluster](#custom-cluster-roles) or [project](#custom-project-roles), and then set the custom role as default.
- **Assign Individual Roles**: Configure multiple [cluster](#cluster-role-reference)/[project](#project-role-reference) roles as default for assignment to the creating user.
For example, instead of assigning a role that inherits other roles (such as `cluster owner`), you can choose a mix of individual roles (such as `manage nodes` and `manage storage`).
>**Note:**
>
>- Although you can [lock]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/locked-roles/) a default role, the system still assigns the role to users who create a cluster/project.
>- Only users that create clusters/projects inherit their roles. Users added to the cluster/project membership afterward must be explicitly assigned their roles.
### Configuring Default Roles for Cluster and Project Creators
You can change the cluster or project role(s) that are automatically assigned to the creating user.
1. From the **Global** view, select **Security > Roles** from the main menu. Select either the **Cluster** or **Project** tab.
1. Find the custom or individual role that you want to use as default. Then edit the role by selecting **&#8942; > Edit**.
1. Enable the role as default.
{{% accordion id="cluster" label="For Clusters" %}}
1. From **Cluster Creator Default**, choose **Yes: Default role for new cluster creation**.
1. Click **Save**.
{{% /accordion %}}
{{% accordion id="project" label="For Projects" %}}
1. From **Project Creator Default**, choose **Yes: Default role for new project creation**.
1. Click **Save**.
{{% /accordion %}}
1. If you want to remove a default role, edit the permission and select **No** from the default roles option.
**Result:** The default roles are configured based on your changes. Roles assigned to cluster/project creators display a check in the **Cluster/Project Creator Default** column.
### Cluster Membership Revocation Behavior
When you revoke the cluster membership for a standard user that's explicitly assigned membership to both the cluster _and_ a project within the cluster, that standard user [loses their cluster roles](#clus-roles) but [retains their project roles](#proj-roles). In other words, although you have revoked the user's permissions to access the cluster and its nodes, the standard user can still:
- Access the projects they hold membership in.
- Exercise any [individual project roles](#project-role-reference) they are assigned.
If you want to completely revoke a user's access within a cluster, revoke both their cluster and project memberships.
@@ -1,6 +0,0 @@
---
title: Custom Global Roles
weight: 3
---
This page is under construction.
@@ -1,172 +0,0 @@
---
title: Global Permissions
weight: 4
---
_Permissions_ are individual access rights that you can assign when selecting a custom permission for a user.
Global Permissions define user authorization outside the scope of any particular cluster. Out of the box, there are three default global permissions: `Administrator`, `Standard User`, and `User-Base`.
- **Administrator:** These users have full control over the entire Rancher system and all clusters within it.
- <a id="user"></a>**Standard User:** These users can create new clusters and use them. Standard users can also assign other users permissions to their clusters.
- **User-Base:** User-Base users have login-access only.
You cannot update or delete the built-in Global Permissions.
This section covers the following topics:
- [Global permission assignment](#global-permission-assignment)
- [Global permissions for new local users](#global-permissions-for-new-local-users)
- [Global permissions for users with external authentication](#global-permissions-for-users-with-external-authentication)
- [Custom global permissions](#custom-global-permissions)
- [Custom global permissions reference](#custom-global-permissions-reference)
- [Configuring default global permissions for new users](#configuring-default-global-permissions)
- [Configuring global permissions for existing individual users](#configuring-global-permissions-for-existing-individual-users)
- [Configuring global permissions for groups](#configuring-global-permissions-for-groups)
- [Refreshing group memberships](#refreshing-group-memberships)
# Global Permission Assignment
Global permissions for local users are assigned differently than users who log in to Rancher using external authentication.
### Global Permissions for New Local Users
When you create a new local user, you assign them a global permission as you complete the **Add User** form.
To see the default permissions for new users, go to the **Global** view and click **Security > Roles.** On the **Global** tab, there is a column named **New User Default.** When adding a new local user, the user receives all default global permissions that are marked as checked in this column. You can [change the default global permissions to meet your needs.](#configuring-default-global-permissions)
### Global Permissions for Users with External Authentication
When a user logs into Rancher using an external authentication provider for the first time, they are automatically assigned the **New User Default** global permissions. By default, Rancher assigns the **Standard User** permission for new users.
To see the default permissions for new users, go to the **Global** view and click **Security > Roles.** On the **Global** tab, there is a column named **New User Default.** When adding a new local user, the user receives all default global permissions that are marked as checked in this column, and you can [change them to meet your needs.](#configuring-default-global-permissions)
Permissions can be assigned to an individual user with [these steps.](#configuring-global-permissions-for-existing-individual-users)
As of Rancher v2.4.0, you can [assign a role to everyone in the group at the same time](#configuring-global-permissions-for-groups) if the external authentication provider supports groups.
# Custom Global Permissions
Using custom permissions is convenient for providing users with narrow or specialized access to Rancher.
When a user from an [external authentication source]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/) signs into Rancher for the first time, they're automatically assigned a set of global permissions (hereafter, permissions). By default, after a user logs in for the first time, they are created as a user and assigned the default `user` permission. The standard `user` permission allows users to log in and create clusters.
However, in some organizations, these permissions may extend too much access. Rather than assigning users the default global permissions of `Administrator` or `Standard User`, you can assign them a more restrictive set of custom global permissions.
The default roles, Administrator and Standard User, each come with multiple global permissions built into them. The Administrator role includes all global permissions, while the default user role includes three global permissions: Create Clusters, Use Catalog Templates, and User Base, which is equivalent to the minimum permission to log in to Rancher. In other words, the custom global permissions are modularized so that if you want to change the default user role permissions, you can choose which subset of global permissions are included in the new default user role.
Administrators can enforce custom global permissions in multiple ways:
- [Changing the default permissions for new users](#configuring-default-global-permissions)
- [Editing the permissions of an existing user](#configuring-global-permissions-for-existing-individual-users)
- [Assigning a custom global permission to a group](#configuring-global-permissions-for-groups)
### Custom Global Permissions Reference
The following table lists each custom global permission available and whether it is included in the default global permissions, `Administrator`, `Standard User` and `User-Base`.
| Custom Global Permission | Administrator | Standard User | User-Base |
| ---------------------------------- | ------------- | ------------- |-----------|
| Create Clusters | ✓ | ✓ | |
| Create RKE Templates | ✓ | ✓ | |
| Manage Authentication | ✓ | | |
| Manage Catalogs | ✓ | | |
| Manage Cluster Drivers | ✓ | | |
| Manage Node Drivers | ✓ | | |
| Manage PodSecurityPolicy Templates | ✓ | | |
| Manage Roles | ✓ | | |
| Manage Settings | ✓ | | |
| Manage Users | ✓ | | |
| Use Catalog Templates | ✓ | ✓ | |
| User Base\* (Basic log-in access) | ✓ | ✓ | ✓ |
> \*This role has two names:
>
> - When you go to the <b>Users</b> tab and edit a user's global role, this role is called <b>Login Access</b> in the custom global permissions list.
> - When you go to the <b>Security</b> tab and edit the roles from the roles page, this role is called <b>User Base.</b>
For details on which Kubernetes resources correspond to each global permission, you can go to the **Global** view in the Rancher UI. Then click **Security > Roles** and go to the **Global** tab. If you click an individual role, you can refer to the **Grant Resources** table to see all of the operations and resources that are permitted by the role.
> **Notes:**
>
> - Each permission listed above consists of multiple individual permissions not listed in the Rancher UI. For a full list of these permissions and the rules they consist of, access them through the API at `/v3/globalRoles` (see the example below this note).
> - When viewing the resources associated with default roles created by Rancher, if there are multiple Kubernetes API resources on one line item, the resource will have `(Custom)` appended to it. These are not custom resources but just an indication that there are multiple Kubernetes API resources as one resource.
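As a rough illustration of the API lookup mentioned in the note above, the `/v3/globalRoles` collection can be listed with an HTTP client and a Rancher API key used as basic-auth credentials. The server URL and key below are hypothetical placeholders.
```
curl -s -u "token-abc12:<secret>" \
  "https://rancher.example.com/v3/globalRoles"
```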
### Configuring Default Global Permissions
If you want to restrict the default permissions for new users, you can remove the `user` permission as the default role and then assign multiple individual permissions as defaults instead. Conversely, you can also add administrative permissions on top of a set of other standard permissions.
> **Note:** Default roles are only assigned to users added from an external authentication provider. For local users, you must explicitly assign global permissions when adding a user to Rancher. You can customize these global permissions when adding the user.
To change the default global permissions that are assigned to external users upon their first log in, follow these steps:
1. From the **Global** view, select **Security > Roles** from the main menu. Make sure the **Global** tab is selected.
1. Find the permissions set that you want to add or remove as a default. Then edit the permission by selecting **&#8942; > Edit**.
1. If you want to add the permission as a default, select **Yes: Default role for new users** and then click **Save**.
1. If you want to remove a default permission, edit the permission and select **No** from **New User Default**.
**Result:** The default global permissions are configured based on your changes. Permissions assigned to new users display a check in the **New User Default** column.
### Configuring Global Permissions for Existing Individual Users
To configure permission for a user,
1. Go to the **Users** tab.
1. On this page, go to the user whose access level you want to change and click **&#8942; > Edit.**
1. In the **Global Permissions** section, click **Custom.**
1. Check the boxes for each subset of permissions you want the user to have access to.
1. Click **Save.**
> **Result:** The user's global permissions have been updated.
### Configuring Global Permissions for Groups
If you have a group of individuals that need the same level of access in Rancher, it can save time to assign permissions to the entire group at once, so that the users in the group have the appropriate level of access the first time they sign into Rancher.
After you assign a custom global role to a group, the custom global role will be assigned to a user in the group when they log in to Rancher.
For existing users, the new permissions will take effect when the users log out of Rancher and back in again, or when an administrator [refreshes the group memberships.](#refreshing-group-memberships)
For new users, the new permissions take effect when the users log in to Rancher for the first time. New users from this group will receive the permissions from the custom global role in addition to the **New User Default** global permissions. By default, the **New User Default** permissions are equivalent to the **Standard User** global role, but the default permissions can be [configured.](#configuring-default-global-permissions)
If a user is removed from the external authentication provider group, they would lose their permissions from the custom global role that was assigned to the group. They would continue to have any remaining roles that were assigned to them, which would typically include the roles marked as **New User Default.** Rancher will remove the permissions that are associated with the group when the user logs out, or when an administrator [refreshes group memberships,](#refreshing-group-memberships) whichever comes first.
> **Prerequisites:** You can only assign a global role to a group if:
>
> * You have set up an [external authentication provider]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/#external-vs-local-authentication)
> * The external authentication provider supports [user groups]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/user-groups/)
> * You have already set up at least one user group with the authentication provider
To assign a custom global role to a group, follow these steps:
1. From the **Global** view, go to **Security > Groups.**
1. Click **Assign Global Role.**
1. In the **Select Group To Add** field, choose the existing group that will be assigned the custom global role.
1. In the **Global Permissions,** **Custom,** and/or **Built-in** sections, select the permissions that the group should have.
1. Click **Create.**
**Result:** The custom global role will take effect when the users in the group log into Rancher.
### Refreshing Group Memberships
When an administrator updates the global permissions for a group, the changes take effect for individual group members after they log out of Rancher and log in again.
To make the changes take effect immediately, an administrator or cluster owner can refresh group memberships.
An administrator might also want to refresh group memberships if a user is removed from a group in the external authentication service. In that case, the refresh makes Rancher aware that the user was removed from the group.
To refresh group memberships,
1. From the **Global** view, click **Security > Users.**
1. Click **Refresh Group Memberships.**
**Result:** Any changes to the group members' permissions will take effect.
@@ -1,103 +0,0 @@
---
title: "Access a Cluster with Kubectl and kubeconfig"
description: "Learn how you can access and manage your Kubernetes clusters using kubectl with kubectl Shell or with kubectl CLI and kubeconfig file. A kubeconfig file is used to configure access to Kubernetes. When you create a cluster with Rancher, it automatically creates a kubeconfig for your cluster."
weight: 6
---
This section describes how to manipulate your downstream Kubernetes cluster with kubectl from the Rancher UI or from your workstation.
For more information on using kubectl, see [Kubernetes Documentation: Overview of kubectl](https://kubernetes.io/docs/reference/kubectl/overview/).
- [Accessing clusters with kubectl shell in the Rancher UI](#accessing-clusters-with-kubectl-shell-in-the-rancher-ui)
- [Accessing clusters with kubectl from your workstation](#accessing-clusters-with-kubectl-from-your-workstation)
- [Note on Resources created using kubectl](#note-on-resources-created-using-kubectl)
- [Authenticating Directly with a Downstream Cluster](#authenticating-directly-with-a-downstream-cluster)
- [Connecting Directly to Clusters with FQDN Defined](#connecting-directly-to-clusters-with-fqdn-defined)
- [Connecting Directly to Clusters without FQDN Defined](#connecting-directly-to-clusters-without-fqdn-defined)
### Accessing Clusters with kubectl Shell in the Rancher UI
You can access and manage your clusters by logging into Rancher and opening the kubectl shell in the UI. No further configuration is necessary.
1. From the **Global** view, open the cluster that you want to access with kubectl.
2. Click **Launch kubectl**. Use the window that opens to interact with your Kubernetes cluster.
### Accessing Clusters with kubectl from Your Workstation
This section describes how to download your cluster's kubeconfig file, launch kubectl from your workstation, and access your downstream cluster.
This alternative method of accessing the cluster allows you to authenticate with Rancher and manage your cluster without using the Rancher UI.
> **Prerequisites:** These instructions assume that you have already created a Kubernetes cluster, and that kubectl is installed on your workstation. For help installing kubectl, refer to the official [Kubernetes documentation.](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
1. Log into Rancher. From the **Global** view, open the cluster that you want to access with kubectl.
1. Click **Kubeconfig File**.
1. Copy the contents displayed to your clipboard.
1. Paste the contents into a new file on your local computer. Move the file to `~/.kube/config`. Note: The default location that kubectl uses for the kubeconfig file is `~/.kube/config`, but you can use any directory and specify it using the `--kubeconfig` flag, as in this command:
```
kubectl --kubeconfig /custom/path/kube.config get pods
```
1. From your workstation, launch kubectl. Use it to interact with your Kubernetes cluster.
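As an alternative to passing `--kubeconfig` on every command, you can also point the standard `KUBECONFIG` environment variable at the downloaded file for the current shell session; the path below is illustrative.
```
export KUBECONFIG=/custom/path/kube.config
kubectl get pods
```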
### Note on Resources Created Using kubectl
Rancher will discover and show resources created by `kubectl`. However, these resources might not have all the necessary annotations on discovery. If an operation (for instance, scaling the workload) is done to the resource using the Rancher UI/API, this may trigger recreation of the resources due to the missing annotations. This should only happen the first time an operation is done to the discovered resource.
# Authenticating Directly with a Downstream Cluster
This section is intended to help you set up an alternative method to access an [RKE cluster.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters)
This method is only available for RKE clusters that have the [authorized cluster endpoint]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/#4-authorized-cluster-endpoint) enabled. When Rancher creates this RKE cluster, it generates a kubeconfig file that includes additional kubectl context(s) for accessing your cluster. This additional context allows you to use kubectl to authenticate with the downstream cluster without authenticating through Rancher. For a longer explanation of how the authorized cluster endpoint works, refer to [this page.](../ace)
As a best practice, we recommend setting up this method to access your RKE cluster, so that you can still reach the cluster even if you can't connect to Rancher.
> **Prerequisites:** The following steps assume that you have created a Kubernetes cluster and followed the steps to [connect to your cluster with kubectl from your workstation.](#accessing-clusters-with-kubectl-from-your-workstation)
To find the name of the context(s) in your downloaded kubeconfig file, run:
```
kubectl config get-contexts --kubeconfig /custom/path/kube.config
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* my-cluster my-cluster user-46tmn
my-cluster-controlplane-1 my-cluster-controlplane-1 user-46tmn
```
In this example, when you use `kubectl` with the first context, `my-cluster`, you will be authenticated through the Rancher server.
With the second context, `my-cluster-controlplane-1`, you would authenticate with the authorized cluster endpoint, communicating with a downstream RKE cluster directly.
We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the [recommended architecture section.]({{<baseurl>}}/rancher/v2.x/en/overview/architecture-recommendations/#architecture-for-an-authorized-cluster-endpoint)
Now that you have the name of the context needed to authenticate directly with the cluster, you can pass the name of the context in as an option when running kubectl commands. The commands will differ depending on whether your cluster has an FQDN defined. Examples are provided in the sections below.
If `kubectl` works normally with this context, it confirms that you can access your cluster while bypassing Rancher's authentication proxy.
### Connecting Directly to Clusters with FQDN Defined
If an FQDN is defined for the cluster, a single context referencing the FQDN will be created. The context will be named `<CLUSTER_NAME>-fqdn`. When you want to use `kubectl` to access this cluster without Rancher, you will need to use this context.
Assuming the kubeconfig file is located at `~/.kube/config`:
```
kubectl --context <CLUSTER_NAME>-fqdn get nodes
```
Directly referencing the location of the kubeconfig file:
```
kubectl --kubeconfig /custom/path/kube.config --context <CLUSTER_NAME>-fqdn get pods
```
### Connecting Directly to Clusters without FQDN Defined
If there is no FQDN defined for the cluster, extra contexts will be created referencing the IP address of each node in the control plane. Each context will be named `<CLUSTER_NAME>-<NODE_NAME>`. When you want to use `kubectl` to access this cluster without Rancher, you will need to use this context.
Assuming the kubeconfig file is located at `~/.kube/config`:
```
kubectl --context <CLUSTER_NAME>-<NODE_NAME> get nodes
```
Directly referencing the location of the kubeconfig file:
```
kubectl --kubeconfig /custom/path/kube.config --context <CLUSTER_NAME>-<NODE_NAME> get pods
```
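If you do not want to pass `--context` on every command, you can make the direct context the default for your kubeconfig. This is standard `kubectl` behavior rather than anything Rancher-specific:
```
kubectl config use-context <CLUSTER_NAME>-<NODE_NAME>
kubectl get nodes
```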
@@ -1,50 +0,0 @@
---
title: Adding Users to Projects
weight: 2
---
If you want to provide a user with access and permissions to _specific_ projects and resources within a cluster, assign the user a project membership.
You can add members to a project as it is created, or add them to an existing project.
>**Tip:** Want to provide a user with access to _all_ projects within a cluster? See [Adding Cluster Members]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/cluster-members/) instead.
### Adding Members to a New Project
You can add members to a project as you create it (recommended if possible). For details on creating a new project, refer to the [cluster administration section.]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/)
### Adding Members to an Existing Project
Following project creation, you can add users as project members so that they can access its resources.
1. From the **Global** view, open the project that you want to add members to.
2. From the main menu, select **Members**. Then click **Add Member**.
3. Search for the user or group that you want to add to the project.
If external authentication is configured:
- Rancher returns users from your external authentication source as you type.
- A drop-down allows you to add groups instead of individual users. The dropdown only lists groups that you, the logged in user, are included in.
>**Note:** If you are logged in as a local user, external users do not display in your search results.
4. Assign the user or group **Project** roles.
[What are Project Roles?]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/)
>**Notes:**
>
>- Users assigned the `Owner` or `Member` role for a project automatically inherit the `namespace creation` role. However, this role is a [Kubernetes ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole), meaning its scope extends to all projects in the cluster. Therefore, users explicitly assigned the `Owner` or `Member` role for a project can create namespaces in other projects they're assigned to, even with only the `Read Only` role assigned.
>
>- For `Custom` roles, you can modify the list of individual roles available for assignment.
>
> - To add roles to the list, [Add a Custom Role]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles).
> - To remove roles from the list, [Lock/Unlock Roles]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/locked-roles/).
**Result:** The chosen users are added to the project.
- To revoke project membership, select the user and click **Delete**. This action deletes membership, not the user.
- To modify a user's roles in the project, delete them from the project, and then re-add them with modified roles.
@@ -1,137 +0,0 @@
---
title: Backing up a Cluster
weight: 7
---
You can easily perform etcd backup and recovery for [Rancher launched Kubernetes clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) from the Rancher UI.
Rancher recommends configuring recurring `etcd` snapshots for all production clusters. One-time snapshots can also easily be taken as needed.
Snapshots of the etcd database are taken and saved either [locally onto the etcd nodes](#local-backup-target) or to an [S3-compatible target](#s3-backup-target). The advantage of configuring S3 is that if all etcd nodes are lost, your snapshot is still saved remotely and can be used to restore the cluster.
This section covers the following topics:
- [How snapshots work](#how-snapshots-work)
- [Configuring recurring snapshots](#configuring-recurring-snapshots)
- [One-time snapshots](#one-time-snapshots)
- [Snapshot backup targets](#snapshot-backup-targets)
- [Local backup target](#local-backup-target)
- [S3 backup target](#s3-backup-target)
- [Using a custom CA certificate for S3](#using-a-custom-ca-certificate-for-s3)
- [IAM Support for storing snapshots in S3](#iam-support-for-storing-snapshots-in-s3)
- [Viewing available snapshots](#viewing-available-snapshots)
- [Safe timestamps](#safe-timestamps)
- [Enabling snapshot features for clusters created before Rancher v2.2.0](#enabling-snapshot-features-for-clusters-created-before-rancher-v2-2-0)
# How Snapshots Work
{{% tabs %}}
{{% tab "Rancher v2.4.0+" %}}
When Rancher creates a snapshot, it includes three components:
- The cluster data in etcd
- The Kubernetes version
- The cluster configuration in the form of the `cluster.yml`
Because the Kubernetes version is now included in the snapshot, it is possible to restore a cluster to a prior Kubernetes version.
The multiple components of the snapshot allow you to select from the following options if you need to restore a cluster from a snapshot:
- **Restore just the etcd contents:** This restoration is similar to restoring to snapshots in Rancher prior to v2.4.0.
- **Restore etcd and Kubernetes version:** This option should be used if a Kubernetes upgrade is the reason that your cluster is failing, and you haven't made any cluster configuration changes.
- **Restore etcd, Kubernetes version, and cluster configuration:** This option should be used if you changed both the Kubernetes version and cluster configuration when upgrading.
It's always recommended to take a new snapshot before any upgrades.
{{% /tab %}}
{{% tab "Rancher prior to v2.4.0" %}}
When Rancher creates a snapshot, only the etcd data is included in the snapshot.
Because the Kubernetes version is not included in the snapshot, there is no option to restore a cluster to a different Kubernetes version.
It's always recommended to take a new snapshot before any upgrades.
{{% /tab %}}
{{% /tabs %}}
# Configuring Recurring Snapshots
Select how often you want recurring snapshots to be taken, as well as how many snapshots to keep. The time between snapshots is measured in hours. With timestamped snapshots, you can perform a point-in-time recovery.
By default, [Rancher launched Kubernetes clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) are configured to take recurring snapshots (saved to local disk). To protect against local disk failure, using the [S3 Target](#s3-backup-target) or replicating the path on disk is advised.
When provisioning or editing the cluster, the configuration for snapshots can be found in the advanced section for **Cluster Options**. Click **Show advanced options**.
In the **Advanced Cluster Options** section, there are several options available to configure:
| Option | Description | Default Value|
| --- | ---| --- |
|[etcd Snapshot Backup Target](#snapshot-backup-targets)| Select where you want the snapshots to be saved. Options are either local or in S3 | local|
|Recurring etcd Snapshot Enabled| Enable/Disable recurring snapshots | Yes|
|[Recurring etcd Snapshot Creation Period](#snapshot-creation-period-and-retention-count) | Time in hours between recurring snapshots| 12 hours |
|[Recurring etcd Snapshot Retention Count](#snapshot-creation-period-and-retention-count)| Number of snapshots to retain| 6 |
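The same options can also be set by editing the cluster as YAML. The following is a minimal sketch assuming the RKE `backup_config` schema; adjust the interval and retention to match your requirements:
```
rancher_kubernetes_engine_config:
  services:
    etcd:
      backup_config:
        enabled: true        # Recurring etcd Snapshot Enabled
        interval_hours: 12   # Recurring etcd Snapshot Creation Period
        retention: 6         # Recurring etcd Snapshot Retention Count
```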
# One-Time Snapshots
In addition to recurring snapshots, you may want to take a "one-time" snapshot. For example, before upgrading the Kubernetes version of a cluster it's best to backup the state of the cluster to protect against upgrade failure.
1. In the **Global** view, navigate to the cluster that you want to take a one-time snapshot of.
2. Click **&#8942; > Snapshot Now**.
**Result:** Based on your [snapshot backup target](#snapshot-backup-targets), a one-time snapshot will be taken and saved in the selected backup target.
# Snapshot Backup Targets
Rancher supports two different backup targets:
* [Local Target](#local-backup-target)
* [S3 Target](#s3-backup-target)
### Local Backup Target
By default, the `local` backup target is selected. The benefit of this option is that there is no external configuration. Snapshots are automatically saved locally to the etcd nodes in the [Rancher launched Kubernetes clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) in `/opt/rke/etcd-snapshots`. All recurring snapshots are taken at configured intervals. The downside of using the `local` backup target is that if there is a total disaster and _all_ etcd nodes are lost, there is no way to restore the cluster.
### S3 Backup Target
The `S3` backup target allows users to configure an S3-compatible backend to store the snapshots. The primary benefit of this option is that if the cluster loses all the etcd nodes, the cluster can still be restored because the snapshots are stored externally. Rancher recommends external targets like `S3`; however, they do require additional configuration effort that should be considered.
| Option | Description | Required|
|---|---|---|
|S3 Bucket Name| S3 bucket name where backups will be stored| *|
|S3 Region|S3 region for the backup bucket| |
|S3 Region Endpoint|S3 region endpoint for the backup bucket|* |
|S3 Access Key|S3 access key with permission to access the backup bucket|*|
|S3 Secret Key|S3 secret key with permission to access the backup bucket|*|
| Custom CA Certificate | A custom certificate used to access private S3 backends ||
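As a rough sketch, an S3 backup target configured through the `Edit as Yaml` interface might look like the following. The field names assume the RKE `s3_backup_config` schema, and the bucket, region, endpoint, and credentials are placeholders:
```
rancher_kubernetes_engine_config:
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
        retention: 6
        s3_backup_config:
          bucket_name: my-etcd-snapshots          # placeholder bucket name
          region: us-west-2                       # placeholder region
          endpoint: s3.us-west-2.amazonaws.com    # placeholder region endpoint
          access_key: <S3_ACCESS_KEY>
          secret_key: <S3_SECRET_KEY>
          custom_ca: <PEM_ENCODED_CA_CERTIFICATE> # only for self-signed S3 backends
```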
### Using a custom CA certificate for S3
The backup snapshot can be stored on a custom `S3` backend such as [MinIO](https://min.io/). If the S3 backend uses a self-signed or custom certificate, provide that certificate using the `Custom CA Certificate` option so that Rancher can connect to the S3 backend.
### IAM Support for Storing Snapshots in S3
The `S3` backup target supports IAM authentication to the AWS API in addition to using API credentials. An IAM role gives temporary permissions that an application can use when making API calls to S3 storage. To use IAM authentication, the following requirements must be met:
- The cluster etcd nodes must have an instance role that has read/write access to the designated backup bucket.
- The cluster etcd nodes must have network access to the specified S3 endpoint.
- The Rancher Server worker node(s) must have an instance role that has read/write to the designated backup bucket.
- The Rancher Server worker node(s) must have network access to the specified S3 endpoint.
To give an application access to S3, refer to the AWS documentation on [Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances.](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html)
# Viewing Available Snapshots
The list of all available snapshots for the cluster is available in the Rancher UI.
1. In the **Global** view, navigate to the cluster that you want to view snapshots for.
2. Click **Tools > Snapshots** from the navigation bar to view the list of saved snapshots. These snapshots include a timestamp of when they were created.
# Safe Timestamps
As of v2.2.6, snapshot files are timestamped to simplify processing them using external tools and scripts, but in some S3-compatible backends, these timestamps were unusable. As of Rancher v2.3.0, the option `safe_timestamp` is added to support compatible file names. When this flag is set to `true`, all special characters in the snapshot filename timestamp are replaced.
This option is not available directly in the UI, and is only available through the `Edit as Yaml` interface.
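For example, assuming the same `backup_config` block shown earlier, enabling this option through `Edit as Yaml` might look like the following sketch:
```
rancher_kubernetes_engine_config:
  services:
    etcd:
      backup_config:
        enabled: true
        safe_timestamp: true   # replace special characters in snapshot filename timestamps
```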
# Enabling Snapshot Features for Clusters Created Before Rancher v2.2.0
If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Rancher, you must [edit the cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/restoring-etcd/).
@@ -1,23 +0,0 @@
---
title: Best Practices Guide
weight: 3
---
The purpose of this section is to consolidate best practices for Rancher implementations. This also includes recommendations for related technologies, such as Kubernetes, Docker, containers, and more. The objective is to improve the outcome of a Rancher implementation using the operational experience of Rancher and its customers.
If you have any questions about how these might apply to your use case, please contact your Customer Success Manager or Support.
Use the navigation bar on the left to find the current best practices for managing and deploying the Rancher Server.
For more guidance on best practices, you can consult these resources:
- [Rancher Docs]({{<baseurl>}})
- [Monitoring]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/)
- [Backups and Disaster Recovery]({{<baseurl>}}/rancher/v2.x/en/backups/)
- [Security]({{<baseurl>}}/rancher/v2.x/en/security/)
- [Rancher Blog](https://rancher.com/blog/)
- [Articles about best practices on the Rancher blog](https://rancher.com/tags/best-practices/)
- [101 More Security Best Practices for Kubernetes](https://rancher.com/blog/2019/2019-01-17-101-more-kubernetes-security-best-practices/)
- [Rancher Forum](https://forums.rancher.com/)
- [Rancher Users Slack](https://slack.rancher.io/)
- [Rancher Labs YouTube Channel - Online Meetups, Demos, Training, and Webinars](https://www.youtube.com/channel/UCh5Xtp82q8wjijP8npkVTBA/featured)