Merge pull request #2965 from catherineluse/formatting

Fix internal links
This commit is contained in:
Catherine Luse
2021-01-12 11:39:52 -07:00
committed by GitHub
68 changed files with 130 additions and 113 deletions
@@ -26,10 +26,11 @@ Configuring Rancher to allow your users to authenticate with their Azure AD acco
<!-- TOC -->
- [1. Register Rancher with Azure](#1-register-rancher-with-azure)
- [2. Create an Azure API Key](#2-create-an-azure-api-key)
- [2. Create a new client secret](#2-create-a-new-client-secret)
- [3. Set Required Permissions for Rancher](#3-set-required-permissions-for-rancher)
- [4. Copy Azure Application Data](#4-copy-azure-application-data)
- [5. Configure Azure AD in Rancher](#5-configure-azure-ad-in-rancher)
- [4. Add a Reply URL](#4-add-a-reply-url)
- [5. Copy Azure Application Data](#5-copy-azure-application-data)
- [6. Configure Azure AD in Rancher](#6-configure-azure-ad-in-rancher)
<!-- /TOC -->
@@ -175,7 +176,7 @@ As your final step in Azure, copy the data that you'll use to configure Rancher
>**Note:** Copy the v1 version of the endpoints.
### 5. Configure Azure AD in Rancher
### 6. Configure Azure AD in Rancher
From the Rancher UI, enter information about your AD instance hosted in Azure to complete configuration.
@@ -7,7 +7,7 @@ If your organization uses G Suite for user authentication, you can configure Ran
Only admins of the G Suite domain have access to the Admin SDK. Therefore, only G Suite admins can configure Google OAuth for Rancher.
Within Rancher, only administrators or users with the **Manage Authentication** [global role]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) can configure authentication.
Within Rancher, only administrators or users with the **Manage Authentication** [global role]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) can configure authentication.
# Prerequisites
- You must have a [G Suite admin account](https://admin.google.com) configured.
@@ -40,7 +40,7 @@ If you are in doubt about the correct values to enter in the user/group Search B
| Port | Specify the port at which the OpenLDAP server is listening for connections. Unencrypted LDAP normally uses the standard port of 389, while LDAPS uses port 636.|
| TLS | Check this box to enable LDAP over SSL/TLS (commonly known as LDAPS). You will also need to paste in the CA certificate if the server uses a self-signed/enterprise-signed certificate. |
| Server Connection Timeout | The duration, in seconds, that Rancher waits before considering the server unreachable. |
| Service Account Distinguished Name | Enter the Distinguished Name (DN) of the user that should be used to bind, search and retrieve LDAP entries. (see [Prerequisites](#prerequisites)). |
| Service Account Distinguished Name | Enter the Distinguished Name (DN) of the user that should be used to bind, search and retrieve LDAP entries. |
| Service Account Password | The password for the service account. |
| User Search Base | Enter the Distinguished Name of the node in your directory tree from which to start searching for user objects. All users must be descendents of this base DN. For example: "ou=people,dc=acme,dc=com".|
| Group Search Base | If your groups live under a different node than the one configured under `User Search Base` you will need to provide the Distinguished Name here. Otherwise leave this field empty. For example: "ou=groups,dc=acme,dc=com".|
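If you want to sanity-check these values before saving the form, the same settings map directly onto an `ldapsearch` query you can run from any machine with the OpenLDAP client tools. Every value below is illustrative; substitute your own directory's host and DNs:

```shell
# Illustrative values only; replace with your directory's actual settings.
LDAP_HOST="ldap.acme.com"
LDAP_PORT=389                                # use 636 when the TLS box is checked (LDAPS)
BIND_DN="cn=rancher-bind,dc=acme,dc=com"     # Service Account Distinguished Name
USER_SEARCH_BASE="ou=people,dc=acme,dc=com"  # User Search Base
# With ldap-utils installed, a query like this verifies that the service
# account can bind and that user entries live under the search base:
printf 'ldapsearch -x -H ldap://%s:%s -D "%s" -W -b "%s"\n' \
  "$LDAP_HOST" "$LDAP_PORT" "$BIND_DN" "$USER_SEARCH_BASE"
```

If the printed query returns your user entries when run against the server, the same values should work in the Rancher form.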
@@ -14,7 +14,7 @@ If there are specific cluster drivers that you do not want to show your users, y
>**Prerequisites:** To create, edit, or delete cluster drivers, you need _one_ of the following permissions:
>
>- [Administrator Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/)
>- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Cluster Drivers]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned.
>- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Cluster Drivers]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) role assigned.
## Activating/Deactivating Cluster Drivers
@@ -15,7 +15,7 @@ If there are specific node drivers that you don't want to show to your users, yo
>**Prerequisites:** To create, edit, or delete drivers, you need _one_ of the following permissions:
>
>- [Administrator Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/)
>- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Node Drivers]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned.
>- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Node Drivers]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) role assigned.
## Activating/Deactivating Node Drivers
@@ -53,7 +53,7 @@ For details on how each cluster role can access Kubernetes resources, you can go
### Giving a Custom Cluster Role to a Cluster Member
After an administrator [sets up a custom cluster role,]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/#adding-a-custom-role) cluster owners and admins can then assign those roles to cluster members.
After an administrator [sets up a custom cluster role,]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/) cluster owners and admins can then assign those roles to cluster members.
To assign a custom role to a new cluster member, you can use the Rancher UI. To modify the permissions of an existing member, you will need to use the Rancher API view.
@@ -22,7 +22,7 @@ This section covers the following topics:
To complete the tasks on this page, one of the following permissions is required:
- [Administrator Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/).
- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Roles]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned.
- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Roles]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) role assigned.
## Creating A Custom Role for a Cluster or Project
@@ -104,8 +104,8 @@ The default roles, Administrator and Standard User, each come with multiple glob
Administrators can enforce custom global permissions in multiple ways:
- [Changing the default permissions for new users](#configuring-default-global-permissions)
- [Editing the permissions of an existing user](#configuring-global-permissions-for-individual-users)
- [Assigning a custom global permission to a group](#assigning-a-custom-global-permission-to-a-group)
- [Configuring global permissions for individual users](#configuring-global-permissions-for-individual-users)
- [Configuring global permissions for groups](#configuring-global-permissions-for-groups)
### Custom Global Permissions Reference
@@ -156,7 +156,7 @@ To change the default global permissions that are assigned to external users upo
**Result:** The default global permissions are configured based on your changes. Permissions assigned to new users display a check in the **New User Default** column.
### Configuring Global Permissions for Existing Individual Users
### Configuring Global Permissions for Individual Users
To configure permission for a user,
@@ -83,16 +83,16 @@ The documents in this section explain the details of RKE template management:
- [Getting permission to create templates]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/creator-permissions/)
- [Creating and revising templates]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/creating-and-revising/)
- [Enforcing template settings]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/enforcement/#requiring-new-clusters-to-use-a-cluster-template)
- [Enforcing template settings](./enforcement/#requiring-new-clusters-to-use-an-rke-template)
- [Overriding template settings]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/overrides/)
- [Sharing templates with cluster creators]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing/#sharing-templates-with-specific-users)
- [Sharing templates with cluster creators]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing/#sharing-templates-with-specific-users-or-groups)
- [Sharing ownership of a template]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing/#sharing-ownership-of-templates)
An [example YAML configuration file for a template]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/example-yaml) is provided for reference.
# Applying Templates
You can [create a cluster from a template]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/applying-templates/#creating-a-cluster-from-a-cluster-template) that you created, or from a template that has been [shared with you.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing)
You can [create a cluster from a template]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/applying-templates/#creating-a-cluster-from-an-rke-template) that you created, or from a template that has been [shared with you.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing)
If the RKE template owner creates a new revision of the template, you can [upgrade your cluster to that revision.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/applying-templates/#updating-a-cluster-created-with-an-rke-template)
@@ -37,7 +37,7 @@ You can revise, share, and delete a template if you are an owner of the template
1. From the **Global** view, click **Tools > RKE Templates.**
1. Click **Add Template.**
1. Provide a name for the template. An auto-generated name is already provided for the template's first version, which is created along with this template.
1. Optional: Share the template with other users or groups by [adding them as members.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing/#sharing-templates-with-specific-users) You can also make the template public to share with everyone in the Rancher setup.
1. Optional: Share the template with other users or groups by [adding them as members.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing/#sharing-templates-with-specific-users-or-groups) You can also make the template public to share with everyone in the Rancher setup.
1. Then follow the form on screen to save the cluster configuration parameters as part of the template's revision. The revision can be marked as default for this template.
**Result:** An RKE template with one revision is configured. You can use this RKE template revision later when you [provision a Rancher-launched cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters). After a cluster is managed by an RKE template, it cannot be disconnected and the option to uncheck **Use an existing RKE Template and Revision** will be unavailable.
@@ -31,7 +31,7 @@ In this way, the administrators enforce the Kubernetes version across the organi
Let's say an organization has both basic and advanced users. Administrators want the basic users to be required to use a template, while the advanced users and administrators create their clusters however they want.
1. First, an administrator turns on [RKE template enforcement.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/enforcement/#requiring-new-clusters-to-use-a-cluster-template) This means that every [standard user]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) in Rancher will need to use an RKE template when they create a cluster.
1. First, an administrator turns on [RKE template enforcement.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/enforcement/#requiring-new-clusters-to-use-an-rke-template) This means that every [standard user]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) in Rancher will need to use an RKE template when they create a cluster.
1. The administrator then creates two templates:
- One template for basic users, with almost every option specified except for access keys
@@ -54,18 +54,18 @@ This procedure creates a backup that you can restore if Rancher encounters a dis
1. Using a remote Terminal connection, log into the node running your Rancher Server.
1. Stop the container currently running Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the [name of your Rancher container](#before-you-start).
1. Stop the container currently running Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the [name of your Rancher container](#how-to-read-placeholders).
```
docker stop <RANCHER_CONTAINER_NAME>
```
1. <a id="backup"></a>Use the command below, replacing each [placeholder](#before-you-start), to create a data container from the Rancher container that you just stopped.
1. <a id="backup"></a>Use the command below, replacing each placeholder, to create a data container from the Rancher container that you just stopped.
```
docker create --volumes-from <RANCHER_CONTAINER_NAME> --name rancher-data-<DATE> rancher/rancher:<RANCHER_CONTAINER_TAG>
```
1. <a id="tarball"></a>From the data container that you just created (`rancher-data-<DATE>`), create a backup tarball (`rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`). Use the following command, replacing each [placeholder](#before-you-start).
1. <a id="tarball"></a>From the data container that you just created (`rancher-data-<DATE>`), create a backup tarball (`rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`). Use the following command, replacing each placeholder.
```
docker run --volumes-from rancher-data-<DATE> -v $PWD:/backup:z busybox tar pzcvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz /var/lib/rancher
```
@@ -77,7 +77,7 @@ This procedure creates a backup that you can restore if Rancher encounters a dis
1. Move your backup tarball to a safe location external to your Rancher Server. Then delete the `rancher-data-<DATE>` container from your Rancher Server.
1. Restart Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the name of your [Rancher container](#before-you-start).
1. Restart Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the name of your Rancher container.
```
docker start <RANCHER_CONTAINER_NAME>
```
@@ -42,7 +42,7 @@ Using a [backup]({{<baseurl>}}/rancher/v2.x/en/backups/backups/single-node-backu
1. Using a remote Terminal connection, log into the node running your Rancher Server.
1. Stop the container currently running Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the [name of your Rancher container](#before-you-start).
1. Stop the container currently running Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the name of your Rancher container.
```
docker stop <RANCHER_CONTAINER_NAME>
```
@@ -51,7 +51,7 @@ Using a [backup]({{<baseurl>}}/rancher/v2.x/en/backups/backups/single-node-backu
If you followed the naming convention we suggested in [Creating Backups—Docker Installs]({{<baseurl>}}/rancher/v2.x/en/backups/backups/single-node-backups/), it will have a name similar to `rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`.
1. Enter the following command to delete your current state data and replace it with your backup data, replacing the [placeholders](#before-you-start). Don't forget to close the quotes.
1. Enter the following command to delete your current state data and replace it with your backup data, replacing the placeholders. Don't forget to close the quotes.
>**Warning!** This command deletes all current state data from your Rancher Server container. Any changes saved after your backup tarball was created will be lost.
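The warning above is worth taking literally: the restore command wipes the existing state directory before unpacking the tarball. The same delete-then-extract pattern can be tried safely against a throwaway directory first; the paths below are illustrative, not the real Rancher data path:

```shell
# Safe demonstration of the wipe-and-restore pattern using a temp directory.
set -e
WORKDIR=$(mktemp -d)
mkdir -p "${WORKDIR}/state"
echo "cluster-state" > "${WORKDIR}/state/data.txt"
tar pzcf "${WORKDIR}/backup.tar.gz" -C "${WORKDIR}" state   # back up
rm -rf "${WORKDIR}/state"                                   # wipe (the destructive step)
tar pzxf "${WORKDIR}/backup.tar.gz" -C "${WORKDIR}"         # restore from the tarball
cat "${WORKDIR}/state/data.txt"                             # prints "cluster-state"
```

Anything written to the directory after the tarball was created would be gone after the `rm`, which is exactly why changes made since the backup are lost.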
@@ -63,7 +63,7 @@ Using a [backup]({{<baseurl>}}/rancher/v2.x/en/backups/backups/single-node-backu
**Step Result:** A series of commands should run.
1. Restart your Rancher Server container, replacing the [placeholder](#before-you-start). It will restart using your backup data.
1. Restart your Rancher Server container, replacing the placeholder. It will restart using your backup data.
```
docker start <RANCHER_CONTAINER_NAME>
```
@@ -11,14 +11,13 @@ The Backup Create page lets you configure a schedule, enable encryption and spec
{{< img "/img/rancher/backup_restore/backup/backup.png" "">}}
- [Schedule](#schedule)
- [Encryption](#encryptionconfigname)
- [Storage Location](#storagelocation)
- [Encryption](#encryption)
- [Storage Location](#storage-location)
- [S3](#s3)
- [Example S3 Storage Configuration](#example-s3-storage-configuration)
- [Example MinIO Configuration](#example-minio-configuration)
- [Example credentialSecret](#example-credentialsecret)
- [IAM Permissions for EC2 Nodes to Access S3](#iam-permissions-for-ec2-nodes-to-access-s3)
- [RetentionCount](#retentioncount)
- [Examples](#examples)
@@ -91,7 +90,7 @@ The S3 storage location contains the following configuration fields:
1. **Region** (optional): The AWS [region](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) where the S3 bucket is located. This field isn't needed for configuring MinIO.
1. **Folder** (optional): The name of the folder in the S3 bucket where backup files will be stored.
1. **Endpoint**: The [endpoint](https://docs.aws.amazon.com/general/latest/gr/s3.html) that is used to access S3 in the region of your bucket.
1. **Endpoint CA** (optional): This should be the Base64 encoded CA cert. For an example, refer to the [example S3 compatible configuration.](#example-s3-compatible-storage-configuration)
1. **Endpoint CA** (optional): This should be the Base64 encoded CA cert. For an example, refer to the [example S3 compatible configuration.](#example-s3-storage-configuration)
1. **Skip TLS Verifications** (optional): Set to true if you are not using TLS.
@@ -103,7 +102,7 @@ The S3 storage location contains the following configuration fields:
| `folder` | The name of the folder in the S3 bucket where backup files will be stored. | |
| `region` | The AWS [region](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) where the S3 bucket is located. | ✓ |
| `endpoint` | The [endpoint](https://docs.aws.amazon.com/general/latest/gr/s3.html) that is used to access S3 in the region of your bucket. | ✓ |
| `endpointCA` | This should be the Base64 encoded CA cert. For an example, refer to the [example S3 compatible configuration.](#example-s3-compatible-storage-configuration) | |
| `endpointCA` | This should be the Base64 encoded CA cert. For an example, refer to the [example S3 compatible configuration.](#example-s3-storage-configuration) | |
| `insecureTLSSkipVerify` | Set to true if you are not using TLS. | |
### Example S3 Storage Configuration
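A minimal sketch of these fields in YAML form, with placeholder bucket, region, and secret names (your values will differ):

```yaml
# All values below are placeholders.
storageLocation:
  s3:
    credentialSecretName: s3-creds       # Secret holding the S3 access/secret keys
    credentialSecretNamespace: default   # namespace of that Secret
    bucketName: rancher-backups
    folder: rancher
    region: us-west-2
    endpoint: s3.us-west-2.amazonaws.com
```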
@@ -41,7 +41,7 @@ Select this option if you are restoring from a backup file that exists in the de
### An S3-compatible object store
Select this option if no default storage location is configured at the operator-level, OR if the backup file exists in a different S3 bucket than the one configured as the default storage location. Provide the exact filename in the **Backup Filename** field. Refer to [this section](#getting-the-backup-filename-from-s3) for exact steps on getting the backup filename from s3. Fill in all the details for the S3 compatible object store. Its fields are exactly same as ones for the `backup.StorageLocation` configuration in the [Backup custom resource.](../../configuration/backup-config/#storagelocation)
Select this option if no default storage location is configured at the operator level, or if the backup file exists in a different S3 bucket than the one configured as the default storage location. Provide the exact filename in the **Backup Filename** field. Refer to [this section](#getting-the-backup-filename-from-s3) for the exact steps to get the backup filename from S3. Fill in all the details for the S3-compatible object store. Its fields are exactly the same as the ones for the `backup.StorageLocation` configuration in the [Backup custom resource.](../../configuration/backup-config/#storage-location)
{{< img "/img/rancher/backup_restore/restore/s3store.png" "">}}
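For reference, a restore driven by a manifest rather than the UI might look like the following sketch. The filename, bucket details, and names are placeholders; only the `storageLocation` shape follows the fields described above:

```yaml
# Sketch only; every value here is a placeholder.
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-from-s3
spec:
  backupFilename: rancher-backup-2021-01-12.tar.gz   # exact filename from the S3 bucket
  storageLocation:
    s3:
      credentialSecretName: s3-creds
      credentialSecretNamespace: default
      bucketName: rancher-backups
      folder: rancher
      region: us-west-2
      endpoint: s3.us-west-2.amazonaws.com
```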
@@ -40,7 +40,7 @@ You can choose to not have any operator-level storage location configured. If yo
Installing the `rancher-backup` chart by selecting the StorageClass option will create a Persistent Volume Claim (PVC), and Kubernetes will in turn dynamically provision a Persistent Volume (PV) where all the backups will be saved by default.
For information about creating storage classes refer to [this section.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/#1-add-a-storage-class-and-configure-it-to-use-your-storage-provider)
For information about creating storage classes refer to [this section.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/)
> **Important**
> It is highly recommended to use a StorageClass with a reclaim policy of "Retain". Otherwise, if the PVC created by the `rancher-backup` chart is deleted (during an app upgrade, or accidentally), the PV is deleted along with it, and all backups saved on that volume are lost.
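A sketch of such a StorageClass, assuming the AWS EBS provisioner (substitute your own provider's provisioner and parameters):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rancher-backup-retain
provisioner: kubernetes.io/aws-ebs   # assumption: AWS; use your provider's provisioner
reclaimPolicy: Retain                # PV (and the backups on it) survives PVC deletion
parameters:
  type: gp2
```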
@@ -44,18 +44,18 @@ This procedure creates a backup that you can restore if Rancher encounters a dis
1. Using a remote Terminal connection, log into the node running your Rancher Server.
1. Stop the container currently running Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the [name of your Rancher container](#before-you-start).
1. Stop the container currently running Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the name of your Rancher container.
```
docker stop <RANCHER_CONTAINER_NAME>
```
1. <a id="backup"></a>Use the command below, replacing each [placeholder](#before-you-start), to create a data container from the Rancher container that you just stopped.
1. <a id="backup"></a>Use the command below, replacing each placeholder, to create a data container from the Rancher container that you just stopped.
```
docker create --volumes-from <RANCHER_CONTAINER_NAME> --name rancher-data-<DATE> rancher/rancher:<RANCHER_CONTAINER_TAG>
```
1. <a id="tarball"></a>From the data container that you just created (`rancher-data-<DATE>`), create a backup tarball (`rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`). Use the following command, replacing each [placeholder](#before-you-start).
1. <a id="tarball"></a>From the data container that you just created (`rancher-data-<DATE>`), create a backup tarball (`rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`). Use the following command, replacing each placeholder:
```
docker run --volumes-from rancher-data-<DATE> -v $PWD:/backup:z busybox tar pzcvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz /var/lib/rancher
```
@@ -67,7 +67,7 @@ This procedure creates a backup that you can restore if Rancher encounters a dis
1. Move your backup tarball to a safe location external to your Rancher Server. Then delete the `rancher-data-<DATE>` container from your Rancher Server.
1. Restart Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the name of your [Rancher container](#before-you-start).
1. Restart Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the name of your Rancher container:
```
docker start <RANCHER_CONTAINER_NAME>
```
@@ -42,7 +42,7 @@ Using a [backup]({{<baseurl>}}/rancher/v2.x/en/backups/backups/single-node-backu
1. Using a remote Terminal connection, log into the node running your Rancher Server.
1. Stop the container currently running Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the [name of your Rancher container](#before-you-start).
1. Stop the container currently running Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the name of your Rancher container:
```
docker stop <RANCHER_CONTAINER_NAME>
```
@@ -51,7 +51,7 @@ Using a [backup]({{<baseurl>}}/rancher/v2.x/en/backups/backups/single-node-backu
If you followed the naming convention we suggested in [Creating Backups—Docker Installs]({{<baseurl>}}/rancher/v2.x/en/backups/backups/single-node-backups/), it will have a name similar to `rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`.
1. Enter the following command to delete your current state data and replace it with your backup data, replacing the [placeholders](#before-you-start). Don't forget to close the quotes.
1. Enter the following command to delete your current state data and replace it with your backup data, replacing the placeholders. Don't forget to close the quotes.
>**Warning!** This command deletes all current state data from your Rancher Server container. Any changes saved after your backup tarball was created will be lost.
@@ -63,7 +63,7 @@ Using a [backup]({{<baseurl>}}/rancher/v2.x/en/backups/backups/single-node-backu
**Step Result:** A series of commands should run.
1. Restart your Rancher Server container, replacing the [placeholder](#before-you-start). It will restart using your backup data.
1. Restart your Rancher Server container, replacing the placeholder. It will restart using your backup data.
```
docker start <RANCHER_CONTAINER_NAME>
```
@@ -7,8 +7,8 @@ aliases:
There are two recommended deployment strategies. Each one has its own pros and cons. Read more about which one would fit best for your use case:
* [Hub and Spoke](#hub-and-spoke)
* [Regional](#regional)
* [Hub and Spoke](#hub-and-spoke-strategy)
* [Regional](#regional-strategy)
# Hub & Spoke Strategy
---
@@ -5,8 +5,8 @@ weight: 100
There are two recommended deployment strategies for a Rancher server that manages downstream Kubernetes clusters. Each one has its own pros and cons. Read more about which one would fit best for your use case:
* [Hub and Spoke](#hub-and-spoke)
* [Regional](#regional)
* [Hub and Spoke](#hub-and-spoke-strategy)
* [Regional](#regional-strategy)
# Hub & Spoke Strategy
---
@@ -153,7 +153,7 @@ For more information about alerts, refer to [this page.]({{<baseurl>}}/rancher/v
1. From the cluster view in Rancher, click **Tools > CIS Scans.**
1. Go to the report that you want to download. Click **&#8942; > Download.**
**Result:** The report is downloaded in CSV format. For more information on each columns, refer to the [section about the generated report.](#about-the-generated-report)
**Result:** The report is downloaded in CSV format.
# List of Skipped and Not Applicable Tests
@@ -59,15 +59,15 @@ The following commands are available for use in Rancher CLI.
| Command | Result |
|---|---|
| `apps, [app]` | Performs operations on catalog applications (i.e. individual [Helm charts](https://docs.helm.sh/developing_charts/) or [Rancher charts]({{<baseurl>}}/rancher/v2.x/en/helm-charts/legacy-catalogs/adding-catalogs/#chart-directory-structure). |
| `apps, [app]` | Performs operations on catalog applications (i.e., individual [Helm charts](https://docs.helm.sh/developing_charts/) or Rancher charts). |
| `catalog` | Performs operations on [catalogs]({{<baseurl>}}/rancher/v2.x/en/catalog/). |
| `clusters, [cluster]` | Performs operations on your [clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/). |
| `context` | Switches between Rancher [projects]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). For an example, see [Project Selection](#project-selection). |
| `inspect [OPTIONS] [RESOURCEID RESOURCENAME]` | Displays details about [Kubernetes resources](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#resource-types) or Rancher resources (i.e.: [projects]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/) and [workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/)). Specify resources by name or ID. |
| `kubectl` |Runs [kubectl commands](https://kubernetes.io/docs/reference/kubectl/overview/#operations). |
| `login, [l]` | Logs into a Rancher Server. For an example, see [CLI Authentication](#cli-authentication). |
| `namespaces, [namespace]` |Performs operations on [namespaces]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces). |
| `nodes, [node]` |Performs operations on [nodes]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/#kubernetes). |
| `namespaces, [namespace]` |Performs operations on namespaces. |
| `nodes, [node]` |Performs operations on nodes. |
| `projects, [project]` | Performs operations on [projects]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). |
| `ps` | Displays [workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads) in a project. |
| `settings, [setting]` | Shows the current settings for your Rancher Server. |
@@ -6,7 +6,7 @@ weight: 2055
This section describes how to disconnect a node from a Rancher-launched Kubernetes cluster and remove all of the Kubernetes components from the node. This process allows you to use the node for other purposes.
When you use Rancher to [launch nodes for a cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher), resources (containers/virtual network interfaces) and configuration items (certificates/configuration files) are created.
When you use Rancher to install Kubernetes on new nodes in an infrastructure provider, resources (containers/virtual network interfaces) and configuration items (certificates/configuration files) are created.
When you remove nodes from your Rancher-launched Kubernetes cluster (provided they are in the `Active` state), those resources are automatically cleaned up, and the only action needed is to restart the node. When a node has become unreachable and the automatic cleanup process cannot be used, this section describes the steps that must be executed before the node can be added to a cluster again.
@@ -59,7 +59,7 @@ After the imported cluster is detached from Rancher, the cluster's workloads wil
{{% tab "By UI / API" %}}
>**Warning:** This process will remove data from your cluster. Make sure you have created a backup of files you want to keep before executing the command, as data will be lost.
After you initiate the removal of an [imported cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/#import-existing-cluster) using the Rancher UI (or API), the following events occur.
After you initiate the removal of an imported cluster using the Rancher UI (or API), the following events occur.
1. Rancher creates a `serviceAccount` that it uses to remove the Rancher components from the cluster. This account is assigned the [clusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) and [clusterRoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) permissions, which are required to remove the Rancher components.
@@ -20,7 +20,7 @@ Rancher provides an intuitive user interface for interacting with your clusters.
You can use the Kubernetes command-line tool, [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), to manage your clusters. You have two options for using kubectl:
- **Rancher kubectl shell:** Interact with your clusters by launching a kubectl shell available in the Rancher UI. This option requires no configuration actions on your part. For more information, see [Accessing Clusters with kubectl Shell]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-shell).
- **Rancher kubectl shell:** Interact with your clusters by launching a kubectl shell available in the Rancher UI. This option requires no configuration actions on your part. For more information, see [Accessing Clusters with kubectl Shell]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/kubectl/).
- **Terminal remote connection:** You can also interact with your clusters by installing [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your local desktop and then copying the cluster's kubeconfig file to your local `~/.kube/config` directory. For more information, see [Accessing Clusters with kubectl and a kubeconfig File]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-and-a-kubeconfig-file).
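The kubeconfig option above can be sketched in shell. This is a minimal sketch, assuming a hypothetical downloaded kubeconfig file named `mycluster.yaml`; the actual file is downloaded from the cluster view in the Rancher UI:

```shell
# Place the downloaded kubeconfig where kubectl expects it.
# mycluster.yaml is a hypothetical stand-in for the file Rancher provides.
mkdir -p "$HOME/.kube"
touch mycluster.yaml                    # stand-in for the downloaded kubeconfig
cp mycluster.yaml "$HOME/.kube/config"
export KUBECONFIG="$HOME/.kube/config"
echo "$KUBECONFIG"
# With a real kubeconfig in place, kubectl now targets the cluster:
# kubectl get nodes
```

Alternatively, you can keep the file elsewhere and point `KUBECONFIG` at its path directly, which is useful when managing several clusters.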
### Rancher CLI
@@ -3,7 +3,7 @@ title: Nodes and Node Pools
weight: 2030
---
After you launch a Kubernetes cluster in Rancher, you can manage individual nodes from the cluster's **Node** tab. Depending on the [option used]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) to provision the cluster, there are different node options available.
After you launch a Kubernetes cluster in Rancher, you can manage individual nodes from the cluster's **Node** tab. Depending on the [option used]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/) to provision the cluster, there are different node options available.
> If you want to manage the _cluster_ and not individual nodes, see [Editing Clusters]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/editing-clusters).
@@ -61,7 +61,7 @@ These steps describe how to set up a PVC in the namespace where your stateful wo
1. Enter a **Name** for the volume claim.
1. Select the [Namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) of the workload that you want to add the persistent storage to.
1. Select the namespace of the workload that you want to add the persistent storage to.
1. In the section called **Use an existing persistent volume,** go to the **Persistent Volume** drop-down and choose the persistent volume that you created.
@@ -70,7 +70,7 @@ These steps describe how to set up a PVC in the namespace where your stateful wo
1. Enter a **Name** for the volume claim.
1. Select the [Namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) of the volume claim.
1. Select the namespace of the volume claim.
1. In the **Source** field, click **Use a Storage Class to provision a new persistent volume.**
@@ -9,7 +9,7 @@ There are three roles that can be assigned to nodes: `etcd`, `controlplane` and
When designing your cluster(s), you have two options:
* Use dedicated nodes for each role. This ensures resource availability for the components needed for the specified role. It also strictly isolates network traffic between each of the roles according to the [port requirements]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/node-requirements/#networking-requirements/).
* Assign the `etcd` and `controlplane` roles to the same nodes. These nodes must meet the hardware requirements for both roles.
In either case, the `worker` role should not be used or added to nodes with the `etcd` or `controlplane` role.
@@ -64,8 +64,6 @@ A node template defines the configuration of a node, like what operating system
The benefit of using a node pool is that if a node is destroyed or deleted, you can increase the number of live nodes to compensate for the node that was lost. The node pool helps you ensure that the node count in the pool remains as expected.
Each node pool is assigned with a [node component]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/#kubernetes-cluster-node-components) to specify how these nodes should be configured for the Kubernetes cluster.
Each node pool must have one or more nodes roles assigned.
Each node role (i.e. etcd, control plane, and worker) should be assigned to a distinct node pool. Although it is possible to assign multiple node roles to a node pool, this should not be done for production clusters.
@@ -49,7 +49,7 @@ To begin provisioning a custom cluster with Windows support, prepare your host s
- VMs from virtualization clusters
- Bare-metal servers
The table below lists the [Kubernetes roles]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/#kubernetes-cluster-node-components) you'll assign to each host, although you won't enable these roles until further along in the configuration process—we're just informing you of each node's purpose. The first node, a Linux host, is primarily responsible for managing the Kubernetes control plane, although, in this use case, we're installing all three roles on this node. Node 2 is also a Linux worker, which is responsible for Ingress support. Finally, the third node is your Windows worker, which will run your Windows applications.
The table below lists the Kubernetes node roles you'll assign to each host, although you won't enable these roles until further along in the configuration process—we're just informing you of each node's purpose. The first node, a Linux host, is primarily responsible for managing the Kubernetes control plane, although, in this use case, we're installing all three roles on this node. Node 2 is also a Linux worker, which is responsible for Ingress support. Finally, the third node is your Windows worker, which will run your Windows applications.
Node | Operating System | Future Cluster Role(s)
--------|------------------|------
@@ -54,7 +54,7 @@ For more information on private Git/Helm catalogs, refer to the [custom catalog
>**Prerequisites:** In order to manage the [built-in catalogs]({{<baseurl>}}/rancher/v2.x/en/catalog/built-in/) or manage global catalogs, you need _one_ of the following permissions:
>
>- [Administrator Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/)
>- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Catalogs]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned.
>- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Catalogs]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) role assigned.
1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar.
2. Click **Add Catalog**.
@@ -11,7 +11,7 @@ You can fill your custom catalogs with either Helm Charts or Rancher Charts, alt
> For a complete walkthrough of developing charts, see the upstream Helm chart [developer reference](https://helm.sh/docs/chart_template_guide/).
1. Within the GitHub repo that you're using as your custom catalog, create a directory structure that mirrors the structure listed in [Chart Directory Structure]({{<baseurl>}}/rancher/v2.x/en/helm-charts/legacy-catalogs/adding-catalogs/#chart-directory-structure).
1. Within the GitHub repo that you're using as your custom catalog, create a directory structure that mirrors the structure listed in the [Chart Directory Structure]({{<baseurl>}}/rancher/v2.x/en/helm-charts/legacy-catalogs/creating-apps/#chart-directory-structure).
Rancher requires this directory structure, although `app-readme.md` and `questions.yml` are optional.
@@ -93,7 +93,7 @@ Before installing Rancher, make sure that your nodes fulfill all of the [install
# Architecture Tip
For the best performance and greater security, we recommend a separate, dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads.
For the best performance and greater security, we recommend a separate, dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/) for running your workloads.
For more architecture recommendations, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/overview/architecture-recommendations)
@@ -173,9 +173,9 @@ Reset the cluster nodes' network policies to restore connectivity.
{{% /tab %}}
{{% tab "Rancher Launched Kubernetes" %}}
<br/>
If you can access Rancher, but one or more of the clusters that you launched using Rancher has no networking, you can repair them by moving the
If you can access Rancher, but one or more of the clusters that you launched using Rancher has no networking, you can repair them in either of the following ways:
- From the cluster's [embedded kubectl shell]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-shell).
- Using the cluster's [embedded kubectl shell]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/kubectl/).
- By [downloading the cluster kubeconfig file and running it]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-and-a-kubeconfig-file) from your workstation.
```
@@ -44,7 +44,7 @@ If you have issues upgrading Rancher, roll it back to its latest known healthy s
1. Using a remote Terminal connection, log into the node running your Rancher Server.
1. Pull the version of Rancher that you were running prior to upgrade. Replace the `<PRIOR_RANCHER_VERSION>` with [that version](#before-you-start).
1. Pull the version of Rancher that you were running prior to upgrade. Replace the `<PRIOR_RANCHER_VERSION>` with that version.
For example, if you were running Rancher v2.0.5 before upgrade, pull v2.0.5.
@@ -63,7 +63,7 @@ If you have issues upgrading Rancher, roll it back to its latest known healthy s
If you followed the naming convention we suggested in [Docker Upgrade]({{<baseurl>}}/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/), it will have a name similar to (`rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`).
1. Run the following command to replace the data in the `rancher-data` container with the data in the backup tarball, replacing the [placeholder](#before-you-start). Don't forget to close the quotes.
1. Run the following command to replace the data in the `rancher-data` container with the data in the backup tarball, replacing the placeholder. Don't forget to close the quotes.
```
docker run --volumes-from rancher-data \
@@ -71,7 +71,7 @@ If you have issues upgrading Rancher, roll it back to its latest known healthy s
&& tar zxvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz"
```
1. Start a new Rancher Server container with the `<PRIOR_RANCHER_VERSION>` tag [placeholder](#before-you-start) pointing to the data container.
1. Start a new Rancher Server container with the `<PRIOR_RANCHER_VERSION>` tag placeholder pointing to the data container.
```
docker run -d --volumes-from rancher-data \
--restart=unless-stopped \
@@ -17,7 +17,7 @@ This procedure walks you through setting up a 3-node cluster with Rancher Kubern
> **Important:** The Rancher management server can only be run on an RKE-managed Kubernetes cluster. Use of Rancher on hosted Kubernetes or other providers is not supported.
> **Important:** For the best performance, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads.
> **Important:** For the best performance, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/) for running your workloads.
## Recommended Architecture
@@ -0,0 +1,20 @@
---
title: Installing Docker
weight: 1
aliases:
- /rancher/v2.x/en/installation/requirements/installing-docker
---
Docker must be installed on any node that runs the Rancher server.
There are a couple of options for installing Docker. One option is to refer to the [official Docker documentation](https://docs.docker.com/install/) about how to install Docker on Linux. The steps will vary based on the Linux distribution.
Another option is to use one of Rancher's Docker installation scripts, which are available for most recent versions of Docker.
For example, this command could be used to install Docker 19.03 on Ubuntu:
```
curl https://releases.rancher.com/install-docker/19.03.sh | sh
```
Rancher has installation scripts for every version of upstream Docker that Kubernetes supports. To find out whether a script is available for installing a certain Docker version, refer to this [GitHub repository,](https://github.com/rancher/install-docker) which contains all of Rancher's Docker installation scripts.
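For example, the script URL follows a predictable pattern based on the Docker version. The sketch below builds that URL for a chosen version; the version value here is only an example, and the actual install requires network access and root privileges:

```shell
# Build the install-script URL for a desired Docker version
# (URL pattern taken from the example command above).
DOCKER_VERSION="19.03"
SCRIPT_URL="https://releases.rancher.com/install-docker/${DOCKER_VERSION}.sh"
echo "${SCRIPT_URL}"
# To install, pipe the script to sh (requires network and root privileges):
# curl "${SCRIPT_URL}" | sh
# Afterward, confirm the installed version:
# docker version --format '{{.Server.Version}}'
```

Check the GitHub repository linked above to confirm a script exists for the version you choose before running it.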
@@ -114,11 +114,11 @@ By default, each Rancher-provisioned cluster has one NGINX ingress controller al
![In an Istio-enabled cluster, you can have two ingresses: the default Nginx ingress, and the default Istio controller.]({{<baseurl>}}/img/rancher/istio-ingress.svg)
Additional Istio Ingress gateways can be enabled via the [overlay file]({{<baseurl>}}/rancher/v2.x/en/istio/setup/enable-istio-in-cluster/#overlay-file).
Additional Istio Ingress gateways can be enabled via the [overlay file]({{<baseurl>}}/rancher/v2.x/en/istio/v2.5/configuration-reference/#overlay-file).
### Egress Support
By default the Egress gateway is disabled, but can be enabled on install or upgrade through the values.yaml or via the [overlay file]({{<baseurl>}}/rancher/v2.x/en/istio/setup/enable-istio-in-cluster/#overlay-file).
By default the Egress gateway is disabled, but can be enabled on install or upgrade through the values.yaml or via the [overlay file]({{<baseurl>}}/rancher/v2.x/en/istio/v2.5/configuration-reference/#overlay-file).
# Additional Steps for Installing Istio on an RKE2 Cluster
@@ -13,7 +13,7 @@ weight: 3
### Egress Support
By default the Egress gateway is disabled, but can be enabled on install or upgrade through the values.yaml or via the [overlay file]({{<baseurl>}}/rancher/v2.x/en/istio/setup/enable-istio-in-cluster/#overlay-file).
By default the Egress gateway is disabled, but can be enabled on install or upgrade through the values.yaml or via the [overlay file](#overlay-file).
### Enabling Automatic Sidecar Injection
@@ -25,7 +25,7 @@ Add SSL certificates to either projects, namespaces, or both. A project scoped c
- **Available to all namespaces in this project:** The certificate is available for any deployment in any namespaces in the project.
- **Available to a single namespace:** The certificate is only available for the deployments in one [namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces). If you choose this option, select a **Namespace** from the drop-down list or click **Add to a new namespace** to add the certificate to a namespace you create on the fly.
- **Available to a single namespace:** The certificate is only available for the deployments in one namespace. If you choose this option, select a **Namespace** from the drop-down list or click **Add to a new namespace** to add the certificate to a namespace you create on the fly.
1. From **Private Key**, either copy and paste your certificate's private key into the text box (include the header and footer), or click **Read from a file** to browse to the private key on your file system. If possible, we recommend using **Read from a file** to reduce likelihood of error.
@@ -113,7 +113,7 @@ I1002 12:55:32.925630 1 heapster.go:101] Starting Heapster API server...
I1002 12:55:32.928597 1 serve.go:85] Serving securely on 0.0.0.0:443
```
If you have created your cluster in Rancher v2.0.6 or before, please refer to [Manual installation](#manual-installation)
If you have created your cluster in Rancher v2.0.6 or before, please refer to the manual installation.
##### Configuring HPA to Scale Using Custom Metrics with Prometheus
@@ -12,7 +12,7 @@ Ingress can be added for workloads to provide load balancing, SSL termination an
1. From the **Global** view, open the project that you want to add ingress to.
1. Click **Resources** in the main navigation bar. Click the **Load Balancing** tab. (In versions prior to v2.3.0, just click the **Load Balancing** tab.) Then click **Add Ingress**.
1. Enter a **Name** for the ingress.
1. Select an existing **Namespace** from the drop-down list. Alternatively, you can create a new [namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) on the fly by clicking **Add to a new namespace**.
1. Select an existing **Namespace** from the drop-down list. Alternatively, you can create a new namespace on the fly by clicking **Add to a new namespace**.
1. Create ingress forwarding **Rules**. For help configuring the rules, refer to [this section.](#ingress-rule-configuration) If any of your ingress rules handle requests for encrypted ports, add a certificate to encrypt/decrypt communications.
1. **Optional:** click **Add Rule** to create additional ingress rules. For example, after you create ingress rules to direct requests for your hostname, you'll likely want to create a default backend to handle 404s.
@@ -32,7 +32,7 @@ Currently, deployments pull the private registry credentials automatically only
>**Note:** Kubernetes classifies secrets, certificates, and registries all as [secrets](https://kubernetes.io/docs/concepts/configuration/secret/), and no two secrets in a project or namespace can have duplicate names. Therefore, to prevent conflicts, your registry must have a unique name among all secrets within your workspace.
1. Select a **Scope** for the registry. You can either make the registry available for the entire project or a single [namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces).
1. Select a **Scope** for the registry. You can either make the registry available for the entire project or a single namespace.
1. Select the website that hosts your private registry. Then enter credentials that authenticate with the registry. For example, if you use DockerHub, provide your DockerHub username and password.
@@ -26,7 +26,7 @@ When creating a secret, you can make it available for any deployment within a pr
>**Note:** Kubernetes classifies secrets, certificates, and registries all as [secrets](https://kubernetes.io/docs/concepts/configuration/secret/), and no two secrets in a project or namespace can have duplicate names. Therefore, to prevent conflicts, your secret must have a unique name among all secrets within your workspace.
4. Select a **Scope** for the secret. You can either make the registry available for the entire project or a single [namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces).
4. Select a **Scope** for the secret. You can either make the registry available for the entire project or a single namespace.
5. From **Secret Values**, click **Add Secret Value** to add a key value pair. Add as many values as you need.
@@ -9,7 +9,7 @@ aliases:
For every workload created, a complementing Service Discovery entry is created. This Service Discovery entry enables DNS resolution for the workload's pods using the following naming convention:
`<workload>.<namespace>.svc.cluster.local`.
However, you also have the option of creating additional Service Discovery records. You can use these additional records so that a given [namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) resolves with one or more external IP addresses, an external hostname, an alias to another DNS record, other workloads, or a set of pods that match a selector that you create.
However, you also have the option of creating additional Service Discovery records. You can use these additional records so that a given namespace resolves with one or more external IP addresses, an external hostname, an alias to another DNS record, other workloads, or a set of pods that match a selector that you create.
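The naming convention above can be sketched in shell; the workload and namespace names here are hypothetical:

```shell
# Construct the cluster-internal DNS name for a workload's pods,
# following the <workload>.<namespace>.svc.cluster.local convention.
WORKLOAD="nginx"        # hypothetical workload name
NAMESPACE="default"     # hypothetical namespace
FQDN="${WORKLOAD}.${NAMESPACE}.svc.cluster.local"
echo "${FQDN}"
# From inside a pod in the cluster, this name would resolve, e.g.:
# nslookup "${FQDN}"
```

Note that this name only resolves from within the cluster; external clients need an ingress or external DNS record instead.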
1. From the **Global** view, open the project that you want to add a DNS record to.
@@ -19,7 +19,7 @@ Deploy a workload to run an application in one or more containers.
1. From the **Docker Image** field, enter the name of the Docker image that you want to deploy to the project, optionally prefacing it with the registry host (e.g. `quay.io`, `registry.gitlab.com`, etc.). During deployment, Rancher pulls this image from the specified public or private registry. If no registry host is provided, Rancher will pull the image from [Docker Hub](https://hub.docker.com/explore/). Enter the name exactly as it appears in the registry server, including any required path, and optionally including the desired tag (e.g. `registry.gitlab.com/user/path/image:tag`). If no tag is provided, the `latest` tag will be automatically used.
1. Either select an existing [namespace]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces), or click **Add to a new namespace** and enter a new namespace.
1. Either select an existing namespace, or click **Add to a new namespace** and enter a new namespace.
1. Click **Add Port** to enter a port mapping, which enables access to the application inside and outside of the cluster. For more information, see [Services]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/#services).
@@ -9,7 +9,7 @@ aliases:
Rancher can integrate with a variety of popular logging services and tools that exist outside of your Kubernetes clusters.
For background information about how logging integrations work, refer to the [cluster administration section.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/#how-logging-integrations-work)
For background information about how logging integrations work, refer to the [cluster administration section.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/v2.0.x-v2.4.x/cluster-logging/#how-logging-integrations-work)
Rancher supports the following services:
@@ -41,8 +41,8 @@ For details about what triggers the predefined alerts, refer to the [documentati
Some examples of alert events are:
- A Kubernetes [master component]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/#kubernetes-cluster-node-components) entering an unhealthy state.
- A node or [workload]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/) error occurring.
- A Kubernetes master component entering an unhealthy state.
- A node or workload error occurring.
- A scheduled deployment taking place as planned.
- A node's hardware resources becoming overstressed.
@@ -50,7 +50,7 @@ Some examples of alert events are:
When you edit an alert rule, you will have the opportunity to configure the alert to be triggered based on a Prometheus expression. For examples of expressions, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/expression/)
Monitoring must be [enabled]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/#enabling-cluster-monitoring) before you can trigger alerts with custom Prometheus queries or expressions.
Monitoring must be [enabled]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/) before you can trigger alerts with custom Prometheus queries or expressions.
### Urgency Levels
@@ -81,7 +81,7 @@ After you set up cluster alerts, you can manage each alert object. To manage ale
As a [cluster owner]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to send you alerts for cluster events.
>**Prerequisite:** Before you can receive cluster alerts, you must [add a notifier]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/#adding-notifiers).
>**Prerequisite:** Before you can receive cluster alerts, you must [add a notifier]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/).
1. From the **Global** view, navigate to the cluster that you want to configure cluster alerts for. Select **Tools > Alerts**. Then click **Add Alert Group**.
1. Enter a **Name** for the alert group that describes its purpose. You can use groups to organize alert rules that serve different purposes.
@@ -47,7 +47,7 @@ Kubernetes events are objects that provide insight into what is happening inside
# Alerts for Nodes
Alerts can be triggered based on node metrics. Each computing resource in a Kubernetes cluster is called a node. [Nodes]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/#kubernetes-cluster-node-components) can be either bare-metal servers or virtual machines.
Alerts can be triggered based on node metrics. Each computing resource in a Kubernetes cluster is called a node. Nodes can be either bare-metal servers or virtual machines.
| Alert | Explanation |
|-------|-------------|
@@ -56,4 +56,4 @@ Alerts can be triggered based on node metrics. Each computing resource in a Kube
| Node disk is running full within 24 hours | A critical alert is triggered if the disk space on the node is expected to run out in the next 24 hours based on the disk growth over the last 6 hours. |
# Project-level Alerts
When you enable monitoring for the project, some project-level alerts are provided. For details, refer to the [section on project-level alerts.]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/alerts/#default-project-level-alerts)
When you enable monitoring for the project, some project-level alerts are provided. For details, refer to the [section on project-level alerts.]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/alerts/)
@@ -210,7 +210,7 @@ Input or select an **Expression**. The dropdown shows the original metrics from
- [**Container**](https://github.com/google/cadvisor)
- [**Kubernetes Resources**](https://github.com/kubernetes/kube-state-metrics)
- [**Customize**]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/#project-metrics)
- **Customize**
- [**Project Level Grafana**](http://docs.grafana.org/administration/metrics/)
- **Project Level Prometheus**
@@ -44,9 +44,9 @@ Using Prometheus, you can monitor Rancher at both the cluster level and [project
- Cluster monitoring allows you to view the health of your Kubernetes cluster. Prometheus collects metrics from the cluster components below, which you can view in graphs and charts.
- [Kubernetes control plane]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/#kubernetes-components-metrics)
- [etcd database]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/#etcd-metrics)
- [All nodes (including workers)]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/#cluster-metrics)
- Kubernetes control plane
- etcd database
- All nodes (including workers)
- [Project monitoring]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/) allows you to view the state of pods running in a given project. Prometheus collects metrics from the project's deployed HTTP and TCP/UDP workloads.
@@ -7,7 +7,7 @@ aliases:
- /rancher/v2.x/en/cluster-admin/tools/monitoring/custom-metrics/
---
After you've enabled [cluster level monitoring]({{< baseurl >}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#enabling-cluster-monitoring), You can view the metrics data from Rancher. You can also deploy the Prometheus custom metrics adapter then you can use the HPA with metrics stored in cluster monitoring.
After you've enabled [cluster level monitoring]({{< baseurl >}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/), you can view the metrics data from Rancher. You can also deploy the Prometheus custom metrics adapter, which lets you use the HPA with metrics stored in cluster monitoring.
## Deploy Prometheus Custom Metrics Adapter
@@ -9,7 +9,7 @@ aliases:
The PromQL expressions in this doc can be used to configure [alerts.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/alerts/)
> Before expression can be used in alerts, monitoring must be enabled. For more information, refer to the documentation on enabling monitoring [at the cluster level]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#enabling-cluster-monitoring) or [at the project level.]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring)
> Before expressions can be used in alerts, monitoring must be enabled. For more information, refer to the documentation on enabling monitoring [at the cluster level]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/) or [at the project level.]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/)
For more information about querying Prometheus, refer to the official [Prometheus documentation.](https://prometheus.io/docs/prometheus/latest/querying/basics/)
@@ -24,9 +24,9 @@ Using Prometheus, you can monitor Rancher at both the [cluster level]({{<baseurl
- [Cluster monitoring]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/) allows you to view the health of your Kubernetes cluster. Prometheus collects metrics from the cluster components below, which you can view in graphs and charts.
- [Kubernetes control plane]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/#kubernetes-components-metrics)
- [etcd database]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/#etcd-metrics)
- [All nodes (including workers)]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/#cluster-metrics)
- Kubernetes control plane
- etcd database
- All nodes (including workers)
- Project monitoring allows you to view the state of pods running in a given project. Prometheus collects metrics from the project's deployed HTTP and TCP/UDP workloads.
@@ -59,7 +59,6 @@ Grafana | 100m | 100Mi | 200m | 200Mi | No
> The default username and password for the Grafana instance will be `admin/admin`. However, Grafana dashboards are served via the Rancher authentication proxy, so only users who are currently authenticated into the Rancher server have access to the Grafana dashboard.
### Project Metrics
[Workload metrics]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/#workload-metrics) are available for the project if monitoring is enabled at the [cluster level]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/) and at the [project level.](#enabling-project-monitoring)
You can monitor custom metrics from any [exporters.](https://prometheus.io/docs/instrumenting/exporters/) You can also expose some custom endpoints on deployments without needing to configure Prometheus for your project.
@@ -9,7 +9,7 @@ aliases:
_Available as of v2.2.0_
While configuring monitoring at either the [cluster level]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#enabling-cluster-monitoring) or [project level]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), there are multiple options that can be configured.
While configuring monitoring at either the [cluster level]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/) or [project level]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/), there are multiple options that can be configured.
- [Basic Configuration](#basic-configuration)
- [Advanced Options](#advanced-options)
@@ -9,11 +9,11 @@ aliases:
_Available as of v2.2.0_
-After you've enabled monitoring at either the [cluster level]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#enabling-cluster-monitoring) or [project level]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), you will want to be start viewing the data being collected. There are multiple ways to view this data.
+After you've enabled monitoring at either the [cluster level]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/) or [project level]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/), you will want to start viewing the data being collected. There are multiple ways to view this data.
## Rancher Dashboard
->**Note:** This is only available if you've enabled monitoring at the [cluster level]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#enabling-cluster-monitoring). Project specific analytics must be viewed using the project's Grafana instance.
+>**Note:** This is only available if you've enabled monitoring at the [cluster level]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/). Project-specific analytics must be viewed using the project's Grafana instance.
Rancher's dashboards are available at multiple locations:
@@ -37,7 +37,7 @@ When analyzing these metrics, don't be concerned about any single standalone met
## Grafana
-If you've enabled monitoring at either the [cluster level]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#enabling-cluster-monitoring) or [project level]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), Rancher automatically creates a link to Grafana instance. Use this link to view monitoring data.
+If you've enabled monitoring at either the [cluster level]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/) or [project level]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/), Rancher automatically creates a link to the Grafana instance. Use this link to view monitoring data.
Grafana allows you to query, visualize, alert, and ultimately, understand your cluster and workload data. For more information on Grafana and its capabilities, visit the [Grafana website](https://grafana.com/grafana).
@@ -66,7 +66,7 @@ We recommend the following configurations for the load balancer and Ingress cont
It is strongly recommended to install Rancher on a Kubernetes cluster on hosted infrastructure such as Amazon's EC2 or Google Compute Engine.
-For the best performance and greater security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads.
+For the best performance and greater security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/) for running your workloads.
It is not recommended to install Rancher on top of a managed Kubernetes service such as Amazon's EKS or Google Kubernetes Engine. These hosted Kubernetes solutions do not expose etcd to a degree that is manageable for Rancher, and their customizations can interfere with Rancher operations.
@@ -31,7 +31,7 @@ The majority of Rancher 2.x software runs on the Rancher Server. Rancher Server
The figure below illustrates the high-level architecture of Rancher 2.x. The figure depicts a Rancher Server installation that manages two downstream Kubernetes clusters: one created by RKE and another created by Amazon EKS (Elastic Kubernetes Service).
-For the best performance and security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads.
+For the best performance and security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/) for running your workloads.
The diagram below shows how users can manipulate both [Rancher-launched Kubernetes]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters and [hosted Kubernetes]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) clusters through Rancher's authentication proxy:
@@ -309,7 +309,7 @@ timeout: 30
# Notifications
-You can enable notifications to any [notifiers]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) based on the build status of a pipeline. Before enabling notifications, Rancher recommends [setting up notifiers]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/#adding-notifiers) so it will be easy to add recipients immediately.
+You can enable notifications to any [notifiers]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) based on the build status of a pipeline. Before enabling notifications, Rancher recommends [setting up notifiers]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/) so it will be easy to add recipients immediately.
### Configuring Notifications by UI
@@ -319,7 +319,7 @@ _Available as of v2.2.0_
1. Select the conditions for the notification. You can select to get a notification for the following statuses: `Failed`, `Success`, `Changed`. For example, if you want to receive notifications when an execution fails, select **Failed**.
-1. If you don't have any existing [notifiers]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/notifiers), Rancher will provide a warning that no notifiers are set up and provide a link to be able to go to the notifiers page. Follow the [instructions]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/#adding-notifiers) to add a notifier. If you already have notifiers, you can add them to the notification by clicking the **Add Recipient** button.
+1. If you don't have any existing [notifiers]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/notifiers), Rancher displays a warning that no notifiers are set up and provides a link to the notifiers page. Follow the [instructions]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/) to add a notifier. If you already have notifiers, you can add them to the notification by clicking the **Add Recipient** button.
> **Note:** Notifiers are configured at a cluster level and require a different level of permissions.
@@ -74,4 +74,4 @@ After enabling an example repository, run the pipeline to see how it works.
### What's Next?
-For detailed information about setting up your own pipeline for your repository, [configure a version control provider]({{<baseurl>}}/rancher/v2.x/en/project-admin/pipelines), [enable a repository](#configure-repositories) and finally [configure your pipeline]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/#pipeline-configuration-reference).
+For detailed information about setting up your own pipeline for your repository, [configure a version control provider]({{<baseurl>}}/rancher/v2.x/en/project-admin/pipelines), enable a repository and finally configure your pipeline.
@@ -5,7 +5,7 @@ aliases:
- /rancher/v2.x/en/k8s-in-rancher/pipelines/storage
---
-The internal [Docker registry](#how-pipelines-work) and the [Minio](#how-pipelines-work) workloads use ephemeral volumes by default. This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes.
+The pipelines' internal Docker registry and the Minio workloads use ephemeral volumes by default. This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes.
This section assumes that you understand how persistent storage works in Kubernetes. For more information, refer to the section on [how storage works.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/how-storage-works/)
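To make the switch from ephemeral storage concrete, a minimal sketch of a PersistentVolumeClaim that a Docker Registry or Minio workload could be pointed at. The claim name, namespace, and size are assumptions for illustration, not values from this document:

```yaml
# Hypothetical claim: name, namespace, and size are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-storage
  namespace: example-pipeline-ns
spec:
  accessModes:
    - ReadWriteOnce      # a single node mounts the build-image store
  resources:
    requests:
      storage: 10Gi
```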
@@ -65,4 +65,4 @@ Cluster admins and members may occasionally need to move a namespace to another
You can always override the namespace default limit to provide a specific namespace with access to more (or less) project resources.
-For more information, see how to [edit namespace resource quotas]({{<baseurl>}}/rancher/v2.x/en/project-admin//resource-quotas/override-namespace-default/#editing-namespace-resource-quotas).
+For more information, see how to [edit namespace resource quotas]({{<baseurl>}}/rancher/v2.x/en/project-admin//resource-quotas/override-namespace-default/).
@@ -5,12 +5,12 @@ weight: 2
Although the **Namespace Default Limit** propagates from the project to each namespace when created, in some cases, you may need to increase (or decrease) the quotas for a specific namespace. In this situation, you can override the default limits by editing the namespace.
-In the diagram below, the Rancher administrator has a resource quota in effect for their project. However, the administrator wants to override the namespace limits for `Namespace 3` so that it has more resources available. Therefore, the administrator [raises the namespace limits]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#editing-namespace-resource-quotas) for `Namespace 3` so that the namespace can access more resources.
+In the diagram below, the Rancher administrator has a resource quota in effect for their project. However, the administrator wants to override the namespace limits for `Namespace 3` so that it has more resources available. Therefore, the administrator [raises the namespace limits]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/) for `Namespace 3` so that the namespace can access more resources.
<sup>Namespace Default Limit Override</sup>
![Namespace Default Limit Override]({{<baseurl>}}/img/rancher/rancher-resource-quota-override.svg)
-How to: [Editing Namespace Resource Quotas]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#editing-namespace-resource-quotas)
+How to: [Editing Namespace Resource Quotas]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/)
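For orientation, an override like the one in the diagram corresponds, in Kubernetes terms, to a namespace-scoped `ResourceQuota` object that Rancher manages for you. The object name and limit values below are illustrative assumptions:

```yaml
# Sketch only: Rancher creates and manages this object; name and
# limits are assumptions chosen for illustration.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: override-quota
  namespace: namespace-3
spec:
  hard:
    limits.cpu: "2"      # raised above the project's namespace default
    limits.memory: 4Gi
```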
### Editing Namespace Resource Quotas
@@ -872,7 +872,7 @@ Upgrade the Rancher server installation using Helm, and configure the audit log
#### Reference
-- <https://rancher.com/docs/rancher/v2.x/en/installation/resources/chart-options/#advanced-options>
+- <https://rancher.com/docs/rancher/v2.x/en/installation/resources/chart-options/>
## 3.2 - Rancher Management Control Plane Authentication
@@ -915,7 +915,7 @@ Upgrade the Rancher server installation using Helm, and configure the audit log
#### Reference
-- <https://rancher.com/docs/rancher/v2.x/en/installation/resources/chart-options/#advanced-options>
+- <https://rancher.com/docs/rancher/v2.x/en/installation/resources/chart-options/>
## 3.2 - Rancher Management Control Plane Authentication
@@ -413,7 +413,7 @@ Verify that the permissions are `700` or more restrictive.
**Remediation**
-Follow the steps as documented in [1.4.12]({{<baseurl>}}/rancher/v2.x/en/security/hardening-2.3/#1-4-12-ensure-that-the-etcd-data-directory-ownership-is-set-to-etcd-etcd) remediation.
+Follow the steps as documented in [1.4.12](#1-4-12-ensure-that-the-etcd-data-directory-ownership-is-set-to-etcd-etcd) remediation.
### 1.4.12 - Ensure that the etcd data directory ownership is set to `etcd:etcd`
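A hedged sketch of what tightening the data directory looks like in practice. On a real node the directory would be your etcd data directory (commonly `/var/lib/etcd`, but verify your own layout) and you would also run `chown etcd:etcd` on it; a scratch directory is used here so the commands are safe to try anywhere:

```shell
# Scratch directory stands in for the real etcd data directory.
# On an actual node, additionally run: chown etcd:etcd "$DATA_DIR"
DATA_DIR="$(mktemp -d)"
chmod 700 "$DATA_DIR"                 # 700 or more restrictive
MODE="$(stat -c '%a' "$DATA_DIR")"    # read back the octal mode
echo "$MODE"
rmdir "$DATA_DIR"
```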
@@ -1023,7 +1023,7 @@ Upgrade the Rancher server installation using Helm, and configure the audit log
#### Reference
-- <https://rancher.com/docs/rancher/v2.x/en/installation/resources/chart-options/#advanced-options>
+- <https://rancher.com/docs/rancher/v2.x/en/installation/resources/chart-options/>
## 3.2 - Rancher Management Control Plane Authentication
@@ -151,7 +151,7 @@ Verify that the permissions are `700` or more restrictive.
**Remediation**
-Follow the steps as documented in [1.4.12]({{<baseurl>}}/rancher/v2.x/en/security/hardening-2.3.3/#1-4-12-ensure-that-the-etcd-data-directory-ownership-is-set-to-etcd-etcd) remediation.
+Follow the steps as documented in [1.4.12](#1-4-12-ensure-that-the-etcd-data-directory-ownership-is-set-to-etcd-etcd) remediation.
### 1.4.12 - Ensure that the etcd data directory ownership is set to `etcd:etcd`
@@ -763,7 +763,7 @@ Upgrade the Rancher server installation using Helm, and configure the audit log
#### Reference
-- <https://rancher.com/docs/rancher/v2.x/en/installation/resources/chart-options/#advanced-options>
+- <https://rancher.com/docs/rancher/v2.x/en/installation/resources/chart-options/>
## 3.2 - Rancher Management Control Plane Authentication
@@ -266,6 +266,6 @@ kubectl get pods --all-namespaces -o go-template='{{range .items}}{{if eq .statu
### Job does not complete
-If you have enabled Istio, and you are having issues with a Job you deployed not completing, you will need to add an annotation to your pod using [these steps.](../../cluster-admin/tools/istio/setup/enable-istio-in-namespace/#excluding-workloads-from-being-injected-with-the-istio-sidecar)
+If you have enabled Istio, and you are having issues with a Job you deployed not completing, you will need to add an annotation to your pod using [these steps.]({{<baseurl>}}/rancher/v2.x/en/istio/v2.3.x-v2.4.x/setup/enable-istio-in-namespace/#excluding-workloads-from-being-injected-with-the-istio-sidecar)
Since Istio Sidecars run indefinitely, a Job cannot be considered complete even after its task has completed. This is a temporary workaround and will disable Istio for any traffic to/from the annotated Pod. Keep in mind this may not allow you to continue to use a Job for integration testing, as the Job will not have access to the service mesh.
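The workaround above can be sketched as a Job whose pod template carries the standard Istio injection-exclusion annotation. The Job name, image, and command below are illustrative assumptions:

```yaml
# Sketch only: job name, image, and command are assumptions.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"  # skip Istio sidecar injection
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo done"]
```

Note that a Job's pod template is immutable once created, so the annotation must be present when the Job is first applied.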