Merge pull request #2750 from rancher/staging

Merge staging into master
This commit is contained in:
Catherine Luse
2020-10-05 21:02:31 -07:00
committed by GitHub
285 changed files with 8747 additions and 7455 deletions
@@ -40,7 +40,7 @@ Apply the Canal YAML.
Ensure the settings were applied by running the following command on the host:
```
cat /etc/cni/net.d/10-canal.conflist
cat /etc/cni/net.d/10-calico.conflist
```
You should see that IP forwarding is set to true.
@@ -61,7 +61,7 @@ Apply the Calico YAML.
Ensure the settings were applied by running the following command on the host:
```
cat /etc/cni/net.d/10-calico.conflist
cat /etc/cni/net.d/10-canal.conflist
```
You should see that IP forwarding is set to true.
@@ -0,0 +1,5 @@
---
title: Tutorials
weight: 10000
---
@@ -0,0 +1,118 @@
---
title: Setting up a High-availability K3s Kubernetes Cluster for Rancher
shortTitle: Set up K3s for Rancher
weight: 2
---
> This page is under construction.
This section describes how to install a Kubernetes cluster according to the [best practices for the Rancher server environment.]({{<baseurl>}}/rancher/v2.x/en/overview/architecture-recommendations/#environment-for-kubernetes-installations)
For systems without direct internet access, refer to the air gap installation instructions.
> **Single-node Installation Tip:**
> In a single-node Kubernetes cluster, the Rancher server does not have high availability, which is important for running Rancher in production. However, installing Rancher on a single-node cluster can be useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path.
>
> To set up a single-node K3s cluster, run the Rancher server installation command on just one node instead of two nodes.
>
> In both single-node setups, Rancher can be installed with Helm on the Kubernetes cluster in the same way that it would be installed on any other cluster.
# Prerequisites
These instructions assume you have set up two nodes, a load balancer, a DNS record, and an external MySQL database as described in [this section.](../infra-for-ha-with-external-db)
# Installing Kubernetes
### 1. Install Kubernetes and Set up the K3s Server
When running the command to start the K3s Kubernetes API server, you will pass in an option to use the external datastore that you set up earlier.
1. Connect to one of the Linux nodes that you have prepared to run the Rancher server.
1. On the Linux node, run this command to start the K3s server and connect it to the external datastore:
```
curl -sfL https://get.k3s.io | sh -s - server \
--datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name"
```
Note: The datastore endpoint can also be passed in using the environment variable `$K3S_DATASTORE_ENDPOINT`; see the example after these steps.
1. Repeat the same command on your second K3s server node.
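For example, a minimal sketch of the same installation that passes the datastore endpoint through the environment variable instead of the `--datastore-endpoint` flag (the connection string is the same placeholder as above):
```
curl -sfL https://get.k3s.io | \
  K3S_DATASTORE_ENDPOINT="mysql://username:password@tcp(hostname:3306)/database-name" \
  sh -s - server
```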
### 2. Confirm that K3s is Running
To confirm that K3s has been set up successfully, run the following command on either of the K3s server nodes:
```
sudo k3s kubectl get nodes
```
Then you should see two nodes with the master role:
```
ubuntu@ip-172-31-60-194:~$ sudo k3s kubectl get nodes
NAME               STATUS   ROLES    AGE    VERSION
ip-172-31-60-194   Ready    master   44m    v1.17.2+k3s1
ip-172-31-63-88    Ready    master   6m8s   v1.17.2+k3s1
```
Then test the health of the cluster pods:
```
sudo k3s kubectl get pods --all-namespaces
```
**Result:** You have successfully set up a K3s Kubernetes cluster.
### 3. Save and Start Using the kubeconfig File
When you installed K3s on each Rancher server node, a `kubeconfig` file was created on the node at `/etc/rancher/k3s/k3s.yaml`. This file contains credentials for full access to the cluster, and you should save this file in a secure location.
To use this `kubeconfig` file,
1. Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool.
2. Copy the file at `/etc/rancher/k3s/k3s.yaml` and save it to the directory `~/.kube/config` on your local machine.
3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your load balancer, referring to port 6443. (The Kubernetes API server will be reached at port 6443, while the Rancher server will be reached at ports 80 and 443.) Here is an example `k3s.yaml`:
```yml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [CERTIFICATE-DATA]
    server: [LOAD-BALANCER-DNS]:6443 # Edit this line
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: [PASSWORD]
    username: admin
```
**Result:** You can now use `kubectl` to manage your K3s cluster. If you have more than one kubeconfig file, you can specify which one you want to use by passing in the path to the file when using `kubectl`:
```
kubectl --kubeconfig ~/.kube/config/k3s.yaml get pods --all-namespaces
```
For more information about the `kubeconfig` file, refer to the [K3s documentation]({{<baseurl>}}/k3s/latest/en/cluster-access/) or the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) about organizing cluster access using `kubeconfig` files.
### 4. Check the Health of Your Cluster Pods
Now that you have set up the `kubeconfig` file, you can use `kubectl` to access the cluster from your local machine.
Check that all the required pods and containers are healthy before you continue:
```
ubuntu@ip-172-31-60-194:~$ sudo kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   metrics-server-6d684c7b5-bw59k            1/1     Running   0          8d
kube-system   local-path-provisioner-58fb86bdfd-fmkvd   1/1     Running   0          8d
kube-system   coredns-d798c9dd-ljjnf                    1/1     Running   0          8d
```
**Result:** You have confirmed that you can access the cluster with `kubectl` and the K3s cluster is running successfully. Now the Rancher management server can be installed on the cluster.
### [Next: Install Rancher]({{<baseurl>}}/rancher/v2.x/en/installation/k8s-install/helm-rancher/)
@@ -1,10 +1,10 @@
---
title: '1. Set up Infrastructure'
weight: 185
aliases:
- /rancher/v2.x/en/installation/ha/create-nodes-lb
title: 'Set up Infrastructure for a High Availability K3s Kubernetes Cluster'
weight: 1
---
> This page is under construction.
In this section, you will provision the underlying infrastructure for your Rancher management server.
The recommended infrastructure for the Rancher-only Kubernetes cluster differs depending on whether Rancher will be installed on a K3s Kubernetes cluster, an RKE Kubernetes cluster, or a single Docker container.
@@ -13,8 +13,6 @@ For more information about each installation option, refer to [this page.]({{<ba
> **Note:** These nodes must be in the same region. You may place these servers in separate availability zones (data centers).
{{% tabs %}}
{{% tab "K3s" %}}
To install the Rancher management server on a high-availability K3s cluster, we recommend setting up the following infrastructure:
- **Two Linux nodes,** typically virtual machines, in the infrastructure provider of your choice.
@@ -70,61 +68,4 @@ You will need to specify this hostname in a later step when you install Rancher,
For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)
{{% /tab %}}
{{% tab "RKE" %}}
To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure:
- **Three Linux nodes,** typically virtual machines, in an infrastructure provider such as Amazon's EC2, Google Compute Engine, or vSphere.
- **A load balancer** to direct front-end traffic to the three nodes.
- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
These nodes must be in the same region/data center. You may place these servers in separate availability zones.
### Why three nodes?
In an RKE cluster, Rancher server data is stored on etcd. This etcd database runs on all three nodes.
The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader because they have the majority of the total number of etcd nodes.
### 1. Set up Linux Nodes
Make sure that your nodes fulfill the general installation requirements for [OS, container runtime, hardware, and networking.]({{<baseurl>}}/rancher/v2.x/en/installation/requirements/)
For an example of one way to set up Linux nodes, refer to this [tutorial]({{<baseurl>}}/rancher/v2.x/en/installation/options/ec2-node/) for setting up nodes as instances in Amazon EC2.
### 2. Set up the Load Balancer
You will also need to set up a load balancer to direct traffic to the Rancher replicas on both nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.
When Kubernetes gets set up in a later step, the RKE tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the NGINX Ingress controller to listen for traffic destined for the Rancher hostname. The NGINX Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster.
For your implementation, consider if you want or need to use a Layer-4 or Layer-7 load balancer:
- **A layer-4 load balancer** is the simpler of the two choices, in which you are forwarding TCP traffic to your nodes. We recommend configuring your load balancer as a Layer 4 balancer, forwarding traffic to ports TCP/80 and TCP/443 to the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic to HTTPS and terminate SSL/TLS on port TCP/443. The Ingress controller will forward traffic to port TCP/80 to the Ingress pod in the Rancher deployment.
- **A layer-7 load balancer** is a bit more complicated but can offer features that you may want. For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also offers the capability for your load balancer to make decisions based on HTTP attributes such as cookies, etc. that a layer-4 load balancer is not able to concern itself with. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. For more information, refer to the [Rancher Helm chart options.]({{<baseurl>}}/rancher/v2.x/en/installation/options/chart-options/#external-tls-termination)
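As a rough sketch of what external TLS termination looks like at install time (assuming the `rancher-latest` Helm repository has already been added and `rancher.example.com` is the hostname from your DNS record):
```
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set tls=external
```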
For an example showing how to set up an NGINX load balancer, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/installation/options/nginx/)
For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/installation/options/nlb/)
> **Important:**
> Do not use this load balancer (i.e., the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications.
### 3. Set up the DNS Record
Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer.
Depending on your environment, this may be an A record pointing to the LB IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on.
You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one.
For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)
{{% /tab %}}
{{% /tabs %}}
### [Next: Set up a Kubernetes Cluster]({{<baseurl>}}/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/)
### [Next: Set up a Kubernetes Cluster]({{<baseurl>}}/rancher/v2.x/en/installation/resources/k8s-tutorials/ka-k3s/)
@@ -1,6 +1,6 @@
---
title: Authentication, Permissions and Global Configuration
weight: 1100
weight: 6
aliases:
- /rancher/v2.x/en/concepts/global-configuration/
- /rancher/v2.x/en/tasks/global-configuration/
@@ -17,6 +17,7 @@ You cannot update or delete the built-in Global Permissions.
This section covers the following topics:
- [Restricted Admin](#restricted-admin)
- [Global permission assignment](#global-permission-assignment)
- [Global permissions for new local users](#global-permissions-for-new-local-users)
- [Global permissions for users with external authentication](#global-permissions-for-users-with-external-authentication)
@@ -27,6 +28,49 @@ This section covers the following topics:
- [Configuring global permissions for groups](#configuring-global-permissions-for-groups)
- [Refreshing group memberships](#refreshing-group-memberships)
# Restricted Admin
_Available as of Rancher v2.5_
A new `restricted-admin` role was created in Rancher v2.5 in order to prevent privilege escalation from the local Rancher server Kubernetes cluster. This role has full administrator access to all downstream clusters managed by Rancher, but it does not have permission to alter the local Kubernetes cluster.
The `restricted-admin` can create other `restricted-admin` users with an equal level of access.
A new setting was added to Rancher to give the initial bootstrapped administrator the `restricted-admin` role. This applies to the first user created when the Rancher server is started for the first time. If this environment variable is set, no global administrator is created, and it is not possible to create a global administrator through Rancher.
To bootstrap Rancher with the `restricted-admin` as the initial user, the Rancher server should be started with the following environment variable:
```
CATTLE_RESTRICTED_DEFAULT_ADMIN=true
```
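For example, for a Docker (single-node) installation of Rancher, the variable could be passed to the container like this; the image tag is only illustrative:
```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e CATTLE_RESTRICTED_DEFAULT_ADMIN=true \
  --privileged \
  rancher/rancher:v2.5.0
```
For Helm-based installs, the same variable can typically be injected through the chart's `extraEnv` values.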
### List of `restricted-admin` Permissions
The `restricted-admin` permissions are as follows:
- Has full admin access to all downstream clusters managed by Rancher.
- Has very limited access to the local Kubernetes cluster. Can access Rancher custom resource definitions, but has no access to any Kubernetes native types.
- Can add other users and assign them to clusters outside of the local cluster.
- Can create other restricted admins.
- Cannot grant any permissions in the local cluster they don't currently have. (This is how Kubernetes normally operates)
### Upgrading from Rancher with a Hidden Local Cluster
Prior to Rancher v2.5, it was possible to run the Rancher server using this flag to hide the local cluster:
```
--add-local=false
```
You will need to drop this flag when upgrading to Rancher v2.5. Otherwise, Rancher will not start. The `restricted-admin` role can be used to continue restricting access to the local cluster.
### Changing Global Administrators to Restricted Admins
If Rancher already has a global administrator, that administrator should change all global administrators over to the new `restricted-admin` role.
This can be done through **Security > Users** by changing any user with the **Administrator** role to the **Restricted Administrator** role.
Signed-in users can change their own role to `restricted-admin` if they wish, but they should do so last; after the change, they will no longer have the permissions needed to modify other administrators.
# Global Permission Assignment
Global permissions for local users are assigned differently than users who log in to Rancher using external authentication.
@@ -1,6 +1,6 @@
---
title: API
weight: 7500
weight: 24
---
## How to use the API
@@ -1,19 +1,122 @@
---
title: Backups and Disaster Recovery
weight: 1000
weight: 5
---
This section is devoted to protecting your data in a disaster scenario.
In this section, you'll learn how to create backups of Rancher, how to restore Rancher from backup, and how to migrate Rancher to a new Kubernetes cluster.
To protect yourself from a disaster scenario, you should create backups on a regular basis.
As of Rancher v2.5, the `rancher-backup` operator is used to back up and restore Rancher. The `rancher-backup` Helm chart is [here.](https://github.com/rancher/charts/tree/main/charts/rancher-backup)
- Rancher server backups:
- [Rancher installed on a K3s Kubernetes cluster](./backups/k3s-backups)
- [Rancher installed on an RKE Kubernetes cluster](./backups/ha-backups)
- [Rancher installed with Docker](./backups/single-node-backups/)
- [Backing up Rancher Launched Kubernetes Clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/)
The backup-restore operator needs to be installed in the local cluster, and only backs up the Rancher app. The backup and restore operations are performed only in the local Kubernetes cluster.
In a disaster scenario, you can restore your `etcd` database by restoring a backup.
Rancher v2.5.0 or later is required to use this approach to backing up and restoring Rancher.
- [Rancher Server Restorations]({{<baseurl>}}/rancher/v2.x/en/backups/restorations)
- [Restoring Rancher Launched Kubernetes Clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/restoring-etcd/)
- [Changes in Rancher v2.5](#changes-in-rancher-v2-5)
- [Backup and Restore for Rancher v2.5 installed with Docker](#backup-and-restore-for-rancher-v2-5-installed-with-docker)
- [Backup and Restore for Rancher installed on a Kubernetes Cluster Prior to v2.5](#backup-and-restore-for-rancher-installed-on-a-kubernetes-cluster-prior-to-v2-5)
- [How Backups and Restores Work](#how-backups-and-restores-work)
- [Installing the rancher-backup Operator](#installing-the-rancher-backup-operator)
- [Installing rancher-backup with the Rancher UI](#installing-rancher-backup-with-the-rancher-ui)
- [Installing rancher-backup with the Helm CLI](#installing-rancher-backup-with-the-helm-cli)
- [Backing up Rancher](#backing-up-rancher)
- [Restoring Rancher](#restoring-rancher)
- [Migrating Rancher to a New Cluster](#migrating-rancher-to-a-new-cluster)
- [Default Storage Location Configuration](#default-storage-location-configuration)
- [Example values.yaml for the rancher-backup Helm Chart](#example-values-yaml-for-the-rancher-backup-helm-chart)
# Changes in Rancher v2.5
The new `rancher-backup` operator allows Rancher to be backed up and restored on any Kubernetes cluster. This application is a Helm chart, and it can be deployed through the Rancher **Apps & Marketplace** page, or by using the Helm CLI.
Previously, the way that cluster data was backed up depended on the type of Kubernetes cluster that was used.
In Rancher v2.4, it was only supported to install Rancher on two types of Kubernetes clusters: an RKE cluster, or a K3s cluster with an external database. If Rancher was installed on an RKE cluster, [RKE would be used]({{<baseurl>}}/rancher/v2.x/en/backups/legacy/backup/k8s-backups/ha-backups/) to take a snapshot of the etcd database and restore the cluster. If Rancher was installed on a K3s cluster with an external database, the database would need to be backed up and restored using the upstream documentation for the database.
In Rancher v2.5, it is now supported to install Rancher on hosted Kubernetes clusters, such as Amazon EKS clusters, which do not expose etcd to a degree that would allow snapshots to be created by an external tool. etcd doesn't need to be exposed for `rancher-backup` to work, because the operator gathers resources by making calls to `kube-apiserver`.
### Backup and Restore for Rancher v2.5 installed with Docker
For Rancher installed with Docker, refer to the same steps used in versions prior to v2.5 for [backups](./docker-installs/docker-backups) and [restores.](./docker-installs/docker-restores)
### Backup and Restore for Rancher installed on a Kubernetes Cluster Prior to v2.5
For Rancher prior to v2.5, the way that Rancher is backed up and restored differs based on the way that Rancher was installed. Our legacy backup and restore documentation is here:
- For Rancher installed on an RKE Kubernetes cluster, refer to the legacy [backup]({{<baseurl>}}/rancher/v2.x/en/backups/legacy/backup/k8s-backups/ha-backups/) and [restore]({{<baseurl>}}/rancher/v2.x/en/backups/legacy/restore/k8s-restore/rke-restore/) documentation.
- For Rancher installed on a K3s Kubernetes cluster, refer to the legacy [backup]({{<baseurl>}}/rancher/v2.x/en/backups/legacy/backup/k8s-backups/k3s-backups/) and [restore]({{<baseurl>}}/rancher/v2.x/en/backups/legacy/restore/k8s-restore/k3s-restore/) documentation.
# How Backups and Restores Work
The `rancher-backup` operator introduces three custom resources: Backups, Restores, and ResourceSets. The following cluster-scoped custom resource definitions are added to the cluster:
- `backups.resources.cattle.io`
- `resourcesets.resources.cattle.io`
- `restores.resources.cattle.io`
The ResourceSet defines which Kubernetes resources need to be backed up. The ResourceSet is not available to be configured in the Rancher UI because the values required to back up Rancher are predefined. This ResourceSet should not be modified.
When a Backup custom resource is created, the `rancher-backup` operator calls the `kube-apiserver` to get the resources in the ResourceSet (specifically, the predefined `rancher-resource-set`) that the Backup custom resource refers to.
The operator then creates the backup file in the .tar.gz format and stores it in the location configured in the Backup resource.
When a Restore custom resource is created, the operator accesses the backup .tar.gz file specified by the Restore, and restores the application from that file.
The Backup and Restore custom resources can be created in the Rancher UI, or by using `kubectl apply`.
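For example, a minimal sketch of creating and then checking a Backup from the command line, where `backup.yaml` is a file you create containing one of the Backup examples referenced later on this page:
```
kubectl apply -f backup.yaml
kubectl get backups.resources.cattle.io
```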
# Installing the rancher-backup Operator
The `rancher-backup` operator can be installed from the Rancher UI, or with the Helm CLI. In both cases, the `rancher-backup` Helm chart is installed on the Kubernetes cluster running the Rancher server. It is a cluster-admin only feature and available only for the local cluster.
### Installing rancher-backup with the Rancher UI
1. In the Rancher UI, go to the **Cluster Explorer.**
1. Click **Apps.**
1. Click the `rancher-backup` operator.
1. Optional: Configure the default storage location. For help, refer to the [configuration section.](./configuration/storage-config)
**Result:** The `rancher-backup` operator is installed.
From the **Cluster Explorer,** you can see the `rancher-backup` operator listed under **Deployments.**
To configure the backup app in Rancher, click **Cluster Explorer** in the upper left corner and click **Rancher Backups.**
### Installing rancher-backup with the Helm CLI
Install the backup app as a Helm chart:
```
helm repo add rancher-charts https://charts.rancher.io
helm repo update
helm install rancher-backup-crd rancher-charts/rancher-backup-crd -n cattle-resources-system --create-namespace
helm install rancher-backup rancher-charts/rancher-backup -n cattle-resources-system
```
### RBAC
Only Rancher admins and the local cluster's cluster-owner can:
* Install the chart
* See the navigation links for the Backup and Restore CRDs
* Perform a backup or restore by creating a Backup or Restore custom resource, respectively, and list the backups and restores performed so far
# Backing up Rancher
A backup is performed by creating a Backup custom resource. For a tutorial, refer to [this page.](./back-up-rancher)
# Restoring Rancher
A restore is performed by creating a Restore custom resource. For a tutorial, refer to [this page.](./restoring-rancher)
# Migrating Rancher to a New Cluster
A migration is performed by following [these steps.](./migrating-rancher)
# Default Storage Location Configuration
Configure a storage location where all backups are saved by default. You will have the option to override this with each backup, but will be limited to using an S3-compatible or Minio object store.
For information on configuring these options, refer to [this page.](./configuration/storage-config)
### Example values.yaml for the rancher-backup Helm Chart
The example [values.yaml file](./configuration/storage-config/#example-values-yaml-for-the-rancher-backup-helm-chart) can be used to configure the `rancher-backup` operator when the Helm CLI is used to install it.
@@ -0,0 +1,61 @@
---
title: Backing up Rancher
weight: 1
---
In this section, you'll learn how to back up Rancher running on any Kubernetes cluster. To back up Rancher installed with Docker, refer to the instructions for [single node backups.](../legacy/backup/single-node-backups/)
### Prerequisites
The Rancher version must be v2.5.0 or later.
### 1. Install the `rancher-backup` operator
The backup storage location is an operator-level setting, so it needs to be configured when `rancher-backup` is installed or upgraded.
Backups are created as .tar.gz files. These files can be pushed to S3 or Minio, or they can be stored in a persistent volume.
1. In the Rancher UI, go to the **Cluster Explorer.**
1. Click **Apps.**
1. Click `rancher-backup`.
1. Configure the default storage location. For help, refer to the [storage configuration section.](../configuration/storage-config)
### 2. Perform a Backup
To perform a backup, a custom resource of type Backup must be created.
1. In the **Cluster Explorer,** go to the dropdown menu in the upper left corner and click **Rancher Backups.**
1. Click **Backup.**
1. Create the Backup with the form, or with the YAML editor.
1. To configure the Backup details using the form, click **Create** and refer to the [configuration reference](../configuration/backup-config) and to the [examples.](../examples/#backup)
1. To use the YAML editor, click **Create > Create from YAML** and enter the Backup YAML. This example Backup custom resource would create encrypted recurring backups in S3:
```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: s3-recurring-backup
spec:
  storageLocation:
    s3:
      credentialSecretName: s3-creds
      credentialSecretNamespace: default
      bucketName: rancher-backups
      folder: rancher
      region: us-west-2
      endpoint: s3.us-west-2.amazonaws.com
  resourceSetName: rancher-resource-set
  encryptionConfigSecretName: encryptionconfig
  schedule: "@every 1h"
  retentionCount: 10
```
> **Note:** When creating the Backup resource using the YAML editor, the `resourceSetName` must be set to `rancher-resource-set`.
For help configuring the Backup, refer to the [configuration reference](../configuration/backup-config) and to the [examples.](../examples/#backup)
> **Important:** The `rancher-backup` operator doesn't save the EncryptionConfiguration file. The contents of the EncryptionConfiguration file must be saved when an encrypted backup is created, and the same file must be used when restoring from this backup.
1. Click **Create.**
**Result:** The backup file is created in the storage location configured in the Backup custom resource. The name of this file is used when performing a restore.
@@ -0,0 +1,10 @@
---
title: Rancher Backup Configuration Reference
shortTitle: Configuration
weight: 4
---
- [Backup configuration](./backup-config)
- [Restore configuration](./restore-config)
- [Storage location configuration](./storage-config)
- [Example Backup and Restore Custom Resources](../examples)
@@ -0,0 +1,184 @@
---
title: Backup Configuration
shortTitle: Backup
weight: 1
---
The Backup Create page lets you configure a schedule, enable encryption and specify the storage location for your backups.
{{< img "/img/rancher/backup_restore/backup/backup.png" "">}}
- [Schedule](#schedule)
- [Encryption](#encryptionconfigname)
- [Storage Location](#storagelocation)
- [S3](#s3)
- [Example S3 Storage Configuration](#example-s3-storage-configuration)
- [Example MinIO Configuration](#example-minio-configuration)
- [Example credentialSecret](#example-credentialsecret)
- [IAM Permissions for EC2 Nodes to Access S3](#iam-permissions-for-ec2-nodes-to-access-s3)
- [RetentionCount](#retentioncount)
- [Examples](#examples)
# Schedule
Select the first option to perform a one-time backup, or select the second option to schedule recurring backups. Selecting **Recurring Backups** lets you configure the following two fields:
- **Schedule**: This field accepts
- Standard [cron expressions](https://en.wikipedia.org/wiki/Cron), such as `"0 * * * *"`
- Descriptors, such as `"@midnight"` or `"@every 1h30m"`
- **Retention Count**: This value specifies how many backup files must be retained. If files exceed the given retentionCount, the oldest files will be deleted. The default value is 10.
{{< img "/img/rancher/backup_restore/backup/schedule.png" "">}}
| YAML Directive Name | Description |
| ---------------- | ---------------- |
| `schedule` | Provide the cron string for scheduling recurring backups. |
| `retentionCount` | Provide the number of backup files to be retained. |
# Encryption
The `rancher-backup` operator gathers resources by making calls to the `kube-apiserver`. Objects returned by the apiserver are decrypted, so even if [encryption at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) is enabled, the objects gathered for the backup will be in plaintext.
To avoid storing them in plaintext, you can use the same encryptionConfig file that was used for at-rest encryption to encrypt certain resources in your backup.
> **Important:** You must save the encryptionConfig file, because it won't be saved by the `rancher-backup` operator.
The same encryptionConfig file needs to be used when performing a restore.
The operator consumes this encryptionConfig as a Kubernetes Secret, and the Secret must be in the operator's namespace. Rancher installs the `rancher-backup` operator in the `cattle-resources-system` namespace, so create this encryptionConfig Secret in that namespace.
For the `EncryptionConfiguration`, you can use the [sample file provided in the Kubernetes documentation.](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration)
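For reference, a minimal `EncryptionConfiguration` along the lines of the Kubernetes sample; the base64 key is a placeholder you must generate yourself (for example with `head -c 32 /dev/urandom | base64`):
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}
```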
To create the Secret, the encryption configuration file must be named `encryption-provider-config.yaml`, and the `--from-file` flag must be used to create this secret.
Save the `EncryptionConfiguration` in a file called `encryption-provider-config.yaml` and run this command:
```
kubectl create secret generic encryptionconfig \
--from-file=./encryption-provider-config.yaml \
-n cattle-resources-system
```
This will ensure that the secret contains a key named `encryption-provider-config.yaml`, and the operator will use this key to get the encryption configuration.
The `Encryption Config Secret` dropdown will filter out and list only those Secrets that have this exact key.
{{< img "/img/rancher/backup_restore/backup/encryption.png" "">}}
In the example command above, the name `encryptionconfig` can be changed to anything.
| YAML Directive Name | Description |
| ---------------- | ---------------- |
| `encryptionConfigSecretName` | Provide the name of the Secret from `cattle-resources-system` namespace, that contains the encryption config file. |
# Storage Location
{{< img "/img/rancher/backup_restore/backup/storageLocation.png" "">}}
If a StorageLocation is specified in the Backup, the operator stores the backup file in that particular S3 bucket. If it is not specified, the operator uses the default operator-level S3 store or PVC store. The default storage location is configured during the deployment of the `rancher-backup` operator.
Selecting the first option stores this backup in the storage location configured while installing the rancher-backup chart. The second option lets you configure a different S3 compatible storage provider for storing the backup.
### S3
The S3 storage location contains the following configuration fields:
1. **Credential Secret** (optional): If you need to use AWS access keys and secret keys to access the S3 bucket, create a secret with your credentials using the keys `accessKey` and `secretKey`. It can be in any namespace. An example secret is [here.](#example-credentialsecret) This directive is unnecessary if the nodes running your operator are in EC2 and set up with IAM permissions that allow them to access S3, as described in [this section.](#iam-permissions-for-ec2-nodes-to-access-s3)
1. **Bucket Name**: The name of the S3 bucket where backup files will be stored.
1. **Region** (optional): The AWS [region](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) where the S3 bucket is located. This field isn't needed for configuring MinIO.
1. **Folder** (optional): The name of the folder in the S3 bucket where backup files will be stored.
1. **Endpoint**: The [endpoint](https://docs.aws.amazon.com/general/latest/gr/s3.html) that is used to access S3 in the region of your bucket.
1. **Endpoint CA** (optional): This should be the Base64 encoded CA cert. For an example, refer to the [example S3 compatible configuration.](#example-s3-compatible-storage-configuration)
1. **Skip TLS Verifications** (optional): Set to true if you are not using TLS.
| YAML Directive Name | Description | Required |
| ---------------- | ---------------- | ------------ |
| `credentialSecretName` | If you need to use AWS access keys and secret keys to access the S3 bucket, create a secret with your credentials using the keys `accessKey` and `secretKey`. It can be in any namespace as long as you provide that namespace in `credentialSecretNamespace`. An example secret is [here.](#example-credentialsecret) This directive is unnecessary if the nodes running your operator are in EC2 and set up with IAM permissions that allow them to access S3, as described in [this section.](#iam-permissions-for-ec2-nodes-to-access-s3) | |
| `credentialSecretNamespace` | The namespace of the secret containing the credentials to access S3. This directive is unnecessary if the nodes running your operator are in EC2 and set up with IAM permissions that allow them to access S3, as described in [this section.](#iam-permissions-for-ec2-nodes-to-access-s3) | |
| `bucketName` | The name of the S3 bucket where backup files will be stored. | ✓ |
| `folder` | The name of the folder in the S3 bucket where backup files will be stored. | |
| `region` | The AWS [region](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) where the S3 bucket is located. | ✓ |
| `endpoint` | The [endpoint](https://docs.aws.amazon.com/general/latest/gr/s3.html) that is used to access S3 in the region of your bucket. | ✓ |
| `endpointCA` | This should be the Base64 encoded CA cert. For an example, refer to the [example S3 compatible configuration.](#example-s3-compatible-storage-configuration) | |
| `insecureTLSSkipVerify` | Set to true if you are not using TLS. | |
### Example S3 Storage Configuration
```yaml
s3:
  credentialSecretName: s3-creds
  credentialSecretNamespace: default
  bucketName: rancher-backups
  folder: rancher
  region: us-west-2
  endpoint: s3.us-west-2.amazonaws.com
```
### Example MinIO Configuration
```yaml
s3:
  credentialSecretName: minio-creds
  bucketName: rancherbackups
  endpoint: minio.35.202.130.254.xip.io
  endpointCA: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHakNDQWdLZ0F3SUJBZ0lKQUtpWFZpNEpBb0J5TUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIzUmxjM1F0WTJFd0hoY05NakF3T0RNd01UZ3lOVFE1V2hjTk1qQXhNREk1TVRneU5UUTVXakFTTVJBdwpEZ1lEVlFRRERBZDBaWE4wTFdOaE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCjA4dnV3Q2Y0SEhtR2Q2azVNTmozRW5NOG00T2RpS3czSGszd1NlOUlXQkwyVzY5WDZxenBhN2I2M3U2L05mMnkKSnZWNDVqeXplRFB6bFJycjlpbEpWaVZ1NFNqWlFjdG9jWmFCaVNsL0xDbEFDdkFaUlYvKzN0TFVTZSs1ZDY0QQpWcUhDQlZObU5xM3E3aVY0TE1aSVpRc3N6K0FxaU1Sd0pOMVVKQTZ6V0tUc2Yzc3ByQ0J2dWxJWmZsVXVETVAyCnRCTCt6cXZEc0pDdWlhNEEvU2JNT29tVmM2WnNtTGkwMjdub3dGRld3MnRpSkM5d0xMRE14NnJoVHQ4a3VvVHYKQXJpUjB4WktiRU45L1Uzb011eUVKbHZyck9YS2ZuUDUwbk8ycGNaQnZCb3pUTStYZnRvQ1d5UnhKUmI5cFNTRApKQjlmUEFtLzNZcFpMMGRKY2sxR1h3SURBUUFCbzNNd2NUQWRCZ05WSFE0RUZnUVU5NHU4WXlMdmE2MTJnT1pyCm44QnlFQ2NucVFjd1FnWURWUjBqQkRzd09ZQVU5NHU4WXlMdmE2MTJnT1pybjhCeUVDY25xUWVoRnFRVU1CSXgKRURBT0JnTlZCQU1NQjNSbGMzUXRZMkdDQ1FDb2wxWXVDUUtBY2pBTUJnTlZIUk1FQlRBREFRSC9NQTBHQ1NxRwpTSWIzRFFFQkN3VUFBNElCQVFER1JRZ1RtdzdVNXRQRHA5Q2psOXlLRW9Vd2pYWWM2UlAwdm1GSHpubXJ3dUVLCjFrTkVJNzhBTUw1MEpuS29CY0ljVDNEeGQ3TGdIbTNCRE5mVVh2anArNnZqaXhJYXR2UWhsSFNVaWIyZjJsSTkKVEMxNzVyNCtROFkzelc1RlFXSDdLK08vY3pJTGh5ei93aHRDUlFkQ29lS1dXZkFiby8wd0VSejZzNkhkVFJzNwpHcWlGNWZtWGp6S0lOcTBjMHRyZ0xtalNKd1hwSnU0ZnNGOEcyZUh4b2pOKzdJQ1FuSkg5cGRIRVpUQUtOL2ppCnIvem04RlZtd1kvdTBndEZneWVQY1ZWbXBqRm03Y0ZOSkc4Y2ZYd0QzcEFwVjhVOGNocTZGeFBHTkVvWFZnclMKY1VRMklaU0RJd1FFY3FvSzFKSGdCUWw2RXBaUVpWMW1DRklrdFBwSQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t
```
### Example credentialSecret
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: creds
type: Opaque
data:
  accessKey: <Enter your access key>
  secretKey: <Enter your secret key>
```
### IAM Permissions for EC2 Nodes to Access S3
There are two ways to set up the `rancher-backup` operator to use S3 as the backup storage location.
One way is to configure the `credentialSecretName` in the Backup custom resource, which refers to AWS credentials that have access to S3.
If the cluster nodes are in Amazon EC2, the S3 access can also be set up by assigning IAM permissions to the EC2 nodes so that they can access S3.
To allow a node to access S3, follow the instructions in the [AWS documentation](https://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-access-s3-bucket/) to create an IAM role for EC2. When you add a custom policy to the role, add the following permissions, and replace the `Resource` with your bucket name:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::rancher-backups"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::rancher-backups/*"
      ]
    }
  ]
}
```
After the role is created, and you have attached the corresponding instance profile to your EC2 instance(s), the `credentialSecretName` directive can be left empty in the Backup custom resource.
# Examples
For example Backup custom resources, refer to [this page.](../../examples/#backup)
@@ -0,0 +1,87 @@
---
title: Restore Configuration
shortTitle: Restore
weight: 2
---
The Restore Create page lets you provide details of the backup to restore from
{{< img "/img/rancher/backup_restore/restore/restore.png" "">}}
- [Backup Source](#backup-source)
- [An Existing Backup Config](#an-existing-backup-config)
- [The default storage target](#the-default-storage-target)
- [An S3-compatible object store](#an-s3-compatible-object-store)
- [Encryption](#encryption)
- [Prune during restore](#prune-during-restore)
- [Getting the Backup Filename from S3](#getting-the-backup-filename-from-s3)
# Backup Source
Provide details of the backup file and its storage location, which the operator will then use to perform the restore. Select from the following options to provide these details
### An existing backup config
Selecting this option will populate the **Target Backup** dropdown with the Backups available in this cluster. Select the Backup from the dropdown, and that will fill out the **Backup Filename** field for you, and will also pass the backup source information from the selected Backup to the operator.
{{< img "/img/rancher/backup_restore/restore/existing.png" "">}}
If the Backup custom resource does not exist in the cluster, you need to get the exact filename and provide the backup source details with the default storage target or an S3-compatible object store.
### The default storage target
Select this option if you are restoring from a backup file that exists in the default storage location configured at the operator-level. The operator-level configuration is the storage location that was configured when the `rancher-backup` operator was installed or upgraded. Provide the exact filename in the **Backup Filename** field.
{{< img "/img/rancher/backup_restore/restore/default.png" "">}}
### An S3-compatible object store
Select this option if no default storage location is configured at the operator level, OR if the backup file exists in a different S3 bucket than the one configured as the default storage location. Provide the exact filename in the **Backup Filename** field. Refer to [this section](#getting-the-backup-filename-from-s3) for exact steps on getting the backup filename from S3. Fill in all the details for the S3-compatible object store. Its fields are exactly the same as the ones for the `backup.StorageLocation` configuration in the [Backup custom resource.](../../configuration/backup-config/#storagelocation)
{{< img "/img/rancher/backup_restore/restore/s3store.png" "">}}
# Encryption
If the backup was created with encryption enabled, its file will have the `.enc` suffix. Choosing such a Backup, or providing a backup filename with the `.enc` suffix, will display another dropdown named **Encryption Config Secret**.
{{< img "/img/rancher/backup_restore/restore/encryption.png" "">}}
The Secret selected from this dropdown must have the same contents as the one used for the Backup custom resource while performing the backup. If the encryption configuration doesn't match, the restore will fail.
The `Encryption Config Secret` dropdown will filter out and list only those Secrets that contain the key `encryption-provider-config.yaml`.
| YAML Directive Name | Description |
| ---------------- | ---------------- |
| `encryptionConfigSecretName` | Provide the name of the Secret from `cattle-resources-system` namespace, that contains the encryption config file. |
> **Important**
This field should only be set if the backup was created with encryption enabled. Providing the incorrect encryption config will cause the restore to fail.
# Prune During Restore
* **Prune**: In order to fully restore Rancher from a backup and return it to the exact state it was in when the backup was performed, any additional resources created by Rancher after the backup was taken must be deleted. The operator deletes them if the **Prune** flag is enabled. Prune is enabled by default, and it is recommended to keep it enabled.
* **Delete Timeout**: This is the amount of time the operator will wait while deleting a resource before editing the resource to remove finalizers and attempt deletion again.
| YAML Directive Name | Description |
| ---------------- | ---------------- |
| `prune` | Delete the resources managed by Rancher that are not present in the backup (Recommended). |
| `deleteTimeoutSeconds` | Amount of time the operator will wait while deleting a resource before editing the resource to remove finalizers and attempt deletion again. |
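A minimal sketch of a Restore spec that sets both of these fields; the backup filename is a placeholder:
```yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-with-prune
spec:
  backupFilename: <your-backup-file>.tar.gz
  prune: true
  deleteTimeoutSeconds: 10
```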
# Getting the Backup Filename from S3
This is the name of the backup file that the `rancher-backup` operator will use to perform the restore.
To obtain this file name from S3, go to your S3 bucket (and folder if it was specified while performing backup).
Copy the filename and use it in your Restore custom resource. Assuming the name of your backup file is `backupfile`:
- If your bucket name is `s3bucket` and no folder was specified, the `backupFilename` to use will be `backupfile`.
- If your bucket name is `s3bucket` and the base folder is `s3folder`, the `backupFilename` to use is still just `backupfile`.
- If there is a subfolder inside `s3Folder` called `s3sub`, and that has your backup file, then the `backupFilename` to use is `s3sub/backupfile`.
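If you have the AWS CLI configured, a hypothetical way to list the backup files in the bucket and folder from the examples above is:
```
aws s3 ls s3://s3bucket/s3folder/
```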
| YAML Directive Name | Description |
| ---------------- | ---------------- |
| `backupFilename` | This is the name of the backup file that the `rancher-backup` operator will use to perform the restore. |
@@ -0,0 +1,112 @@
---
title: Backup Storage Location Configuration
shortTitle: Storage
weight: 3
---
Configure a storage location where all backups are saved by default. You will have the option to override this with each backup, but will be limited to using an S3-compatible object store.
Only one storage location can be configured at the operator level.
- [Storage Location Configuration](#storage-location-configuration)
- [No Default Storage Location](#no-default-storage-location)
- [S3-compatible Object Store](#s3-compatible-object-store)
- [Use an existing StorageClass](#existing-storageclass)
- [Use an existing PersistentVolume](#existing-persistent-volume)
- [Encryption](#encryption)
- [Example values.yaml for the rancher-backup Helm Chart](#example-values-yaml-for-the-rancher-backup-helm-chart)
# Storage Location Configuration
### No Default Storage Location
You can choose to not have any operator-level storage location configured. If you select this option, you must configure an S3-compatible object store as the storage location for each individual backup.
### S3-compatible Object Store
| Parameter | Description |
| -------------- | -------------- |
| Credential Secret | Choose the credentials for S3 from your secrets in Rancher. |
| Bucket Name | Enter the name of the [S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html) where the backups will be stored. Default: `rancherbackups`. |
| Region | The [AWS region](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) where the S3 bucket is located. |
| Folder | The [folder in the S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html) where the backups will be stored. |
| Endpoint | The [S3 endpoint](https://docs.aws.amazon.com/general/latest/gr/s3.html) For example, `s3.us-west-2.amazonaws.com`. |
| Endpoint CA | The CA cert used for the S3 endpoint, provided as a base64-encoded string. |
| insecureTLSSkipVerify | Set to true if you are not using TLS. |
### Existing StorageClass
Installing the `rancher-backup` chart by selecting the StorageClass option will create a Persistent Volume Claim (PVC), and Kubernetes will in turn dynamically provision a Persistent Volume (PV) where all the backups will be saved by default.
For information about creating storage classes refer to [this section.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/#1-add-a-storage-class-and-configure-it-to-use-your-storage-provider)
> **Important**
It is highly recommended to use a StorageClass with a reclaim policy of "Retain". Otherwise if the PVC created by the `rancher-backup` chart gets deleted (either during app upgrade, or accidentally), the PV will get deleted too, which means all backups saved in it will get deleted.
If no such StorageClass is available, after the PV is provisioned, make sure to edit its reclaim policy and set it to "Retain" before storing backups in it.
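A sketch of such a StorageClass, assuming the AWS EBS provisioner; substitute the provisioner for your platform:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: backup-retain
provisioner: kubernetes.io/aws-ebs  # assumption: AWS; use your provider's provisioner
reclaimPolicy: Retain
```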
### Existing Persistent Volume
Select an existing Persistent Volume (PV) that will be used to store your backups. For information about creating PersistentVolumes in Rancher, refer to [this section.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/#2-add-a-persistent-volume-that-refers-to-the-persistent-storage)
> **Important**
It is highly recommended to use a Persistent Volume with a reclaim policy of "Retain". Otherwise if the PVC created by the `rancher-backup` chart gets deleted (either during app upgrade, or accidentally), the PV will get deleted too, which means all backups saved in it will get deleted.
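A minimal sketch of a PersistentVolume with a `Retain` reclaim policy, assuming a hypothetical NFS server backs the volume:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rancher-backup-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com   # assumption: replace with your NFS server
    path: /exports/rancher-backups
```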
# Example values.yaml for the rancher-backup Helm Chart
This values.yaml file can be used to configure the `rancher-backup` operator when the Helm CLI is used to install it.
For more information about `values.yaml` files and configuring Helm charts during installation, refer to the [Helm documentation.](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing)
```yaml
image:
  repository: rancher/rancher-backup
  tag: v0.0.1-rc10

## Default s3 bucket for storing all backup files created by the rancher-backup operator
s3:
  enabled: false
  ## credentialSecretName if set, should be the name of the Secret containing AWS credentials.
  ## To use IAM Role, don't set this field
  credentialSecretName: creds
  credentialSecretNamespace: ""
  region: us-west-2
  bucketName: rancherbackups
  folder: base folder
  endpoint: s3.us-west-2.amazonaws.com
  endpointCA: base64 encoded CA cert
  # insecureTLSSkipVerify: optional

## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
## If persistence is enabled, operator will create a PVC with mountPath /var/lib/backups
persistence:
  enabled: false

  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner. (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack).
  ## Refer to https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
  ##
  storageClass: "-"

  ## If you want to disable dynamic provisioning by setting storageClass to "-" above,
  ## and want to target a particular PV, provide name of the target volume
  volumeName: ""

  ## Only certain StorageClasses allow resizing PVs; Refer to https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/
  size: 2Gi

global:
  cattle:
    systemDefaultRegistry: ""

nodeSelector: {}

tolerations: []

affinity: {}
```
@@ -0,0 +1,10 @@
---
title: Backup and Restore for Rancher Installed with Docker
shortTitle: Docker Installs
weight: 10
---
The steps for backing up and restoring Rancher installed with Docker did not change in Rancher v2.5.
- [Backups](./docker-backups)
- [Restores](./docker-restores)
@@ -1,10 +1,13 @@
---
title: Backing up Rancher Installed with Docker
shortTitle: Docker Installs
weight: 3
aliases:
- /rancher/v2.x/en/installation/after-installation/single-node-backup-and-restoration/
---
After completing your Docker installation of Rancher, we recommend creating backups of it on a regular basis. Having a recent backup will let you recover quickly from an unexpected disaster.
## Before You Start
@@ -1,16 +1,17 @@
---
title: Restoring Backups—Docker Installs
shortTitle: Docker Installs
weight: 365
weight: 3
aliases:
- /rancher/v2.x/en/installation/after-installation/single-node-backup-and-restoration/
- /rancher/v2.x/en/backups/restorations/single-node-restoration
---
If you encounter a disaster scenario, you can restore your Rancher Server to your most recent backup.
## Before You Start
During restoration of your backup, you'll enter a series of commands, filling placeholders with data from your environment. These placeholders are denoted with angled brackets and all capital letters (`<EXAMPLE>`). Here's an example of a command with a placeholder:
During restore of your backup, you'll enter a series of commands, filling placeholders with data from your environment. These placeholders are denoted with angled brackets and all capital letters (`<EXAMPLE>`). Here's an example of a command with a placeholder:
```
docker run --volumes-from <RANCHER_CONTAINER_NAME> -v $PWD:/backup \
@@ -68,4 +69,4 @@ Using a [backup]({{<baseurl>}}/rancher/v2.x/en/backups/backups/single-node-backu
docker start <RANCHER_CONTAINER_NAME>
```
1. Wait a few moments and then open Rancher in a web browser. Confirm that the restoration succeeded and that your data is restored.
1. Wait a few moments and then open Rancher in a web browser. Confirm that the restore succeeded and that your data is restored.
@@ -0,0 +1,301 @@
---
title: Examples
weight: 5
---
This section contains examples of Backup and Restore custom resources.
The default backup storage location is configured when the `rancher-backup` operator is installed or upgraded.
Encrypted backups can only be restored if the Restore custom resource uses the same encryption configuration secret that was used to create the backup.
- [Backup](#backup)
- [Backup in the default location with encryption](#backup-in-the-default-location-with-encryption)
- [Recurring backup in the default location](#recurring-backup-in-the-default-location)
- [Encrypted recurring backup in the default location](#encrypted-recurring-backup-in-the-default-location)
- [Encrypted backup in Minio](#encrypted-backup-in-minio)
- [Backup in S3 using AWS credential secret](#backup-in-s3-using-aws-credential-secret)
- [Recurring backup in S3 using AWS credential secret](#recurring-backup-in-s3-using-aws-credential-secret)
- [Backup from EC2 nodes with IAM permission to access S3](#backup-from-ec2-nodes-with-iam-permission-to-access-s3)
- [Restore](#restore)
- [Restore using the default backup file location](#restore-using-the-default-backup-file-location)
- [Restore for Rancher migration](#restore-for-rancher-migration)
- [Restore from encrypted backup](#restore-from-encrypted-backup)
- [Restore an encrypted backup from Minio](#restore-an-encrypted-backup-from-minio)
- [Restore from backup using an AWS credential secret to access S3](#restore-from-backup-using-an-aws-credential-secret-to-access-s3)
- [Restore from EC2 nodes with IAM permissions to access S3](#restore-from-ec2-nodes-with-iam-permissions-to-access-s3)
- [Example Credential Secret for Storing Backups in S3](#example-credential-secret-for-storing-backups-in-s3)
- [Example EncryptionConfiguration](#example-encryptionconfiguration)
# Backup
This section contains example Backup custom resources.
### Backup in the Default Location with Encryption
```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: default-location-encrypted-backup
spec:
  resourceSetName: rancher-resource-set
  encryptionConfigSecretName: encryptionconfig
```
### Recurring Backup in the Default Location
```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: default-location-recurring-backup
spec:
  resourceSetName: rancher-resource-set
  schedule: "@every 1h"
  retentionCount: 10
```
### Encrypted Recurring Backup in the Default Location
```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: default-enc-recurring-backup
spec:
  resourceSetName: rancher-resource-set
  encryptionConfigSecretName: encryptionconfig
  schedule: "@every 1h"
  retentionCount: 3
```
### Encrypted Backup in Minio
```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: minio-backup
spec:
  storageLocation:
    s3:
      credentialSecretName: minio-creds
      credentialSecretNamespace: default
      bucketName: rancherbackups
      endpoint: minio.xip.io
      endpointCA: LS0tLS1CRUdJTi3VUFNQkl5UUT.....pbEpWaVzNkRS0tLS0t
  resourceSetName: rancher-resource-set
  encryptionConfigSecretName: encryptionconfig
```
### Backup in S3 Using AWS Credential Secret
```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: s3-backup
spec:
  storageLocation:
    s3:
      credentialSecretName: s3-creds
      credentialSecretNamespace: default
      bucketName: rancher-backups
      folder: ecm1
      region: us-west-2
      endpoint: s3.us-west-2.amazonaws.com
  resourceSetName: rancher-resource-set
  encryptionConfigSecretName: encryptionconfig
```
### Recurring Backup in S3 Using AWS Credential Secret
```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: s3-recurring-backup
spec:
  storageLocation:
    s3:
      credentialSecretName: s3-creds
      credentialSecretNamespace: default
      bucketName: rancher-backups
      folder: ecm1
      region: us-west-2
      endpoint: s3.us-west-2.amazonaws.com
  resourceSetName: rancher-resource-set
  encryptionConfigSecretName: encryptionconfig
  schedule: "@every 1h"
  retentionCount: 10
```
### Backup from EC2 Nodes with IAM Permission to Access S3
This example shows that the AWS credential secret does not have to be provided to create a backup if the nodes running `rancher-backup` have [these permissions for access to S3.](../configuration/backup-config/#iam-permissions-for-ec2-nodes-to-access-s3)
```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
name: s3-iam-backup
spec:
storageLocation:
s3:
bucketName: rancher-backups
folder: ecm1
region: us-west-2
endpoint: s3.us-west-2.amazonaws.com
resourceSetName: rancher-resource-set
encryptionConfigSecretName: encryptionconfig
```
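Any of the Backup examples above can be applied with `kubectl` on the cluster where the `rancher-backup` operator is installed. A minimal sketch, assuming the chosen manifest has been saved to a hypothetical local file named `backup.yaml` and that the `resources.cattle.io` CRDs from the `rancher-backup-crd` chart are present:
```
# Create the Backup custom resource
kubectl apply -f backup.yaml

# List the Backup resources to confirm the backup was created
kubectl get backups.resources.cattle.io
```
You can inspect an individual Backup with `kubectl describe` to check its status.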
# Restore
This section contains example Restore custom resources.
### Restore Using the Default Backup File Location
```yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
name: restore-default
spec:
backupFilename: default-location-recurring-backup-752ecd87-d958-4d20-8350-072f8d090045-2020-09-26T12-29-54-07-00.tar.gz
# encryptionConfigSecretName: test-encryptionconfig
```
### Restore for Rancher Migration
```yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
name: restore-migration
spec:
backupFilename: backup-b0450532-cee1-4aa1-a881-f5f48a007b1c-2020-09-15T07-27-09Z.tar.gz
prune: false
storageLocation:
s3:
credentialSecretName: s3-creds
credentialSecretNamespace: default
bucketName: rancher-backups
folder: ecm1
region: us-west-2
endpoint: s3.us-west-2.amazonaws.com
```
### Restore from Encrypted Backup
```yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
name: restore-encrypted
spec:
backupFilename: default-test-s3-def-backup-c583d8f2-6daf-4648-8ead-ed826c591471-2020-08-24T20-47-05Z.tar.gz
encryptionConfigSecretName: encryptionconfig
```
### Restore an Encrypted Backup from Minio
```yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
name: restore-minio
spec:
backupFilename: default-minio-backup-demo-aa5c04b7-4dba-4c48-9ac4-ab7916812eaa-2020-08-30T13-18-17-07-00.tar.gz
storageLocation:
s3:
credentialSecretName: minio-creds
credentialSecretNamespace: default
bucketName: rancherbackups
endpoint: minio.xip.io
endpointCA: LS0tLS1CRUdJTi3VUFNQkl5UUT.....pbEpWaVzNkRS0tLS0t
encryptionConfigSecretName: test-encryptionconfig
```
### Restore from Backup Using an AWS Credential Secret to Access S3
```yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
name: restore-s3-demo
spec:
backupFilename: test-s3-recurring-backup-752ecd87-d958-4d20-8350-072f8d090045-2020-09-26T12-49-34-07-00.tar.gz.enc
storageLocation:
s3:
credentialSecretName: s3-creds
credentialSecretNamespace: default
bucketName: rancher-backups
folder: ecm1
region: us-west-2
endpoint: s3.us-west-2.amazonaws.com
encryptionConfigSecretName: test-encryptionconfig
```
### Restore from EC2 Nodes with IAM Permissions to Access S3
This example shows that the AWS credential secret does not have to be provided to restore from backup if the nodes running `rancher-backup` have [these permissions for access to S3.](../configuration/backup-config/#iam-permissions-for-ec2-nodes-to-access-s3)
```yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
name: restore-s3-demo
spec:
backupFilename: default-test-s3-recurring-backup-84bf8dd8-0ef3-4240-8ad1-fc7ec308e216-2020-08-24T10#52#44-07#00.tar.gz
storageLocation:
s3:
bucketName: rajashree-backup-test
folder: ecm1
region: us-west-2
endpoint: s3.us-west-2.amazonaws.com
encryptionConfigSecretName: test-encryptionconfig
```
# Example Credential Secret for Storing Backups in S3
```yaml
apiVersion: v1
kind: Secret
metadata:
name: creds
type: Opaque
data:
accessKey: <Enter your access key>
secretKey: <Enter your secret key>
```
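Note that the values under `data` in a Kubernetes Secret must be base64-encoded. If you prefer to let `kubectl` handle the encoding, an equivalent secret with the required `accessKey` and `secretKey` keys can be created imperatively; the placeholder values below are assumptions:
```
kubectl create secret generic creds \
  --from-literal=accessKey=<your-access-key> \
  --from-literal=secretKey=<your-secret-key>
```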
# Example EncryptionConfiguration
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
- secrets
providers:
- aesgcm:
keys:
- name: key1
secret: c2VjcmV0IGlzIHNlY3VyZQ==
- name: key2
secret: dGhpcyBpcyBwYXNzd29yZA==
- aescbc:
keys:
- name: key1
secret: c2VjcmV0IGlzIHNlY3VyZQ==
- name: key2
secret: dGhpcyBpcyBwYXNzd29yZA==
- secretbox:
keys:
- name: key1
secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
```
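To reference this EncryptionConfiguration from the `encryptionConfigSecretName` field in the examples above, store it in a Secret. A minimal sketch, assuming the configuration is saved as `encryption-provider-config.yaml` and the operator is deployed in the `cattle-resources-system` namespace:
```
kubectl create secret generic encryptionconfig \
  --from-file=./encryption-provider-config.yaml \
  -n cattle-resources-system
```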
@@ -0,0 +1,20 @@
---
title: Legacy Backup and Restore Documentation
weight: 6
---
This section is devoted to protecting your data in a disaster scenario.
To protect yourself against data loss, create backups on a regular basis.
- Rancher server backups:
- [Rancher installed on a K3s Kubernetes cluster](./backups/k3s-backups)
- [Rancher installed on an RKE Kubernetes cluster](./backups/ha-backups)
- [Backing up Rancher Launched Kubernetes Clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/)
In a disaster scenario, you can restore your `etcd` database by restoring a backup.
- [Rancher Server Restorations]({{<baseurl>}}/rancher/v2.x/en/backups/restorations)
- [Restoring Rancher Launched Kubernetes Clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/restoring-etcd/)
For Rancher installed with Docker, the backup and restore procedure is the same in Rancher v2.5. The backup and restore instructions for Docker installs are [here.]({{<baseurl>}}/rancher/v2.x/en/backups/docker-installs)
@@ -1,14 +1,15 @@
---
title: Backups
title: Backup
weight: 50
aliases:
- /rancher/v2.x/en/installation/after-installation/
- /rancher/v2.x/en/backups/
- /rancher/v2.x/en/backups/backups
---
This section contains information about how to create backups of your Rancher data and how to restore them in a disaster scenario.
- [Backing up Rancher installed on a K3s Kubernetes cluster](./k3s-backups)
- [Backing up Rancher installed on an RKE Kubernetes cluster](./ha-backups/)
- [Backing up Rancher installed with Docker](./single-node-backups/)
- [Backing up Rancher installed with Docker]({{<baseurl>}}/rancher/v2.x/en/backups/docker-installs/docker-backups)
If you are looking to back up your [Rancher launched Kubernetes cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), please refer [here]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/).
@@ -1,9 +1,12 @@
---
title: Backing up Rancher Installed on an RKE Kubernetes Cluster
shortTitle: RKE Installs
weight: 2
aliases:
- /rancher/v2.x/en/installation/after-installation/k8s-install-backup-and-restoration/
- /rancher/v2.x/en/installation/backups-and-restoration/ha-backup-and-restoration/
- /rancher/v2.x/en/backups/backups/ha-backups
- /rancher/v2.x/en/backups/backups/k8s-backups/ha-backups
---
This section describes how to create backups of your high-availability Rancher install.
@@ -1,6 +1,10 @@
---
title: Backing up Rancher Installed on a K3s Kubernetes Cluster
shortTitle: K3s Installs
weight: 1
aliases:
- /rancher/v2.x/en/backups/backups/k3s-backups
- /rancher/v2.x/en/backups/backups/k8s-backups/k3s-backups
---
When Rancher is installed on a high-availability Kubernetes cluster, we recommend using an external database to store the cluster data.
@@ -1,10 +1,12 @@
---
title: Restorations
title: Restore
weight: 1010
aliases:
- /rancher/v2.x/en/backups/restorations
---
If you lose the data on your Rancher Server, you can restore it if you have backups stored in a safe location.
- [Restoring Backups—Docker Installs]({{<baseurl>}}/rancher/v2.x/en/backups/restorations/single-node-restoration/)
- [Restoring Backups—Docker Installs]({{<baseurl>}}/rancher/v2.x/en/backups/docker-installs/docker-restores)
- [Restoring Backups—Kubernetes installs]({{<baseurl>}}/rancher/v2.x/en/backups/restorations/ha-restoration/)
If you are looking to restore your [Rancher launched Kubernetes cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), please refer [here]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/restoring-etcd/).
@@ -1,6 +1,10 @@
---
title: Restoring Rancher Installed on a K3s Kubernetes Cluster
shortTitle: K3s Installs
weight: 1
aliases:
- /rancher/v2.x/en/backups/restorations/k3s-restoration
- /rancher/v2.x/en/backups/restorations/k8s-restore/k3s-restore
---
When Rancher is installed on a high-availability Kubernetes cluster, we recommend using an external database to store the cluster data.
@@ -0,0 +1,136 @@
---
title: Restoring Backups—Kubernetes installs
shortTitle: RKE Installs
weight: 2
aliases:
- /rancher/v2.x/en/installation/after-installation/ha-backup-and-restoration/
- /rancher/v2.x/en/backups/restorations/ha-restoration
- /rancher/v2.x/en/backups/restorations/k8s-restore/rke-restore
---
This procedure describes how to use RKE to restore a snapshot of the Rancher Kubernetes cluster.
This will restore the Kubernetes configuration and the Rancher database and state.
> **Note:** This document covers clusters set up with RKE >= v0.2.x, for older RKE versions refer to the [RKE Documentation]({{<baseurl>}}/rke/latest/en/etcd-snapshots/restoring-from-backup).
## Restore Outline
<!-- TOC -->
- [1. Preparation](#1-preparation)
- [2. Place Snapshot](#2-place-snapshot)
- [3. Configure RKE](#3-configure-rke)
- [4. Restore the Database and bring up the Cluster](#4-restore-the-database-and-bring-up-the-cluster)
<!-- /TOC -->
### 1. Preparation
It is advised that you run the restore from your local host or a jump box/bastion where your cluster YAML, RKE state file, and kubeconfig are stored. You will need the [RKE]({{<baseurl>}}/rke/latest/en/installation/) and [kubectl]({{<baseurl>}}/rancher/v2.x/en/faq/kubectl/) CLI utilities installed locally.
Prepare by creating 3 new nodes to be the target for the restored Rancher instance. We recommend that you start with fresh nodes and a clean state. For clarification on the requirements, review the [Installation Requirements](https://rancher.com/docs/rancher/v2.x/en/installation/requirements/).
Alternatively, you can reuse the existing nodes after clearing the Kubernetes and Rancher configurations from them. This will destroy the data on these nodes. See [Node Cleanup]({{<baseurl>}}/rancher/v2.x/en/faq/cleaning-cluster-nodes/) for the procedure.
> **IMPORTANT:** Before starting the restore make sure all the Kubernetes services on the old cluster nodes are stopped. We recommend powering off the nodes to be sure.
### 2. Place Snapshot
As of RKE v0.2.0, snapshots can be saved in an S3-compatible backend. To restore your cluster from a snapshot stored in an S3-compatible backend, you can skip this step and retrieve the snapshot in [4. Restore the Database and bring up the Cluster](#4-restore-the-database-and-bring-up-the-cluster). Otherwise, you will need to place the snapshot directly on one of the etcd nodes.
Pick one of the clean nodes that will have the etcd role assigned and place the zip-compressed snapshot file in `/opt/rke/etcd-snapshots` on that node.
> **Note:** Because of a current limitation in RKE, the restore process does not work correctly if `/opt/rke/etcd-snapshots` is an NFS share that is mounted on all nodes with the etcd role. The easiest options are to either keep `/opt/rke/etcd-snapshots` as a local folder during the restore process and only mount the NFS share there after it has been completed, or to only mount the NFS share to one node with an etcd role in the beginning.
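For example, the snapshot can be copied to the chosen node over SSH; the hostname, user, and snapshot filename below are placeholders:
```
# copy the zip-compressed snapshot to the target etcd node
scp ./snapshot-name.zip user@etcd-node:/opt/rke/etcd-snapshots/
```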
### 3. Configure RKE
Use your original `rancher-cluster.yml` and `rancher-cluster.rkestate` files. If they are not stored in a version control system, it is a good idea to back them up before making any changes.
```
cp rancher-cluster.yml rancher-cluster.yml.bak
cp rancher-cluster.rkestate rancher-cluster.rkestate.bak
```
If the replaced or cleaned nodes have been configured with new IP addresses, modify the `rancher-cluster.yml` file to ensure the `address` and optional `internal_address` fields reflect the new addresses.
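For reference, a node entry in `rancher-cluster.yml` with both fields might look like the following sketch; the addresses, SSH user, and roles are placeholders:
```yaml
nodes:
  - address: 203.0.113.11        # new public IP of the replaced node
    internal_address: 10.0.0.11  # optional private address
    user: ubuntu
    role: [controlplane, worker, etcd]
```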
> **IMPORTANT:** You should not rename the `rancher-cluster.yml` or `rancher-cluster.rkestate` files. It is important that the filenames match each other.
### 4. Restore the Database and bring up the Cluster
You will now use the RKE command-line tool with the `rancher-cluster.yml` and the `rancher-cluster.rkestate` configuration files to restore the etcd database and bring up the cluster on the new nodes.
> **Note:** Ensure your `rancher-cluster.rkestate` is present in the same directory as the `rancher-cluster.yml` file before starting the restore, as this file contains the certificate data for the cluster.
#### Restoring from a Local Snapshot
When restoring etcd from a local snapshot, the snapshot is assumed to be located on the target node in the directory `/opt/rke/etcd-snapshots`.
```
rke etcd snapshot-restore --name snapshot-name --config ./rancher-cluster.yml
```
> **Note:** The `--name` parameter expects the filename of the snapshot without the extension.
#### Restoring from a Snapshot in S3
_Available as of RKE v0.2.0_
When restoring etcd from a snapshot located in an S3 compatible backend, the command needs the S3 information in order to connect to the S3 backend and retrieve the snapshot.
```
$ rke etcd snapshot-restore --config ./rancher-cluster.yml --name snapshot-name \
--s3 --access-key S3_ACCESS_KEY --secret-key S3_SECRET_KEY \
--bucket-name s3-bucket-name --s3-endpoint s3.amazonaws.com \
--folder folder-name # Available as of v2.3.0
```
#### Options for `rke etcd snapshot-restore`
S3 specific options are only available for RKE v0.2.0+.
| Option | Description | S3 Specific |
| --- | --- | ---|
| `--name` value | Specify snapshot name | |
| `--config` value | Specify an alternate cluster YAML file (default: "cluster.yml") [$RKE_CONFIG] | |
| `--s3` | Retrieve the snapshot from an S3-compatible backend | * |
| `--s3-endpoint` value | Specify s3 endpoint url (default: "s3.amazonaws.com") | * |
| `--access-key` value | Specify s3 accessKey | * |
| `--secret-key` value | Specify s3 secretKey | * |
| `--bucket-name` value | Specify s3 bucket name | * |
| `--folder` value | Specify s3 folder in the bucket name _Available as of v2.3.0_ | * |
| `--region` value | Specify the s3 bucket location (optional) | * |
| `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK]({{<baseurl>}}/rke/latest/en/config-options/#ssh-agent) | |
| `--ignore-docker-version` | [Disable Docker version check]({{<baseurl>}}/rke/latest/en/config-options/#supported-docker-versions) | |
#### Testing the Cluster
Once RKE completes it will have created a credentials file in the local directory. Configure `kubectl` to use the `kube_config_rancher-cluster.yml` credentials file and check on the state of the cluster. See [Installing and Configuring kubectl]({{<baseurl>}}/rancher/v2.x/en/faq/kubectl/#configuration) for details.
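For example, you might point `KUBECONFIG` at the generated file and confirm the nodes are registered; this assumes the file was written to the current working directory:
```
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
kubectl get nodes
```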
#### Check Kubernetes Pods
Wait for the pods running in `kube-system`, `ingress-nginx` and the `rancher` pod in `cattle-system` to return to the `Running` state.
> **Note:** `cattle-cluster-agent` and `cattle-node-agent` pods will be in an `Error` or `CrashLoopBackOff` state until Rancher server is up and the DNS/Load Balancer have been pointed at the new cluster.
```
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
cattle-system cattle-cluster-agent-766585f6b-kj88m 0/1 Error 6 4m
cattle-system cattle-node-agent-wvhqm 0/1 Error 8 8m
cattle-system rancher-78947c8548-jzlsr 0/1 Running 1 4m
ingress-nginx default-http-backend-797c5bc547-f5ztd 1/1 Running 1 4m
ingress-nginx nginx-ingress-controller-ljvkf 1/1 Running 1 8m
kube-system canal-4pf9v 3/3 Running 3 8m
kube-system cert-manager-6b47fc5fc-jnrl5 1/1 Running 1 4m
kube-system kube-dns-7588d5b5f5-kgskt 3/3 Running 3 4m
kube-system kube-dns-autoscaler-5db9bbb766-s698d 1/1 Running 1 4m
kube-system metrics-server-97bc649d5-6w7zc 1/1 Running 1 4m
kube-system tiller-deploy-56c4cf647b-j4whh 1/1 Running 1 4m
```
#### Finishing Up
Rancher should now be running and available to manage your Kubernetes clusters. Review the [recommended architecture]({{<baseurl>}}/rancher/v2.x/en/installation/k8s-install/#recommended-architecture) for Kubernetes installations and update the endpoints for Rancher DNS or the Load Balancer that you built during Step 1 of the Kubernetes install ([1. Create Nodes and Load Balancer]({{<baseurl>}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/#load-balancer)) to target the new cluster. Once the endpoints are updated, the agents on your managed clusters should automatically reconnect. This may take 10-15 minutes due to reconnect back off timeouts.
> **IMPORTANT:** Remember to save your updated RKE config (`rancher-cluster.yml`), state file (`rancher-cluster.rkestate`), and `kubectl` credentials (`kube_config_rancher-cluster.yml`) files in a safe place for future maintenance, for example in a version control system.
@@ -0,0 +1,97 @@
---
title: Migrating Rancher to a New Cluster
weight: 3
---
If you are migrating Rancher to a new Kubernetes cluster, you don't need to install Rancher on the new cluster first. If Rancher is restored to a new cluster with Rancher already installed, it can cause problems.
### Prerequisites
These instructions assume you have [created a backup](../back-up-rancher) and you have already installed a new Kubernetes cluster where Rancher will be deployed.
You must use the same hostname that was set as the server URL in the first cluster.
The Rancher version must be v2.5.0 or higher.
Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes clusters such as Amazon EKS clusters. For help installing Kubernetes, refer to the documentation of the Kubernetes distribution. One of Rancher's Kubernetes distributions may also be used:
- [RKE Kubernetes installation docs]({{<baseurl>}}/rke/latest/en/installation/)
- [K3s Kubernetes installation docs]({{<baseurl>}}/k3s/latest/en/installation/)
### 1. Install the rancher-backup Helm chart
```
helm repo add rancher-charts https://charts.rancher.io
helm repo update
helm install rancher-backup-crd rancher-charts/rancher-backup-crd -n cattle-resources-system --create-namespace
helm install rancher-backup rancher-charts/rancher-backup -n cattle-resources-system
```
### 2. Restore from backup using a Restore custom resource
If you are using an S3 store as the backup source and need your S3 credentials for the restore, create a secret in this cluster with those credentials. The Secret data must have two keys, `accessKey` and `secretKey`, containing the S3 credentials, like this:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: s3-creds
type: Opaque
data:
accessKey: <Enter your access key>
secretKey: <Enter your secret key>
```
This secret can be created in any namespace; with the above example, it will be created in the `default` namespace.
In the Restore custom resource, `prune` must be set to false.
Create a Restore custom resource like the example below:
```yaml
# migrationResource.yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
name: restore-migration
spec:
backupFilename: backup-b0450532-cee1-4aa1-a881-f5f48a007b1c-2020-09-15T07-27-09Z.tar.gz
prune: false
encryptionConfigSecretName: encryptionconfig
storageLocation:
s3:
credentialSecretName: s3-creds
credentialSecretNamespace: default
bucketName: backup-test
folder: ecm1
region: us-west-2
endpoint: s3.us-west-2.amazonaws.com
```
> **Important:** The field `encryptionConfigSecretName` must be set only if your backup was created with encryption enabled. Provide the name of the Secret containing the encryption config file. If you only have the encryption config file, but don't have a secret created with it in this cluster, use the following steps to create the secret:
1. The encryption configuration file must be named `encryption-provider-config.yaml`, and the `--from-file` flag must be used to create this secret. So save your `EncryptionConfiguration` in a file called `encryption-provider-config.yaml` and run this command:
```
kubectl create secret generic encryptionconfig \
--from-file=./encryption-provider-config.yaml \
-n cattle-resources-system
```
Then apply the resource:
```
kubectl apply -f migrationResource.yaml
```
### 3. Install cert-manager
Follow the steps to [install cert-manager]({{<baseurl>}}/rancher/v2.x/en/installation/install-rancher-on-k8s/install/#5-install-cert-manager) in the documentation about installing cert-manager on Kubernetes.
### 4. Bring up Rancher with Helm
Use the same version of Helm to install Rancher that was used on the first cluster.
```
helm install rancher rancher-latest/rancher \
--namespace cattle-system \
  --set hostname=<same hostname as the server URL from the first Rancher server>
```
@@ -0,0 +1,52 @@
---
title: Restoring Rancher
weight: 2
---
A restore is performed by creating a Restore custom resource.
> **Important**
* Follow the instructions on this page to restore Rancher on the same cluster where it was backed up from. To migrate Rancher to a new cluster, follow the steps to [migrate Rancher.](../migrating-rancher)
* When restoring Rancher on the same setup, the operator scales down the Rancher deployment when the restore starts and scales it back up once the restore completes, so Rancher will be unavailable during the restore.
### Create the Restore Custom Resource
1. In the **Cluster Explorer,** go to the dropdown menu in the upper left corner and click **Rancher Backups.**
1. Click **Restore.**
1. Create the Restore with the form, or with YAML. For creating the Restore resource using form, refer to the [configuration reference](../configuration/restore-config) and to the [examples.](../examples/#restore)
1. To use the YAML editor, click **Create > Create from YAML** and enter the Restore YAML.
```yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
name: restore-migration
spec:
backupFilename: backup-b0450532-cee1-4aa1-a881-f5f48a007b1c-2020-09-15T07-27-09Z.tar.gz
encryptionConfigSecretName: encryptionconfig
storageLocation:
s3:
credentialSecretName: s3-creds
credentialSecretNamespace: default
bucketName: rancher-backups
folder: rancher
region: us-west-2
endpoint: s3.us-west-2.amazonaws.com
```
For help configuring the Restore, refer to the [configuration reference](../configuration/restore-config) and to the [examples.](../examples/#restore)
1. Click **Create.**
**Result:** The rancher-operator scales down the Rancher deployment during the restore and scales it back up once the restore completes. The resources are restored in this order:
1. Custom Resource Definitions (CRDs)
2. Cluster-scoped resources
3. Namespaced resources
To check how the restore is progressing, you can check the logs of the operator. Follow these steps to get the logs:
```
kubectl get pods -n cattle-resources-system
kubectl logs <pod name from above command> -n cattle-resources-system -f
```
@@ -1,6 +1,6 @@
---
title: Best Practices Guide
weight: 1000
weight: 4
---
The purpose of this section is to consolidate best practices for Rancher implementations. This also includes recommendations for related technologies, such as Kubernetes, Docker, containers, and more. The objective is to improve the outcome of a Rancher implementation using the operational experience of Rancher and its customers.
@@ -0,0 +1,236 @@
---
title: CIS Scans
weight: 18
---
_Available as of v2.4.0_
Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark.
The `rancher-cis-benchmark` app leverages <a href="https://github.com/aquasecurity/kube-bench" target="_blank">kube-bench,</a> an open-source tool from Aqua Security, to check clusters for CIS Kubernetes Benchmark compliance. Also, to generate a cluster-wide report, the application utilizes <a href="https://github.com/vmware-tanzu/sonobuoy" target="_blank">Sonobuoy</a> for report aggregation.
> The CIS scan feature was improved in Rancher v2.5. If you are using Rancher v2.4, refer to the older version of the CIS scan documentation [here.](./legacy)
- [Changes in Rancher v2.5](#changes-in-rancher-v2-5)
- [About the CIS Benchmark](#about-the-cis-benchmark)
- [Installing rancher-cis-benchmark](#installing-rancher-cis-benchmark)
- [Uninstalling rancher-cis-benchmark](#uninstalling-rancher-cis-benchmark)
- [Running a Scan](#running-a-scan)
- [Skipping Tests](#skipping-tests)
- [Viewing Reports](#viewing-reports)
- [About the generated report](#about-the-generated-report)
- [Test Profiles](#test-profiles)
- [About Skipped and Not Applicable Tests](#about-skipped-and-not-applicable-tests)
- [Roles-based access control](./rbac)
- [Configuration](./configuration)
### Changes in Rancher v2.5
We now support running CIS scans on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE. Previously, CIS scans could only be run on RKE Kubernetes clusters.
In Rancher v2.4, the CIS scan tool was available from the **cluster manager** in the Rancher UI. Now it is available in the **Cluster Explorer** and it can be enabled and deployed using a Helm chart. It can be installed from the Rancher UI, but it can also be installed independently of Rancher. It deploys a CIS scan operator for the cluster, and deploys Kubernetes custom resources for cluster scans. The custom resources can be managed directly from the **Cluster Explorer.**
In v1 of the CIS scan tool, which was available in Rancher v2.4 through the cluster manager, recurring scans could be scheduled. The ability to schedule recurring scans is not yet available in Rancher v2.5.
Support for alerting for the cluster scan results is not available for Rancher v2.5 yet.
More test profiles were added. In Rancher v2.4, permissive and hardened profiles were included. In Rancher v2.5, the following profiles are available:
- Generic CIS 1.5
- RKE permissive
- RKE hardened
- EKS
- GKE
The default profile depends on the type of cluster that will be scanned:
- For RKE Kubernetes clusters, the RKE permissive profile is the default.
- EKS and GKE have their own CIS Benchmarks published by `kube-bench`. The corresponding test profiles are used by default for those clusters.
- For cluster types other than RKE, EKS and GKE, the Generic CIS 1.5 profile will be used by default.
The `rancher-cis-benchmark` currently supports the CIS 1.5 Benchmark version.
> **Note:** CIS v1 cannot run on a cluster when CIS v2 is deployed. In other words, after `rancher-cis-benchmark` is installed, you can't run scans by going to the Cluster Manager view in the Rancher UI and clicking **Tools > CIS Scans.**
# About the CIS Benchmark
The Center for Internet Security is a 501(c)(3) nonprofit organization, formed in October 2000, with a mission to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace". The organization is headquartered in East Greenbush, New York, with members including large corporations, government agencies, and academic institutions.
CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team.
The official Benchmark documents are available through the CIS website. The sign-up form to access the documents is
<a href="https://learn.cisecurity.org/benchmarks" target="_blank">here.</a>
# Installing rancher-cis-benchmark
The application can be installed with the Rancher UI or with Helm.
### Installing with the Rancher UI
1. In the Rancher UI, go to the **Cluster Explorer.**
1. Click **Apps.**
1. Click `rancher-cis-benchmark`.
1. Click **Install.**
**Result:** The CIS scan application is deployed on the Kubernetes cluster.
### Installing with Helm
There are two Helm charts for the application:
- `rancher-cis-benchmark-crds`, the custom resource definition chart
- `rancher-cis-benchmark`, the chart deploying <a href="https://github.com/rancher/cis-operator" target="_blank">rancher/cis-operator</a>
To install the charts, run the following commands:
```
helm repo add rancherchart https://charts.rancher.io
helm repo update
helm install rancher-cis-benchmark-crd --kubeconfig <> rancherchart/rancher-cis-benchmark-crd --create-namespace -n cis-operator-system
helm install rancher-cis-benchmark --kubeconfig <> rancherchart/rancher-cis-benchmark -n cis-operator-system
```
# Uninstalling rancher-cis-benchmark
The application can be uninstalled with the Rancher UI or with Helm.
### Uninstalling with the Rancher UI
1. From the **Cluster Explorer,** go to the top left dropdown menu and click **Apps & Marketplace.**
1. Click **Installed Apps.**
1. Go to the `cis-operator-system` namespace and check the boxes next to `rancher-cis-benchmark-crd` and `rancher-cis-benchmark`.
1. Click **Delete** and confirm **Delete.**
**Result:** The `rancher-cis-benchmark` application is uninstalled.
### Uninstalling with Helm
Run the following commands:
```
helm uninstall rancher-cis-benchmark -n cis-operator-system
helm uninstall rancher-cis-benchmark-crd -n cis-operator-system
```
# Running a Scan
When a ClusterScan custom resource is created, it launches a new CIS scan on the cluster for the chosen ClusterScanProfile.
Note: Currently, only one CIS scan can run at a time on a cluster. If you create multiple ClusterScan custom resources, the operator runs them one after another, and the remaining ClusterScan custom resources stay in the "Pending" state until the running scan finishes.
To run a scan,
1. Go to the **Cluster Explorer** in the Rancher UI. In the top left dropdown menu, click **Cluster Explorer > CIS Benchmark.**
1. In the **Scans** section, click **Create.**
1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
1. Click **Create.**
**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears.
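The same scan can also be created without the UI by applying a ClusterScan custom resource, as described in the [configuration reference](#configuration). A minimal sketch, where the scan name is arbitrary and `rke-profile-hardened` stands in for any existing ClusterScanProfile on the cluster:
```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
  name: example-scan
spec:
  scanProfileName: rke-profile-hardened   # name of an existing ClusterScanProfile
```
After applying it with `kubectl apply -f <file>`, the report appears in the **Scans** section once the scan completes.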
# Skipping Tests
CIS scans can be run using test profiles with user-defined skips.
To skip tests, you will create a custom CIS scan profile. A profile contains the configuration for the CIS scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark.
1. In the **Cluster Explorer,** go to the top-left dropdown menu and click **CIS Benchmark.**
1. Click **Profiles.**
1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To make a new profile based on an existing profile, go to the existing profile, click the three vertical dots, and click **Clone as YAML.** If you are filling out the form, add the tests to skip using the test IDs, using the relevant CIS Benchmark as a reference. If you are creating the new test profile as YAML, you will add the IDs of the tests to skip in the `skipTests` directive. You will also give the profile a name:
```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScanProfile
metadata:
annotations:
meta.helm.sh/release-name: clusterscan-operator
meta.helm.sh/release-namespace: cis-operator-system
labels:
app.kubernetes.io/managed-by: Helm
name: "<example-profile>"
spec:
benchmarkVersion: cis-1.5
skipTests:
- "1.1.20"
- "1.1.21"
```
1. Click **Create.**
**Result:** A new CIS scan profile is created.
When you [run a scan](#running-a-scan) that uses this profile, the defined tests will be skipped during the scan. The skipped tests will be marked in the generated report as `Skip`.
# Viewing Reports
To view the generated CIS scan reports,
1. In the **Cluster Explorer,** go to the top left dropdown menu and click **Cluster Explorer > CIS Benchmark.**
1. The **Scans** page will show the generated reports. To see a detailed report, go to a scan report and click the name.
You can download the report from the Scans list or from the scan detail page.
# About the Generated Report
Each scan generates a report that can be viewed in the Rancher UI and downloaded in CSV format.
In Rancher v2.5, the scan will use the CIS Benchmark v1.5. The Benchmark version is included in the generated report.
The Benchmark provides recommendations of two types: Scored and Not Scored. Recommendations marked as Not Scored in the Benchmark are not included in the generated report.
Some tests are designated as "Not Applicable." These tests will not be run on any CIS scan because of the way that Rancher provisions RKE clusters. For information on how test results can be audited, and why some tests are designated to be not applicable, refer to Rancher's <a href="{{<baseurl>}}/rancher/v2.x/en/security/#the-cis-benchmark-and-self-assessment" target="_blank">self-assessment guide for the corresponding Kubernetes version.</a>
The report contains the following information:
| Column in Report | Description |
|------------------|-------------|
| `id` | The ID number of the CIS Benchmark. |
| `description` | The description of the CIS Benchmark test. |
| `remediation` | What needs to be fixed in order to pass the test. |
| `state` | Indicates if the test passed, failed, was skipped, or was not applicable. |
| `node_type` | The node role, which affects which tests are run on the node. Master tests are run on controlplane nodes, etcd tests are run on etcd nodes, and node tests are run on the worker nodes. |
| `audit` | This is the audit check that `kube-bench` runs for this test. |
| `audit_config` | Any configuration applicable to the audit script. |
| `test_info` | Test-related info as reported by `kube-bench`, if any. |
| `commands` | Test-related commands as reported by `kube-bench`, if any. |
| `config_commands` | Test-related configuration data as reported by `kube-bench`, if any. |
| `actual_value` | The test's actual value, present if reported by `kube-bench`. |
| `expected_result` | The test's expected result, present if reported by `kube-bench`. |
Refer to <a href="{{<baseurl>}}/rancher/v2.x/en/security/" target="_blank">the table in the cluster hardening guide</a> for information on which versions of Kubernetes, the Benchmark, Rancher, and our cluster hardening guide correspond to each other. Also refer to the hardening guide for configuration files of CIS-compliant clusters and information on remediating failed tests.
# Test Profiles
The following profiles are available:
- Generic CIS 1.5 (default)
- RKE permissive
- RKE hardened
- EKS
- GKE
You also have the ability to customize a profile by saving a set of tests to skip.
All profiles will have a set of not applicable tests that will be skipped during the CIS scan. These tests are not applicable based on how an RKE cluster manages Kubernetes.
There are 2 types of RKE cluster scan profiles:
- **Permissive:** This profile skips a set of tests that will fail on a default RKE Kubernetes cluster. Besides the list of skipped tests, the profile will also not run the not applicable tests.
- **Hardened:** This profile will not skip any tests, except for the non-applicable tests.
The EKS and GKE cluster scan profiles are based on CIS Benchmark versions that are specific to those types of clusters.
In order to pass the "Hardened" profile, you will need to follow the steps on the <a href="{{<baseurl>}}/rancher/v2.x/en/security/#rancher-hardening-guide" target="_blank">hardening guide</a> and use the `cluster.yml` defined in the hardening guide to provision a hardened cluster.
# About Skipped and Not Applicable Tests
For a list of skipped and not applicable tests, refer to <a href="{{<baseurl>}}/rancher/v2.x/en/cis-scans/skipped-tests" target="_blank">this page.</a>
For now, only user-defined skipped tests are marked as skipped in the generated report.
Tests that are skipped by one of the default profiles are marked as not applicable.
# Roles-based Access Control
For information about permissions, refer to <a href="{{<baseurl>}}/rancher/v2.x/en/cis-scans/rbac" target="_blank">this page.</a>
# Configuration
For more information about configuring the custom resources for the scans, profiles, and benchmark versions, refer to <a href="{{<baseurl>}}/rancher/v2.x/en/cis-scans/configuration" target="_blank">this page.</a>
@@ -0,0 +1,94 @@
---
title: Configuration
weight: 3
---
This configuration reference is intended to help you manage the custom resources created by the `rancher-cis-benchmark` application. These resources are used for performing CIS scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization.
To configure the custom resources, go to the **Cluster Explorer** in the Rancher UI. In dropdown menu in the top left corner, click **Cluster Explorer > CIS Benchmark.**
### Scans
A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed.
When configuring a scan, you need to define the name of the scan profile that will be used with the `scanProfileName` directive.
An example ClusterScan custom resource is below:
```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
name: rke-cis
spec:
scanProfileName: rke-profile-hardened
```
### Profiles
A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark.
> By default, a few ClusterScanProfiles are installed as part of the `rancher-cis-benchmark` chart. If a user edits these default benchmarks or profiles, the next chart update will reset them back. So it is advisable for users to not edit the default ClusterScanProfiles.
Users can clone the ClusterScanProfiles to create custom profiles.
Skipped tests are listed under the `skipTests` directive.
When you create a new profile, you will also need to give it a name.
An example `ClusterScanProfile` is below:
```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScanProfile
metadata:
annotations:
meta.helm.sh/release-name: clusterscan-operator
meta.helm.sh/release-namespace: cis-operator-system
labels:
app.kubernetes.io/managed-by: Helm
name: "<example-profile>"
spec:
benchmarkVersion: cis-1.5
skipTests:
- "1.1.20"
- "1.1.21"
```
### Benchmark Versions
A benchmark version is the name of the benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark.
A `ClusterScanBenchmark` defines the CIS `BenchmarkVersion` name and test configurations. The `BenchmarkVersion` name is a parameter provided to the `kube-bench` tool.
By default, a few `BenchmarkVersion` names and test configurations are packaged as part of the CIS scan application. When this feature is enabled, these default BenchmarkVersions will be automatically installed and available for users to create a ClusterScanProfile.
> If the default BenchmarkVersions are edited, the next chart update will reset them back. Therefore we don't recommend editing the default ClusterScanBenchmarks.
A ClusterScanBenchmark consists of the fields:
- `ClusterProvider`: This is the cluster provider name for which this benchmark is applicable. For example: RKE, EKS, GKE, etc. Leave it empty if this benchmark can be run on any cluster type.
- `MinKubernetesVersion`: Specifies the cluster's minimum Kubernetes version necessary to run this benchmark. Leave it empty if there is no dependency on a particular Kubernetes version.
- `MaxKubernetesVersion`: Specifies the cluster's maximum Kubernetes version necessary to run this benchmark. Leave it empty if there is no dependency on a particular Kubernetes version.
An example `ClusterScanBenchmark` is below:
```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScanBenchmark
metadata:
annotations:
meta.helm.sh/release-name: clusterscan-operator
meta.helm.sh/release-namespace: cis-operator-system
creationTimestamp: "2020-08-28T18:18:07Z"
generation: 1
labels:
app.kubernetes.io/managed-by: Helm
name: cis-1.5
resourceVersion: "203878"
selfLink: /apis/cis.cattle.io/v1/clusterscanbenchmarks/cis-1.5
uid: 309e543e-9102-4091-be91-08d7af7fb7a7
spec:
clusterProvider: ""
minKubernetesVersion: 1.15.0
```
@@ -0,0 +1,157 @@
---
title: Cluster Manager CIS Scan (Deprecated)
shortTitle: Cluster Manager
weight: 1
---
_Available as of v2.4.0_
This section contains the legacy documentation for the CIS Scan tool that was released in Rancher v2.4, and was available under the **Tools** menu in the top navigation bar of the cluster manager.
As of Rancher v2.5, it is deprecated and replaced with the `rancher-cis-benchmark` application.
- [Prerequisites](#prerequisites)
- [Running a scan](#running-a-scan)
- [Scheduling recurring scans](#scheduling-recurring-scans)
- [Skipping tests](#skipping-tests)
- [Setting alerts](#setting-alerts)
- [Deleting a report](#deleting-a-report)
- [Downloading a report](#downloading-a-report)
- [List of skipped and not applicable tests](#list-of-skipped-and-not-applicable-tests)
# Prerequisites
To run security scans on a cluster and access the generated reports, you must be an [Administrator]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [Cluster Owner.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/)
Rancher can only run security scans on clusters that were created with RKE, which includes custom clusters and clusters that Rancher created in an infrastructure provider such as Amazon EC2 or GCE. Imported clusters and clusters in hosted Kubernetes providers can't be scanned by Rancher.
The security scan cannot run in a cluster that has Windows nodes.
You will only be able to see the CIS scan reports for clusters that you have access to.
# Running a Scan
1. From the cluster view in Rancher, click **Tools > CIS Scans.**
1. Click **Run Scan.**
1. Choose a CIS scan profile.
**Result:** A report is generated and displayed in the **CIS Scans** page. To see details of the report, click the report's name.
# Scheduling Recurring Scans
Recurring scans can be scheduled to run on any RKE Kubernetes cluster.
To enable recurring scans, edit the advanced options in the cluster configuration during cluster creation or after the cluster has been created.
To schedule scans for an existing cluster:
1. Go to the cluster view in Rancher.
1. Click **Tools > CIS Scans.**
1. Click **Add Schedule.** This takes you to the section of the cluster editing page that is applicable to configuring a schedule for CIS scans. (This section can also be reached by going to the cluster view, clicking **&#8942; > Edit,** and going to the **Advanced Options.**)
1. In the **CIS Scan Enabled** field, click **Yes.**
1. In the **CIS Scan Profile** field, choose a **Permissive** or **Hardened** profile. The corresponding CIS Benchmark version is included in the profile name. Note: Any skipped tests [defined in a separate ConfigMap](#skipping-tests) will be skipped regardless of whether a **Permissive** or **Hardened** profile is selected. When selecting the permissive profile, you should see which tests were skipped by Rancher (tests that are skipped by default for RKE clusters) and which tests were skipped by a Rancher user. In the hardened test profile, the only skipped tests will be those skipped by users.
1. In the **CIS Scan Interval (cron)** job, enter a [cron expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) to define how often the cluster will be scanned.
1. In the **CIS Scan Report Retention** field, enter the number of past reports that should be kept.
**Result:** The security scan will run and generate reports at the scheduled intervals.
The test schedule can be configured in the `cluster.yml`:
```yaml
scheduled_cluster_scan:
  enabled: true
  scan_config:
    cis_scan_config:
      override_benchmark_version: rke-cis-1.4
      profile: permissive
  schedule_config:
    cron_schedule: 0 0 * * *
    retention: 24
```
# Skipping Tests
You can define a set of tests that will be skipped by the CIS scan when the next report is generated.
These tests will be skipped for subsequent CIS scans, including both manually triggered and scheduled scans, and the tests will be skipped with any profile.
The skipped tests will be listed alongside the test profile name in the cluster configuration options when a test profile is selected for a recurring cluster scan. The skipped tests will also be shown every time a scan is triggered manually from the Rancher UI by clicking **Run Scan.** The display of skipped tests allows you to know ahead of time which tests will be run in each scan.
To skip tests, you will need to define them in a Kubernetes ConfigMap resource. Each skipped CIS scan test is listed in the ConfigMap alongside the version of the CIS benchmark that the test belongs to.
To skip tests by editing a ConfigMap resource,
1. Create a `security-scan` namespace.
1. Create a ConfigMap named `security-scan-cfg`.
1. Enter the skip information under the key `config.json` in the following format:
```json
{
"skip": {
"rke-cis-1.4": [
"1.1.1",
"1.2.2"
]
}
}
```
In the example above, the CIS benchmark version is specified alongside the tests to be skipped for that version.
**Result:** These tests will be skipped on subsequent scans that use the defined CIS Benchmark version.
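A sketch of those steps with `kubectl`, assuming the JSON above was saved to a local file named `config.json`:
```
kubectl create namespace security-scan
kubectl create configmap security-scan-cfg \
  -n security-scan \
  --from-file=config.json
```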
# Setting Alerts
Rancher provides a set of alerts for cluster scans, which are not configured to have notifiers by default:
- A manual cluster scan was completed
- A manual cluster scan has failures
- A scheduled cluster scan was completed
- A scheduled cluster scan has failures
> **Prerequisite:** You need to configure a [notifier]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) before configuring, sending, or receiving alerts.
To activate an existing alert for a CIS scan result,
1. From the cluster view in Rancher, click **Tools > Alerts.**
1. Go to the section called **A set of alerts for cluster scans.**
1. Go to the alert you want to activate and click **&#8942; > Activate.**
1. Go to the alert rule group **A set of alerts for cluster scans** and click **&#8942; > Edit.**
1. Scroll down to the **Alert** section. In the **To** field, select the notifier that you would like to use for sending alert notifications.
1. Optional: To limit the frequency of the notifications, click on **Show advanced options** and configure the time interval of the alerts.
1. Click **Save.**
**Result:** The notifications will be triggered when a scan is run on a cluster and the conditions of the active alerts are satisfied.
To create a new alert,
1. Go to the cluster view and click **Tools > CIS Scans.**
1. Click **Add Alert.**
1. Fill out the form.
1. Enter a name for the alert.
1. In the **Is** field, set the alert to be triggered when a scan is completed or when a scan has a failure.
1. In the **Send a** field, set the alert as a **Critical,** **Warning,** or **Info** alert level.
1. Choose a [notifier]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) for the alert.
**Result:** The alert is created and activated. The notifications will be triggered when a scan is run on a cluster and the conditions of the active alerts are satisfied.
For more information about alerts, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/alerts/)
# Deleting a Report
1. From the cluster view in Rancher, click **Tools > CIS Scans.**
1. Go to the report that should be deleted.
1. Click the **&#8942; > Delete.**
1. Click **Delete.**
# Downloading a Report
1. From the cluster view in Rancher, click **Tools > CIS Scans.**
1. Go to the report that you want to download. Click **&#8942; > Download.**
**Result:** The report is downloaded in CSV format. For more information on each column, refer to the [section about the generated report.](#about-the-generated-report)
# List of Skipped and Not Applicable Tests
For a list of skipped and not applicable tests, refer to <a href="{{<baseurl>}}/rancher/v2.x/en/cis-scans/legacy/skipped-tests" target="_blank">this page.</a>
@@ -0,0 +1,105 @@
---
title: Skipped and Not Applicable Tests
weight: 1
---
This section lists the tests that are skipped in the permissive test profile for RKE.
All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile.
- [CIS Benchmark v1.5](#cis-benchmark-v1-5)
- [CIS Benchmark v1.4](#cis-benchmark-v1-4)
# CIS Benchmark v1.5
### CIS Benchmark v1.5 Skipped Tests
| Number | Description | Reason for Skipping |
| ---------- | ------------- | --------- |
| 1.1.12 | Ensure that the etcd data directory ownership is set to etcd:etcd (Scored) | A system service account is required for etcd data directory ownership. Refer to Rancher's hardening guide for more details on how to configure this ownership. |
| 1.2.6 | Ensure that the --kubelet-certificate-authority argument is set as appropriate (Scored) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
| 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
| 1.2.34 | Ensure that encryption providers are appropriately configured (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true. |
| 4.2.10 | Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
| 5.1.5 | Ensure that default service accounts are not actively used. (Scored) | Kubernetes provides default service accounts to be used. |
| 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 5.2.3 | Minimize the admission of containers wishing to share the host IPC namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 5.2.4 | Minimize the admission of containers wishing to share the host network namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 5.2.5 | Minimize the admission of containers with allowPrivilegeEscalation (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 5.3.2 | Ensure that all Namespaces have Network Policies defined (Scored) | Enabling Network Policies can prevent certain applications from communicating with each other. |
| 5.6.4 | The default namespace should not be used (Scored) | Kubernetes provides a default namespace. |
### CIS Benchmark v1.5 Not Applicable Tests
| Number | Description | Reason for being not applicable |
| ---------- | ------------- | --------- |
| 1.1.1 | Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. |
| 1.1.2 | Ensure that the API server pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. |
| 1.1.3 | Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
| 1.1.4 | Ensure that the controller manager pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
| 1.1.5 | Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
| 1.1.6 | Ensure that the scheduler pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
| 1.1.7 | Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
| 1.1.8 | Ensure that the etcd pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
| 1.1.13 | Ensure that the admin.conf file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. |
| 1.1.14 | Ensure that the admin.conf file ownership is set to root:root (Scored) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. |
| 1.1.15 | Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
| 1.1.16 | Ensure that the scheduler.conf file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
| 1.1.17 | Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
| 1.1.18 | Ensure that the controller-manager.conf file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
| 1.3.6 | Ensure that the RotateKubeletServerCertificate argument is set to true (Scored) | Clusters provisioned by RKE handle certificate rotation directly through RKE. |
| 4.1.1 | Ensure that the kubelet service file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
| 4.1.2 | Ensure that the kubelet service file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
| 4.1.9 | Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
| 4.1.10 | Ensure that the kubelet configuration file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
| 4.2.12 | Ensure that the RotateKubeletServerCertificate argument is set to true (Scored) | Clusters provisioned by RKE handle certificate rotation directly through RKE. |
# CIS Benchmark v1.4
The skipped and not applicable tests for CIS Benchmark v1.4 are as follows:
### CIS Benchmark v1.4 Skipped Tests
Number | Description | Reason for Skipping
---|---|---
1.1.11 | "Ensure that the admission control plugin AlwaysPullImages is set (Scored)" | Enabling AlwaysPullImages can use significant bandwidth.
1.1.21 | "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Scored)" | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.
1.1.24 | "Ensure that the admission control plugin PodSecurityPolicy is set (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
1.1.34 | "Ensure that the --encryption-provider-config argument is set as appropriate (Scored)" | Enabling encryption changes how data can be recovered as data is encrypted.
1.1.35 | "Ensure that the encryption provider is set to aescbc (Scored)" | Enabling encryption changes how data can be recovered as data is encrypted.
1.1.36 | "Ensure that the admission control plugin EventRateLimit is set (Scored)" | EventRateLimit needs to be tuned depending on the cluster.
1.2.2 | "Ensure that the --address argument is set to 127.0.0.1 (Scored)" | Adding this argument prevents Rancher's monitoring tool to collect metrics on the scheduler.
1.3.7 | "Ensure that the --address argument is set to 127.0.0.1 (Scored)" | Adding this argument prevents Rancher's monitoring tool to collect metrics on the controller manager.
1.4.12 | "Ensure that the etcd data directory ownership is set to etcd:etcd (Scored)" | A system service account is required for etcd data directory ownership. Refer to Rancher's hardening guide for more details on how to configure this ownership.
1.7.2 | "Do not admit containers wishing to share the host process ID namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
1.7.3 | "Do not admit containers wishing to share the host IPC namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
1.7.4 | "Do not admit containers wishing to share the host network namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
1.7.5 | " Do not admit containers with allowPrivilegeEscalation (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
2.1.6 | "Ensure that the --protect-kernel-defaults argument is set to true (Scored)" | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true.
2.1.10 | "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored)" | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.
### CIS Benchmark v1.4 Not Applicable Tests
Number | Description | Reason for being not applicable
---|---|---
1.1.9 | "Ensure that the --repair-malformed-updates argument is set to false (Scored)" | The argument --repair-malformed-updates has been removed as of Kubernetes version 1.14
1.3.6 | "Ensure that the RotateKubeletServerCertificate argument is set to true" | Cluster provisioned by RKE handles certificate rotation directly through RKE.
1.4.1 | "Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
1.4.2 | "Ensure that the API server pod specification file ownership is set to root:root (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver.
1.4.3 | "Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
1.4.4 | "Ensure that the controller manager pod specification file ownership is set to root:root (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager.
1.4.5 | "Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
1.4.6 | "Ensure that the scheduler pod specification file ownership is set to root:root (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler.
1.4.7 | "Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
1.4.8 | "Ensure that the etcd pod specification file ownership is set to root:root (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd.
1.4.13 | "Ensure that the admin.conf file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes.
1.4.14 | "Ensure that the admin.conf file ownership is set to root:root (Scored)" | Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes.
2.1.8 | "Ensure that the --hostname-override argument is not set (Scored)" | Clusters provisioned by RKE clusters and most cloud providers require hostnames.
2.1.12 | "Ensure that the --rotate-certificates argument is not set to false (Scored)" | Cluster provisioned by RKE handles certificate rotation directly through RKE.
2.1.13 | "Ensure that the RotateKubeletServerCertificate argument is set to true (Scored)" | Cluster provisioned by RKE handles certificate rotation directly through RKE.
2.2.3 | "Ensure that the kubelet service file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE doesnt require or maintain a configuration file for the kubelet service.
2.2.4 | "Ensure that the kubelet service file ownership is set to root:root (Scored)" | Cluster provisioned by RKE doesnt require or maintain a configuration file for the kubelet service.
2.2.9 | "Ensure that the kubelet configuration file ownership is set to root:root (Scored)" | RKE doesnt require or maintain a configuration file for the kubelet.
2.2.10 | "Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Scored)" | RKE doesnt require or maintain a configuration file for the kubelet.
@@ -0,0 +1,45 @@
---
title: Role-based Access Control
shortTitle: RBAC
weight: 3
---
This section describes the permissions required to use the rancher-cis-benchmark App.
The rancher-cis-benchmark is a cluster-admin only feature by default.
However, the `rancher-cis-benchmark` chart installs three default `ClusterRoles`:
- cis-admin
- cis-edit
- cis-view
In Rancher, only cluster owners and global administrators have `cis-admin` access by default.
# Cluster-Admin Access
Rancher CIS Scans is a cluster-admin only feature by default.
This means that only Rancher global admins and the cluster's cluster-owner can:
- Install/Uninstall the rancher-cis-benchmark App
- See the navigation links for CIS Benchmark CRDs - ClusterScanBenchmarks, ClusterScanProfiles, ClusterScans
- List the default ClusterScanBenchmarks and ClusterScanProfiles
- Create/Edit/Delete new ClusterScanProfiles
- Create/Edit/Delete a new ClusterScan to run the CIS scan on the cluster (a minimal example follows this list)
- View and Download the ClusterScanReport created after the ClusterScan is complete
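For reference, a ClusterScan can also be created directly as a Kubernetes resource rather than through the UI. The sketch below assumes the `rancher-cis-benchmark` chart is installed and that a ClusterScanProfile named `rke-profile-permissive` exists; profile names vary by chart version, so adjust accordingly.

```
# Minimal example: run a CIS scan by creating a ClusterScan resource
cat <<EOF | kubectl apply -f -
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
  name: rke-cis-scan
spec:
  scanProfileName: rke-profile-permissive   # assumed profile name; list available profiles first
EOF

# Watch the scan and fetch the generated report once it completes
kubectl get clusterscans
kubectl get clusterscanreports
```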
# Summary of Default Permissions for Kubernetes Default Roles
The rancher-cis-benchmark creates three `ClusterRoles` and adds the CIS Benchmark CRD access to the following default K8s `ClusterRoles`:
| ClusterRole created by chart | Default K8s ClusterRole | Permissions given with Role |
| ------------------------------ | --------------------------- | --------------------------- |
| `cis-admin` | `admin` | Ability to CRUD clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CRs |
| `cis-edit` | `edit` | Ability to CRUD clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CRs |
| `cis-view` | `view` | Ability to List (read) clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CRs |
By default, only the cluster-owner role has the ability to manage and use the `rancher-cis-benchmark` feature.
The other Rancher roles (cluster-member, project-owner, project-member) do not have default permissions to manage and use rancher-cis-benchmark resources.
However, if a cluster-owner wants to delegate access to other users, they can do so by manually creating ClusterRoleBindings between those users and the CIS ClusterRoles.
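For example, a cluster-owner could grant a single user read-only access to the CIS resources by binding them to the `cis-view` ClusterRole. This is only a sketch; the user name `jane@example.com` is a placeholder and must match an identity that already has access to the cluster.

```
# Bind an existing user to the cis-view ClusterRole created by the chart
kubectl create clusterrolebinding cis-view-jane \
  --clusterrole=cis-view \
  --user=jane@example.com
```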
@@ -0,0 +1,54 @@
---
title: Skipped and Not Applicable Tests
weight: 3
---
This section lists the tests that are skipped in the permissive test profile for RKE.
> All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile.
# CIS Benchmark v1.5
### CIS Benchmark v1.5 Skipped Tests
| Number | Description | Reason for Skipping |
| ---------- | ------------- | --------- |
| 1.1.12 | Ensure that the etcd data directory ownership is set to etcd:etcd (Scored) | A system service account is required for etcd data directory ownership. Refer to Rancher's hardening guide for more details on how to configure this ownership. |
| 1.2.6 | Ensure that the --kubelet-certificate-authority argument is set as appropriate (Scored) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
| 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
| 1.2.34 | Ensure that encryption providers are appropriately configured (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true. |
| 4.2.10 | Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
| 5.1.5 | Ensure that default service accounts are not actively used. (Scored) | Kubernetes provides default service accounts to be used. |
| 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 5.2.3 | Minimize the admission of containers wishing to share the host IPC namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 5.2.4 | Minimize the admission of containers wishing to share the host network namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 5.2.5 | Minimize the admission of containers with allowPrivilegeEscalation (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 5.3.2 | Ensure that all Namespaces have Network Policies defined (Scored) | Enabling Network Policies can prevent certain applications from communicating with each other. |
| 5.6.4 | The default namespace should not be used (Scored) | Kubernetes provides a default namespace. |
### CIS Benchmark v1.5 Not Applicable Tests
| Number | Description | Reason for being not applicable |
| ---------- | ------------- | --------- |
| 1.1.1 | Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. |
| 1.1.2 | Ensure that the API server pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. |
| 1.1.3 | Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
| 1.1.4 | Ensure that the controller manager pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
| 1.1.5 | Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
| 1.1.6 | Ensure that the scheduler pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
| 1.1.7 | Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
| 1.1.8 | Ensure that the etcd pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
| 1.1.13 | Ensure that the admin.conf file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. |
| 1.1.14 | Ensure that the admin.conf file ownership is set to root:root (Scored) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. |
| 1.1.15 | Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
| 1.1.16 | Ensure that the scheduler.conf file ownership is set to root:root (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
| 1.1.17 | Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
| 1.1.18 | Ensure that the controller-manager.conf file ownership is set to root:root (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
| 1.3.6 | Ensure that the RotateKubeletServerCertificate argument is set to true (Scored) | Clusters provisioned by RKE handle certificate rotation directly through RKE. |
| 4.1.1 | Ensure that the kubelet service file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
| 4.1.2 | Ensure that the kubelet service file ownership is set to root:root (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
| 4.1.9 | Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
| 4.1.10 | Ensure that the kubelet configuration file ownership is set to root:root (Scored) | Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
| 4.2.12 | Ensure that the RotateKubeletServerCertificate argument is set to true (Scored) | Clusters provisioned by RKE handle certificate rotation directly through RKE. |
+1 -1
View File
@@ -3,7 +3,7 @@ title: Using the Rancher Command Line Interface
description: The Rancher CLI is a unified tool that you can use to interact with Rancher. With it, you can operate Rancher using a command line interface rather than the GUI
metaTitle: "Using the Rancher Command Line Interface "
metaDescription: "The Rancher CLI is a unified tool that you can use to interact with Rancher. With it, you can operate Rancher using a command line interface rather than the GUI"
weight: 6000
weight: 21
---
The Rancher CLI (Command Line Interface) is a unified tool that you can use to interact with Rancher. With this tool, you can operate Rancher using a command line rather than the GUI.
@@ -1,6 +1,6 @@
---
title: Cluster Administration
weight: 2005
weight: 8
---
After you provision a cluster in Rancher, you can begin using powerful Kubernetes features to deploy and scale your containerized applications in development, testing, or production environments.
@@ -24,7 +24,7 @@ For attaching existing persistent storage to a cluster, the cloud provider does
The overall workflow for setting up existing storage is as follows:
1. Set up persistent storage in an infrastructure provider.
1. Set up your persistent storage. This may be storage in an infrastructure provider, or it could be your own storage.
2. Add a persistent volume (PV) that refers to the persistent storage.
3. Add a persistent volume claim (PVC) that refers to the PV.
4. Mount the PVC as a volume in your workload.
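As a rough illustration of steps 2–4 (shown here with plain manifests rather than the Rancher UI), the sketch below maps a pre-provisioned NFS export to a PersistentVolume and binds a PersistentVolumeClaim to it. The NFS server address, export path, and sizes are placeholders.

```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.5       # placeholder NFS server
    path: /exports/data    # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-nfs-pvc
  namespace: default
spec:
  storageClassName: ""     # bind statically instead of using a default StorageClass
  volumeName: existing-nfs-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
```

The claim can then be mounted as a volume in a workload spec, which completes step 4.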
@@ -35,12 +35,22 @@ For details and prerequisites, refer to [this page.](./attaching-existing-storag
The overall workflow for provisioning new storage is as follows:
1. Add a storage class and configure it to use your storage provider.
1. Add a StorageClass and configure it to use your storage provider. The StorageClass could refer to storage in an infrastructure provider, or it could refer to your own storage.
2. Add a persistent volume claim (PVC) that refers to the storage class.
3. Mount the PVC as a volume for your workload.
For details and prerequisites, refer to [this page.](./provisioning-new-storage)
### Longhorn Storage
[Longhorn](https://longhorn.io/) is a lightweight, reliable and easy-to-use distributed block storage system for Kubernetes.
Longhorn is free, open source software. Originally developed by Rancher Labs, it is now being developed as a sandbox project of the Cloud Native Computing Foundation. It can be installed on any Kubernetes cluster with Helm, with kubectl, or with the Rancher UI.
If you have a pool of block storage, Longhorn can help you provide persistent storage to your Kubernetes cluster without relying on cloud providers. For more information about Longhorn features, refer to the [documentation.](https://longhorn.io/docs/1.0.2/what-is-longhorn/)
Rancher v2.5 simplified the process of installing Longhorn on a Rancher-managed cluster. For more information, see [this page.]({{<baseurl>}}/rancher/v2.x/en/longhorn)
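For example, outside of the Rancher UI, a Helm-based install of Longhorn looks roughly like the following. The chart repository and namespace shown are the ones published by the Longhorn project, but check the Longhorn documentation for the current instructions.

```
# Install Longhorn from its Helm chart repository into its own namespace
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --create-namespace
```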
### Provisioning Storage Examples
We provide examples of how to provision storage with [NFS,](./examples/nfs) [vSphere,](./examples/vsphere) and [Amazon's EBS.](./examples/ebs)
@@ -9,7 +9,7 @@ This section describes how to set up existing persistent storage for workloads i
To set up storage, follow these steps:
1. [Set up persistent storage in an infrastructure provider.](#1-set-up-persistent-storage-in-an-infrastructure-provider)
1. [Set up persistent storage.](#1-set-up-persistent-storage)
2. [Add a persistent volume that refers to the persistent storage.](#2-add-a-persistent-volume-that-refers-to-the-persistent-storage)
3. [Add a persistent volume claim that refers to the persistent volume.](#3-add-a-persistent-volume-claim-that-refers-to-the-persistent-volume)
4. [Mount the persistent volume claim as a volume in your workload.](#4-mount-the-persistent-storage-claim-as-a-volume-in-your-workload)
@@ -19,11 +19,13 @@ To set up storage, follow these steps:
- To create a persistent volume as a Kubernetes resource, you must have the `Manage Volumes` [role.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-role-reference)
- If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider.
### 1. Set up persistent storage in an infrastructure provider
### 1. Set up persistent storage
Creating a persistent volume in Rancher will not create a storage volume. It only creates a Kubernetes resource that maps to an existing volume. Therefore, before you can create a persistent volume as a Kubernetes resource, you must have storage provisioned.
The steps to set up a persistent storage device will differ based on your infrastructure. We provide examples of how to set up storage using [vSphere,](../examples/vsphere) [NFS,](../examples/nfs) or Amazon's [EBS.](../examples/ebs)
The steps to set up a persistent storage device will differ based on your infrastructure. We provide examples of how to set up storage using [vSphere,](../examples/vsphere) [NFS,](../examples/nfs) or Amazon's [EBS.](../examples/ebs)
If you have a pool of block storage, and you don't want to use a cloud provider, Longhorn could help you provide persistent storage to your Kubernetes cluster. For more information, see [this page.]({{<baseurl>}}/rancher/v2.x/en/longhorn)
### 2. Add a persistent volume that refers to the persistent storage
@@ -5,11 +5,15 @@ weight: 2
This section describes how to provision new persistent storage for workloads in Rancher.
> This section assumes that you understand the Kubernetes concepts of storage classes and persistent volume claims. For more information, refer to the section on [how storage works.](../how-storage-works)
This section assumes that you understand the Kubernetes concepts of storage classes and persistent volume claims. For more information, refer to the section on [how storage works.](../how-storage-works)
New storage is often provisioned by a cloud provider such as Amazon EBS. However, new storage doesn't have to be in the cloud.
If you have a pool of block storage, and you don't want to use a cloud provider, Longhorn could help you provide persistent storage to your Kubernetes cluster. For more information, see [this page.]({{<baseurl>}}/rancher/v2.x/en/longhorn)
To provision new storage for your workloads, follow these steps:
1. [Add a storage class and configure it to use your storage provider.](#1-add-a-storage-class-and-configure-it-to-use-your-storage-provider)
1. [Add a storage class and configure it to use your storage.](#1-add-a-storage-class-and-configure-it-to-use-your-storage)
2. [Add a persistent volume claim that refers to the storage class.](#2-add-a-persistent-volume-claim-that-refers-to-the-storage-class)
3. [Mount the persistent volume claim as a volume for your workload.](#3-mount-the-persistent-volume-claim-as-a-volume-for-your-workload)
@@ -36,7 +40,7 @@ hostPath | `host-path`
To use a storage provisioner that is not on the above list, you will need to use a [feature flag to enable unsupported storage drivers.]({{<baseurl>}}/rancher/v2.x/en/installation/options/feature-flags/enable-not-default-storage-drivers/)
### 1. Add a storage class and configure it to use your storage provider
### 1. Add a storage class and configure it to use your storage
These steps describe how to set up a storage class at the cluster level.
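As a sketch of this step with a plain manifest, the StorageClass below uses the in-tree AWS EBS provisioner (`kubernetes.io/aws-ebs`) as an example; the provisioner and parameters are placeholders that you would replace with the values for your own storage.

```
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-ebs
provisioner: kubernetes.io/aws-ebs   # example provisioner; use the one for your storage
parameters:
  type: gp2                          # EBS volume type (placeholder)
  fsType: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF
```

A PersistentVolumeClaim that names this StorageClass will then trigger dynamic provisioning when it is created.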
@@ -1,7 +1,7 @@
---
title: Setting up Kubernetes Clusters in Rancher
description: Provisioning Kubernetes Clusters
weight: 2000
weight: 7
aliases:
- /rancher/v2.x/en/concepts/clusters/
- /rancher/v2.x/en/concepts/clusters/cluster-providers/
@@ -1,6 +1,6 @@
---
title: Setting up Clusters from Hosted Kubernetes Providers
weight: 2100
weight: 3
---
In this scenario, Rancher does not provision Kubernetes because it is installed by providers such as Google Kubernetes Engine (GKE), Amazon Elastic Container Service for Kubernetes, or Azure Kubernetes Service.
@@ -8,13 +8,24 @@ aliases:
Amazon EKS provides a managed control plane for your Kubernetes cluster. Amazon EKS runs the Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Rancher provides an intuitive user interface for managing and deploying the Kubernetes clusters you run in Amazon EKS. With this guide, you will use Rancher to quickly and easily launch an Amazon EKS Kubernetes cluster in your AWS account. For more information on Amazon EKS, see this [documentation](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html).
- [Prerequisites in Amazon Web Services](#prerequisites-in-amazon-web-services)
- [Amazon VPC](#amazon-vpc)
- [IAM Policies](#iam-policies)
- [Architecture](#architecture)
- [Create the EKS Cluster](#create-the-eks-cluster)
- [EKS Cluster Configuration Reference](#eks-cluster-configuration-reference)
- [Troubleshooting](#troubleshooting)
- [AWS Service Events](#aws-service-events)
- [Security and Compliance](#security-and-compliance)
- [Tutorial](#tutorial)
- [Minimum EKS Permissions](#minimum-eks-permissions)
## Prerequisites in Amazon Web Services
# Prerequisites in Amazon Web Services
>**Note**
>Deploying to Amazon AWS will incur charges. For more information, refer to the [EKS pricing page](https://aws.amazon.com/eks/pricing/).
To set up a cluster on EKS, you will need to set up an Amazon VPC (Virtual Private Cloud). You will also need to make sure that the account you will be using to create the EKS cluster has the appropriate permissions. For details, refer to the official guide on [Amazon EKS Prerequisites](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-prereqs).
To set up a cluster on EKS, you will need to set up an Amazon VPC (Virtual Private Cloud). You will also need to make sure that the account you will be using to create the EKS cluster has the appropriate [permissions.](#minimum-eks-permissions) For details, refer to the official guide on [Amazon EKS Prerequisites](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-prereqs).
### Amazon VPC
@@ -26,7 +37,7 @@ Rancher needs access to your AWS account in order to provision and administer yo
1. Create a user with programmatic access by following the steps [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html).
2. Next, create an IAM policy that defines what this user has access to in your AWS account. The required permissions are [here.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/#appendix-minimum-eks-permissions) Follow the steps [here](https://docs.aws.amazon.com/eks/latest/userguide/EKS_IAM_user_policies.html) to create an IAM policy and attach it to your user.
2. Next, create an IAM policy that defines what this user has access to in your AWS account. It's important to only grant this user minimal access within your account. The minimum permissions required for an EKS cluster are listed [here.](#minimum-eks-permissions) Follow the steps [here](https://docs.aws.amazon.com/eks/latest/userguide/EKS_IAM_user_policies.html) to create an IAM policy and attach it to your user.
3. Finally, follow the steps [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) to create an access key and secret key for this user.
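If you prefer the AWS CLI over the console, the three steps above look roughly like the following. The user name, policy name, account ID, and policy file are placeholders; the policy document would contain the minimum EKS permissions described later on this page.

```
# 1. Create a user with programmatic access
aws iam create-user --user-name rancher-eks-provisioner

# 2. Create an IAM policy from a JSON file and attach it to the user
aws iam create-policy \
  --policy-name rancher-eks-minimum \
  --policy-document file://eks-minimum-permissions.json
aws iam attach-user-policy \
  --user-name rancher-eks-provisioner \
  --policy-arn arn:aws:iam::111122223333:policy/rancher-eks-minimum

# 3. Create an access key and secret key for the user
aws iam create-access-key --user-name rancher-eks-provisioner
```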
@@ -34,13 +45,15 @@ Rancher needs access to your AWS account in order to provision and administer yo
For more detailed information on IAM policies for EKS, refer to the official [documentation on Amazon EKS IAM Policies, Roles, and Permissions](https://docs.aws.amazon.com/eks/latest/userguide/IAM_policies.html).
## Architecture
# Architecture
The figure below illustrates the high-level architecture of Rancher 2.x. The figure depicts a Rancher Server installation that manages two Kubernetes clusters: one created by RKE and another created by EKS.
![Rancher architecture with EKS hosted cluster]({{<baseurl>}}/img/rancher/rancher-architecture.svg)
<figcaption>Managing Kubernetes Clusters through Rancher's Authentication Proxy</figcaption>
## Create the EKS Cluster
![Architecture]({{<baseurl>}}/img/rancher/rancher-architecture-rancher-api-server.svg)
# Create the EKS Cluster
Use Rancher to set up and configure your Kubernetes cluster.
@@ -48,120 +61,279 @@ Use Rancher to set up and configure your Kubernetes cluster.
1. Choose **Amazon EKS**.
1. Enter a **Cluster Name**.
1. Enter a **Cluster Name.**
1. {{< step_create-cluster_member-roles >}}
1. Configure **Account Access** for the EKS cluster. Complete each drop-down and field using the information obtained in [2. Create Access Key and Secret Key](#prerequisites-in-amazon-web-services).
| Setting | Description |
| ---------- | -------------------------------------------------------------------------------------------------------------------- |
| Region | From the drop-down choose the geographical region in which to build your cluster. |
| Access Key | Enter the access key that you created in [2. Create Access Key and Secret Key](#2-create-access-key-and-secret-key). |
| Secret Key | Enter the secret key that you created in [2. Create Access Key and Secret Key](#2-create-access-key-and-secret-key). |
1. Click **Next: Select Service Role**. Then choose a [service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html).
Service Role | Description
-------------|---------------------------
Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster.
Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you've already created within AWS. For more information on creating a custom service role in AWS, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#create-service-linked-role).
1. Click **Next: Select VPC and Subnet**.
1. Choose an option for **Public IP for Worker Nodes**. Your selection for this option determines what options are available for **VPC & Subnet**.
Option | Description
-------|------------
Yes | When your cluster nodes are provisioned, they're assigned both a private and a public IP address.
No: Private IPs only | When your cluster nodes are provisioned, they're assigned only a private IP address.<br/><br/>If you choose this option, you must also choose a **VPC & Subnet** that allow your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane.
1. Now choose a **VPC & Subnet**. For more information, refer to the AWS documentation for [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html). Follow one of the sets of instructions below based on your selection from the previous step.
- [What Is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
- [VPCs and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)
{{% accordion id="yes" label="Public IP for Worker Nodes—Yes" %}}
If you choose to assign a public IP address to your cluster's worker nodes, you have the option of choosing between a VPC that's automatically generated by Rancher (i.e., **Standard: Rancher generated VPC and Subnet**), or a VPC that you've already created with AWS (i.e., **Custom: Choose from your existing VPC and Subnets**). Choose the option that best fits your use case.
1. Choose a **VPC and Subnet** option.
Option | Description
-------|------------
Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC and Subnet.
Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your nodes to use a VPC and Subnet that you've already [created in AWS](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html). If you choose this option, complete the remaining steps below.
1. If you're using **Custom: Choose from your existing VPC and Subnets**:
(If you're using **Standard**, skip to [step 11](#select-instance-options))
1. Make sure **Custom: Choose from your existing VPC and Subnets** is selected.
1. From the drop-down that displays, choose a VPC.
1. Click **Next: Select Subnets**. Then choose one of the **Subnets** that displays.
1. Click **Next: Select Security Group**.
{{% /accordion %}}
{{% accordion id="no" label="Public IP for Worker Nodes—No: Private IPs only" %}}
If you chose this option, you must also choose a **VPC & Subnet** that allow your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane. Follow the steps below.
>**Tip:** When using only private IP addresses, you can provide your nodes internet access by creating a VPC constructed with two subnets, a private set and a public set. The private set should have its route tables configured to point toward a NAT in the public set. For more information on routing traffic from private subnets, please see the [official AWS documentation](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html).
1. From the drop-down that displays, choose a VPC.
1. Click **Next: Select Subnets**. Then choose one of the **Subnets** that displays.
1. Click **Next: Select Security Group**.
{{% /accordion %}}
1. <a id="security-group"></a>Choose a **Security Group**. See the documentation below on how to create one.
Amazon Documentation:
- [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)
- [Security Groups for Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)
- [Create a Security Group](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html#getting-started-create-security-group)
1. <a id="select-instance-options"></a>Click **Select Instance Options**, and then edit the node options available. Instance type and size of your worker nodes affects how many IP addresses each worker node will have available. See this [documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI) for more information.
Option | Description
-------|------------
Instance Type | Choose the [hardware specs](https://aws.amazon.com/ec2/instance-types/) for the instance you're provisioning.
Custom AMI Override | If you want to use a custom [Amazon Machine Image](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html#creating-an-ami) (AMI), specify it here. By default, Rancher will use the [EKS-optimized AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) for the EKS version that you chose.
Desired ASG Size | The number of instances that your cluster will provision.
User Data | Custom commands can be passed to perform automated configuration tasks. **WARNING: Modifying this may cause your nodes to be unable to join the cluster.** _Note: Available as of v2.2.0_
1. Fill out the rest of the form. For help, refer to the [configuration reference.](#eks-cluster-configuration-reference)
1. Click **Create**.
{{< result_create-cluster >}}
## Troubleshooting
# EKS Cluster Configuration Reference
### Changes in Rancher v2.5
More EKS options can be configured when you create an EKS cluster in Rancher, including the following:
- Managed node groups
- Desired size, minimum size, maximum size (requires the Cluster Autoscaler to be installed)
- Control plane logging
- Secrets encryption with KMS
The following capabilities have been added for configuring EKS clusters in Rancher:
- GPU support
- Exclusively use managed nodegroups that come with the most up-to-date AMIs
- Add new nodes
- Upgrade nodes
- Add and remove node groups
- Disable and enable private access
- Add restrictions to public access
- Use your cloud credentials to create the EKS cluster instead of passing in your access key and secret key
Due to the way that the cluster data is synced with EKS, if the cluster is modified both from another source, such as the EKS console, and from Rancher within a five-minute window, some changes could be overwritten. For information about how the sync works and how to configure it, refer to [this section](#syncing).
{{% tabs %}}
{{% tab "Rancher v2.5+" %}}
### Account Access
<a id="account-access-2-5"></a>
Complete each drop-down and field using the information obtained for your [IAM policy.](#iam-policy)
| Setting | Description |
| ---------- | -------------------------------------------------------------------------------------------------------------------- |
| Region | From the drop-down choose the geographical region in which to build your cluster. |
| Cloud Credentials | Select the cloud credentials that you created for your [IAM policy.](#iam-policy) For more information on creating cloud credentials in Rancher, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/user-settings/cloud-credentials/) |
### Service Role
<a id="service-role-2-5"></a>
Choose a [service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html).
Service Role | Description
-------------|---------------------------
Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster.
Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you've already created within AWS. For more information on creating a custom service role in AWS, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#create-service-linked-role).
### Secrets Encryption
<a id="secrets-encryption-2-5"></a>
Optional: To encrypt secrets, select or enter a key created in [AWS Key Management Service (KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html)
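If you do not already have a key, one can be created with the AWS CLI as sketched below; the description and alias are placeholders, and the key ARN returned by `create-key` is what you would select or enter here.

```
# Create a KMS key for EKS secrets encryption and give it an alias
aws kms create-key --description "EKS secrets encryption"
aws kms create-alias \
  --alias-name alias/eks-secrets \
  --target-key-id <key-id-from-create-key-output>
```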
### API Server Endpoint Access
<a id="api-server-endpoint-access-2-5"></a>
Configuring Public/Private API access is an advanced use case. For details, refer to the EKS cluster endpoint access control [documentation.](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html)
### Public Access Endpoints
<a id="public-access-endpoints-2-5"></a>
Optionally limit access to the public endpoint via explicit CIDR blocks.
If you limit access to specific CIDR blocks, it is recommended that you also enable private access to avoid losing network communication to the cluster.
One of the following is required to enable private access:
- Rancher's IP must be part of an allowed CIDR block
- Private access should be enabled, and Rancher must share a subnet with the cluster and have network access to the cluster, which can be configured with a security group
For more information about public and private access to the cluster endpoint, refer to the [Amazon EKS documentation.](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html)
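For reference, the equivalent endpoint configuration applied directly with the AWS CLI (Rancher performs this for you) restricts the public endpoint to a CIDR block while keeping private access enabled; the cluster name and CIDR below are placeholders.

```
aws eks update-cluster-config \
  --name my-eks-cluster \
  --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.0/24",endpointPrivateAccess=true
```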
### Subnet
<a id="subnet-2-5"></a>
| Option | Description |
| ------- | ------------ |
| Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC with 3 public subnets. |
| Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your Control Plane and nodes to use a VPC and Subnet that you've already [created in AWS](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html). |
For more information, refer to the AWS documentation for [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html). Follow one of the sets of instructions below based on your selection from the previous step.
- [What Is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
- [VPCs and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)
### Security Group
<a id="security-group-2-5"></a>
Amazon Documentation:
- [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)
- [Security Groups for Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)
- [Create a Security Group](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html#getting-started-create-security-group)
### Logging
<a id="logging-2-5"></a>
Configure control plane logs to send to Amazon CloudWatch. You are charged the standard CloudWatch Logs data ingestion and storage costs for any logs sent to CloudWatch Logs from your clusters.
Each log type corresponds to a component of the Kubernetes control plane. To learn more about these components, see [Kubernetes Components](https://kubernetes.io/docs/concepts/overview/components/) in the Kubernetes documentation.
For more information on EKS control plane logging, refer to the official [documentation.](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)
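For reference, the same control plane log types can also be enabled directly with the AWS CLI; the cluster name is a placeholder, and you would list only the log types you want.

```
aws eks update-cluster-config \
  --name my-eks-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'
```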
### Managed Node Groups
<a id="managed-node-groups-2-5"></a>
Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
For more information about how node groups work and how they are configured, refer to the [EKS documentation.](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html)
Amazon will use the [EKS-optimized AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) for the Kubernetes version. You can configure whether the AMI has GPU enabled.
| Option | Description |
| ------- | ------------ |
| Instance Type | Choose the [hardware specs](https://aws.amazon.com/ec2/instance-types/) for the instance you're provisioning. |
| Maximum ASG Size | The maximum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. |
| Minimum ASG Size | The minimum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. |
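For comparison, creating a managed node group directly with the AWS CLI uses the same minimum/maximum/desired scaling fields that Rancher exposes; every value below (cluster name, node group name, subnets, role ARN, instance type) is a placeholder.

```
aws eks create-nodegroup \
  --cluster-name my-eks-cluster \
  --nodegroup-name workers \
  --scaling-config minSize=1,maxSize=4,desiredSize=2 \
  --instance-types t3.medium \
  --subnets subnet-0123456789abcdef0 subnet-0fedcba9876543210 \
  --node-role arn:aws:iam::111122223333:role/eks-node-role
```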
{{% /tab %}}
{{% tab "Rancher prior to v2.5" %}}
### Account Access
<a id="account-access-2-4"></a>
Complete each drop-down and field using the information obtained for your [IAM policy.](#iam-policy)
| Setting | Description |
| ---------- | -------------------------------------------------------------------------------------------------------------------- |
| Region | From the drop-down choose the geographical region in which to build your cluster. |
| Access Key | Enter the access key that you created for your [IAM policy.](#iam-policy) |
| Secret Key | Enter the secret key that you created for your [IAM policy.](#iam-policy) |
### Service Role
<a id="service-role-2-4"></a>
Choose a [service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html).
Service Role | Description
-------------|---------------------------
Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster.
Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you've already created within AWS. For more information on creating a custom service role in AWS, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#create-service-linked-role).
### Public IP for Worker Nodes
<a id="public-ip-for-worker-nodes-2-4"></a>
Your selection for this option determines what options are available for **VPC & Subnet**.
Option | Description
-------|------------
Yes | When your cluster nodes are provisioned, they're assigned both a private and a public IP address.
No: Private IPs only | When your cluster nodes are provisioned, they're assigned only a private IP address.<br/><br/>If you choose this option, you must also choose a **VPC & Subnet** that allow your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane.
### VPC & Subnet
<a id="vpc-and-subnet-2-4"></a>
The available options depend on the [public IP for worker nodes.](#public-ip-for-worker-nodes)
Option | Description
-------|------------
Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC and Subnet.
Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your nodes to use a VPC and Subnet that you've already [created in AWS](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html). If you choose this option, complete the remaining steps below.
For more information, refer to the AWS documentation for [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html). Follow one of the sets of instructions below based on your selection from the previous step.
- [What Is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
- [VPCs and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)
If you choose to assign a public IP address to your cluster's worker nodes, you have the option of choosing between a VPC that's automatically generated by Rancher (i.e., **Standard: Rancher generated VPC and Subnet**), or a VPC that you've already created with AWS (i.e., **Custom: Choose from your existing VPC and Subnets**). Choose the option that best fits your use case.
{{% accordion id="yes" label="Click to expand" %}}
If you're using **Custom: Choose from your existing VPC and Subnets**:
(If you're using **Standard**, skip to the [instance options](#select-instance-options-2-4).)
1. Make sure **Custom: Choose from your existing VPC and Subnets** is selected.
1. From the drop-down that displays, choose a VPC.
1. Click **Next: Select Subnets**. Then choose one of the **Subnets** that displays.
1. Click **Next: Select Security Group**.
{{% /accordion %}}
If your worker nodes have Private IPs only, you must also choose a **VPC & Subnet** that allow your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane.
{{% accordion id="no" label="Click to expand" %}}
Follow the steps below.
>**Tip:** When using only private IP addresses, you can provide your nodes internet access by creating a VPC constructed with two subnets, a private set and a public set. The private set should have its route tables configured to point toward a NAT in the public set. For more information on routing traffic from private subnets, please see the [official AWS documentation](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html).
1. From the drop-down that displays, choose a VPC.
1. Click **Next: Select Subnets**. Then choose one of the **Subnets** that displays.
{{% /accordion %}}
### Security Group
<a id="security-group-2-4"></a>
Amazon Documentation:
- [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)
- [Security Groups for Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)
- [Create a Security Group](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html#getting-started-create-security-group)
### Instance Options
<a id="select-instance-options-2-4"></a>
The instance type and size of your worker nodes affect how many IP addresses each worker node will have available. See this [documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI) for more information.
Option | Description
-------|------------
Instance Type | Choose the [hardware specs](https://aws.amazon.com/ec2/instance-types/) for the instance you're provisioning.
Custom AMI Override | If you want to use a custom [Amazon Machine Image](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html#creating-an-ami) (AMI), specify it here. By default, Rancher will use the [EKS-optimized AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) for the EKS version that you chose.
Desired ASG Size | The number of instances that your cluster will provision.
User Data | Custom commands can be passed to perform automated configuration tasks. **WARNING: Modifying this may cause your nodes to be unable to join the cluster.** _Note: Available as of v2.2.0_
{{% /tab %}}
{{% /tabs %}}
# Troubleshooting
If your changes were overwritten, it could be due to the way the cluster data is synced with EKS. Changes should not be made to the cluster from another source, such as the EKS console, and from Rancher within the same five-minute span. For information on how this works and how to configure the refresh interval, refer to [Syncing.](#syncing)
If an unauthorized error is returned while attempting to modify or register the cluster and the cluster was not created with the role or user that your credentials belong to, refer to [Security and Compliance.](#security-and-compliance)
For any issues or troubleshooting details for your Amazon EKS Kubernetes cluster, please see this [documentation](https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html).
## AWS Service Events
# AWS Service Events
To find information on any AWS Service events, please see [this page](https://status.aws.amazon.com/).
## Security and Compliance
# Security and Compliance
By default, only the IAM user or role that created a cluster has access to it. Attempting to access the cluster with any other user or role without additional configuration will lead to an error. In Rancher, this means using a credential that maps to a user or role that was not used to create the cluster will cause an unauthorized error. For example, an eksctl-created cluster will not register in Rancher unless the credentials used to register the cluster match the role or user used by eksctl. Additional users and roles can be authorized to access a cluster by adding them to the `aws-auth` ConfigMap in the `kube-system` namespace. For a more in-depth explanation and detailed instructions, please see this [documentation](https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/).
For more information on security and compliance with your Amazon EKS Kubernetes cluster, please see this [documentation](https://docs.aws.amazon.com/eks/latest/userguide/shared-responsibilty.html).
## Tutorial
# Tutorial
This [tutorial](https://aws.amazon.com/blogs/opensource/managing-eks-clusters-rancher/) on the AWS Open Source Blog will walk you through how to set up an EKS cluster with Rancher, deploy a publicly accessible app to test the cluster, and deploy a sample project to track real-time geospatial data using a combination of other open-source software such as Grafana and InfluxDB.
## Appendix - Minimum EKS Permissions
# Minimum EKS Permissions
Documented here is a minimum set of permissions necessary to use all functionality of the EKS driver in Rancher. Additional permissions are required for Rancher to provision the `Service Role` and `VPC` resources. Optionally these resources can be created **before** the cluster creation and will be selectable when defining the cluster configuration.
Documented here is a minimum set of permissions necessary to use all functionality of the EKS driver in Rancher.
Resource | Description
---------|------------
Service Role | The service role provides Kubernetes the permissions it requires to manage resources on your behalf. Rancher can create the service role with the following [Service Role Permissions]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/#service-role-permissions).
VPC | Provides isolated network resources utilized by EKS and worker nodes. Rancher can create the VPC resources with the following [VPC Permissions]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/#vpc-permissions).
Resource targeting uses `*` as the ARN of many of the resources created cannot be known prior to creating the EKS cluster in Rancher.
Resource targeting uses `*` as the ARN of many of the resources created cannot be known prior to creating the EKS cluster in Rancher. Some permissions (for example `ec2:CreateVpc`) are only used in situations where Rancher handles the creation of certain resources.
```json
{
@@ -171,41 +343,70 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b
"Sid": "EC2Permisssions",
"Effect": "Allow",
"Action": [
"ec2:RevokeSecurityGroupIngress",
"ec2:RevokeSecurityGroupEgress",
"ec2:DescribeVpcs",
"ec2:DescribeTags",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeRouteTables",
"ec2:DescribeKeyPairs",
"ec2:DescribeInternetGateways",
"ec2:DescribeImages",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeAccountAttributes",
"ec2:DeleteTags",
"ec2:DeleteSecurityGroup",
"ec2:DeleteKeyPair",
"ec2:CreateTags",
"ec2:CreateSecurityGroup",
"ec2:CreateKeyPair",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:AuthorizeSecurityGroupEgress"
"ec2:DeleteSubnet",
"ec2:CreateKeyPair",
"ec2:AttachInternetGateway",
"ec2:ReplaceRoute",
"ec2:DeleteRouteTable",
"ec2:AssociateRouteTable",
"ec2:DescribeInternetGateways",
"ec2:CreateRoute",
"ec2:CreateInternetGateway",
"ec2:RevokeSecurityGroupEgress",
"ec2:DescribeAccountAttributes",
"ec2:DeleteInternetGateway",
"ec2:DescribeKeyPairs",
"ec2:CreateTags",
"ec2:CreateRouteTable",
"ec2:DescribeRouteTables",
"ec2:DetachInternetGateway",
"ec2:DisassociateRouteTable",
"ec2:RevokeSecurityGroupIngress",
"ec2:DeleteVpc",
"ec2:CreateSubnet",
"ec2:DescribeSubnets",
"ec2:DeleteKeyPair",
"ec2:DeleteTags",
"ec2:CreateVpc",
"ec2:DescribeAvailabilityZones",
"ec2:CreateSecurityGroup",
"ec2:ModifyVpcAttribute",
"ec2:AuthorizeSecurityGroupEgress",
"ec2:DescribeTags",
"ec2:DeleteRoute",
"ec2:DescribeSecurityGroups",
"ec2:DescribeImages",
"ec2:DescribeVpcs",
"ec2:DeleteSecurityGroup"
],
"Resource": "*"
},
{
"Sid": "CloudFormationPermisssions",
"Sid": "EKSPermissions",
"Effect": "Allow",
"Action": [
"cloudformation:ListStacks",
"cloudformation:ListStackResources",
"cloudformation:DescribeStacks",
"cloudformation:DescribeStackResources",
"cloudformation:DescribeStackResource",
"cloudformation:DeleteStack",
"cloudformation:CreateStackSet",
"cloudformation:CreateStack"
"eks:DeleteFargateProfile",
"eks:DescribeFargateProfile",
"eks:ListTagsForResource",
"eks:UpdateClusterConfig",
"eks:DescribeNodegroup",
"eks:ListNodegroups",
"eks:DeleteCluster",
"eks:CreateFargateProfile",
"eks:DeleteNodegroup",
"eks:UpdateNodegroupConfig",
"eks:DescribeCluster",
"eks:ListClusters",
"eks:UpdateClusterVersion",
"eks:UpdateNodegroupVersion",
"eks:ListUpdates",
"eks:CreateCluster",
"eks:UntagResource",
"eks:CreateNodegroup",
"eks:ListFargateProfiles",
"eks:DescribeUpdate",
"eks:TagResource"
],
"Resource": "*"
},
@@ -213,52 +414,52 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b
"Sid": "IAMPermissions",
"Effect": "Allow",
"Action": [
"iam:PassRole",
"iam:ListRoles",
"iam:ListRoleTags",
"iam:ListInstanceProfilesForRole",
"iam:ListInstanceProfiles",
"iam:ListAttachedRolePolicies",
"iam:GetRole",
"iam:GetInstanceProfile",
"iam:DetachRolePolicy",
"iam:DeleteRole",
"iam:RemoveRoleFromInstanceProfile",
"iam:CreateRole",
"iam:AttachRolePolicy"
"iam:AttachRolePolicy",
"iam:AddRoleToInstanceProfile",
"iam:DetachRolePolicy",
"iam:GetRole",
"iam:DeleteRole",
"iam:CreateInstanceProfile",
"iam:ListInstanceProfilesForRole",
"iam:PassRole",
"iam:GetInstanceProfile",
"iam:ListRoles",
"iam:ListInstanceProfiles",
"iam:DeleteInstanceProfile"
],
"Resource": "*"
},
{
"Sid": "KMSPermisssions",
"Sid": "CloudFormationPermisssions",
"Effect": "Allow",
"Action": "kms:ListKeys",
"Action": [
"cloudformation:DescribeStackResource",
"cloudformation:ListStackResources",
"cloudformation:DescribeStackResources",
"cloudformation:DescribeStacks",
"cloudformation:ListStacks",
"cloudformation:CreateStack"
],
"Resource": "*"
},
{
"Sid": "EKSPermisssions",
"Sid": "AutoScalingPermissions",
"Effect": "Allow",
"Action": [
"eks:UpdateNodegroupVersion",
"eks:UpdateNodegroupConfig",
"eks:UpdateClusterVersion",
"eks:UpdateClusterConfig",
"eks:UntagResource",
"eks:TagResource",
"eks:ListUpdates",
"eks:ListTagsForResource",
"eks:ListNodegroups",
"eks:ListFargateProfiles",
"eks:ListClusters",
"eks:DescribeUpdate",
"eks:DescribeNodegroup",
"eks:DescribeFargateProfile",
"eks:DescribeCluster",
"eks:DeleteNodegroup",
"eks:DeleteFargateProfile",
"eks:DeleteCluster",
"eks:CreateNodegroup",
"eks:CreateFargateProfile",
"eks:CreateCluster"
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:UpdateAutoScalingGroup",
"autoscaling:TerminateInstanceInAutoScalingGroup",
"autoscaling:CreateOrUpdateTags",
"autoscaling:DeleteAutoScalingGroup",
"autoscaling:CreateAutoScalingGroup",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeScalingActivities",
"autoscaling:CreateLaunchConfiguration",
"autoscaling:DeleteLaunchConfiguration"
],
"Resource": "*"
}
@@ -266,97 +467,29 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b
}
```
### Service Role Permissions
# Syncing
Rancher will create a service role with the following trust policy:
Syncing is the feature that keeps Rancher's values for its EKS clusters up to date with the corresponding cluster objects in the EKS console. This means Rancher does not have to be the sole owner of an EKS cluster's state. Its largest limitation is that if Rancher and another source process an update at the same time, or within five minutes of one finishing, the state from one source may completely overwrite the other.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
```
### How it works
This role will also have two role policy attachments with the following policy ARNs:
To understand how syncing works, you must understand two fields on the Rancher Cluster object:
```
arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
arn:aws:iam::aws:policy/AmazonEKSServicePolicy
```
1. EKSConfig, which is located on the Spec of the Cluster.
2. UpstreamSpec, which is located on the EKSStatus field on the Status of the Cluster.
Permissions required for Rancher to create a service role on the user's behalf during the EKS cluster creation process.
Both are defined by the EKSClusterConfigSpec struct found in the eks-operator project: https://github.com/rancher/eks-operator/blob/master/pkg/apis/eks.cattle.io/v1/types.go
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "IAMPermisssions",
"Effect": "Allow",
"Action": [
"iam:AddRoleToInstanceProfile",
"iam:AttachRolePolicy",
"iam:CreateInstanceProfile",
"iam:CreateRole",
"iam:CreateServiceLinkedRole",
"iam:DeleteInstanceProfile",
"iam:DeleteRole",
"iam:DetachRolePolicy",
"iam:GetInstanceProfile",
"iam:GetRole",
"iam:ListAttachedRolePolicies",
"iam:ListInstanceProfiles",
"iam:ListInstanceProfilesForRole",
"iam:ListRoles",
"iam:ListRoleTags",
"iam:PassRole",
"iam:RemoveRoleFromInstanceProfile"
],
"Resource": "*"
}
]
}
```
All fields with the exception of DisplayName, AmazonCredentialSecret, Region, and Imported are nillable on the EKSClusterConfigSpec.
### VPC Permissions
The EKSConfig represents the desired state for its non-nil values. Fields that are non-nil in the EKSConfig can be thought of as “managed”. When a cluster is created in Rancher, all fields are non-nil and therefore “managed”. When a pre-existing cluster is registered in Rancher, all nillable fields are nil and are not “managed”. Those fields become managed once their value has been changed by Rancher.
Permissions required for Rancher to create a VPC and associated resources.
UpstreamSpec represents the cluster as it is in EKS and is refreshed on a five-minute interval. After the UpstreamSpec has been refreshed, Rancher checks whether the EKS cluster has an update in progress. If it is updating, nothing further is done. If it is not currently updating, any “managed” fields on the EKSConfig are overwritten with their corresponding values from the recently refreshed UpstreamSpec.
```json
{
"Sid": "VPCPermissions",
"Effect": "Allow",
"Action": [
"ec2:ReplaceRoute",
"ec2:ModifyVpcAttribute",
"ec2:ModifySubnetAttribute",
"ec2:DisassociateRouteTable",
"ec2:DetachInternetGateway",
"ec2:DescribeVpcs",
"ec2:DeleteVpc",
"ec2:DeleteTags",
"ec2:DeleteSubnet",
"ec2:DeleteRouteTable",
"ec2:DeleteRoute",
"ec2:DeleteInternetGateway",
"ec2:CreateVpc",
"ec2:CreateSubnet",
"ec2:CreateSecurityGroup",
"ec2:CreateRouteTable",
"ec2:CreateRoute",
"ec2:CreateInternetGateway",
"ec2:AttachInternetGateway",
"ec2:AssociateRouteTable"
],
"Resource": "*"
}
```
The effective desired state can be thought of as the UpstreamSpec + all non-nil fields in the EKSConfig. This is what is displayed in the UI.
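As a rough sketch, consider a hypothetical excerpt of a Rancher-provisioned cluster's EKSConfig. The field names below are illustrative only; the authoritative list is in the types.go file linked above.
```yaml
# Hypothetical EKSConfig excerpt: non-nil fields are "managed" by Rancher,
# while omitted (nil) fields are not managed and take whatever value the
# UpstreamSpec reports from EKS.
eksConfig:
  displayName: my-eks-cluster                            # always set
  region: us-west-2                                      # always set
  amazonCredentialSecret: cattle-global-data:cc-example  # always set
  privateAccess: true        # non-nil: managed, Rancher enforces this value
  # publicAccess omitted (nil): not managed, the upstream value is accepted
```
In this sketch, the effective desired state is the UpstreamSpec with privateAccess forced to true, and that combined view is what the UI displays.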
If Rancher and another source attempt to update an EKS cluster at the same time, or within the five-minute refresh window after an update finishes, it is likely that “managed” fields will be caught in a race condition. For example, suppose PrivateAccess is a managed field that is currently false. If PrivateAccess is enabled in the EKS console in an update that finishes at 11:01, and tags are then updated from Rancher before 11:05, the PrivateAccess value will likely be overwritten. This would also occur if the tags were updated while the cluster was still processing the update. If the cluster was registered and the PrivateAccess field was nil, this issue would not occur in the aforementioned case.
### Configuring the Refresh Interval
It is possible to change the refresh interval through the setting `eks-refresh-cron`. This setting accepts values in Cron format. The default is `*/5 * * * *`. The shorter the refresh window, the less likely any race conditions will occur, but shorter windows also increase the likelihood of encountering request limits that may be in place for AWS APIs.
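For example, assuming kubectl access to the Rancher management (local) cluster, where Rancher stores settings as `settings.management.cattle.io` resources, a two-minute interval could be set with something like the command below; the setting can also be edited through the Rancher UI or API.
```
kubectl patch settings.management.cattle.io eks-refresh-cron \
  --type merge -p '{"value":"*/2 * * * *"}'
```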
@@ -1,9 +1,9 @@
---
title: Importing Existing Clusters into Rancher
title: Importing Existing Clusters
description: Learn how you can create a cluster in Rancher by importing an existing Kubernetes cluster. Then, you can manage it using Rancher
metaTitle: 'Kubernetes Cluster Management'
metaDescription: 'Learn how you can import an existing Kubernetes cluster and then manage it using Rancher'
weight: 2300
weight: 5
aliases:
- /rancher/v2.x/en/tasks/clusters/import-cluster/
---
@@ -16,6 +16,9 @@ For all imported Kubernetes clusters except for K3s clusters, the configuration
Rancher v2.4 added the capability to import a K3s cluster into Rancher, as well as the ability to upgrade Kubernetes by editing the cluster in the Rancher UI.
> Rancher v2.5 added the ability to [register clusters.](#changes-in-rancher-v2-5) This page will be updated to reflect the new functionality.
- [Changes in Rancher v2.5](#changes-in-rancher-v2-5)
- [Features](#features)
- [Prerequisites](#prerequisites)
- [Importing a cluster](#importing-a-cluster)
@@ -25,6 +28,14 @@ Rancher v2.4 added the capability to import a K3s cluster into Rancher, as well
- [Debug Logging and Troubleshooting for Imported K3s clusters](#debug-logging-and-troubleshooting-for-imported-k3s-clusters)
- [Annotating imported clusters](#annotating-imported-clusters)
# Changes in Rancher v2.5
In Rancher v2.5, the cluster registration feature replaced the feature to import clusters. Rancher has more capabilities to manage registered clusters compared to imported clusters, and registering a cluster allows Rancher to treat it as though it were created in Rancher.
Amazon EKS clusters can now be registered in Rancher. For the most part, registered EKS clusters and EKS clusters created in Rancher are treated the same way in the Rancher UI, except for deletion.
When you delete an EKS cluster that was created in Rancher, the cluster is destroyed. When you delete an EKS cluster that was registered in Rancher, it is disconnected from the Rancher server, but it still exists, and you can still access it in the same way you did before it was registered in Rancher.
# Features
After importing a cluster, the cluster owner can:
@@ -28,7 +28,7 @@ If you plan to use ARM64, see [Running on ARM64 (Experimental).]({{<baseurl>}}/r
For information on how to install Docker, refer to the official [Docker documentation.](https://docs.docker.com/)
Some distributions of Linux derived from RHEL, including Oracle Linux, may have default firewall rules that block communication with Helm. This [how-to guide]({{<baseurl>}}/rancher/v2.x/en/installation/options/firewall) shows how to check the default firewall rules and how to open the ports with `firewalld` if necessary.
Some distributions of Linux derived from RHEL, including Oracle Linux, may have default firewall rules that block communication with Helm. We recommend disabling firewalld. For Kubernetes 1.19, firewalld must be turned off.
SUSE Linux may have a firewall that blocks all ports by default. In that situation, follow [these steps](#opening-suse-linux-ports) to open the ports needed for adding a host to a custom cluster.
@@ -1,6 +1,6 @@
---
title: Checklist for Production-Ready Clusters
weight: 2005
weight: 2
---
In this section, we recommend best practices for creating the production-ready Kubernetes clusters that will run your apps and services.
@@ -1,6 +1,6 @@
---
title: Launching Kubernetes with Rancher
weight: 2200
weight: 4
---
You can have Rancher launch a Kubernetes cluster using any nodes you want. When Rancher deploys Kubernetes onto these nodes, it uses [Rancher Kubernetes Engine]({{<baseurl>}}/rke/latest/en/) (RKE), which is Rancher's own lightweight Kubernetes installer. It can launch Kubernetes on any computers, including:
@@ -1,6 +1,6 @@
---
title: Contributing to Rancher
weight: 9000
weight: 27
aliases:
- /rancher/v2.x/en/faq/contributing/
---
@@ -0,0 +1,16 @@
---
title: Deploying Applications across Clusters
weight: 13
---
Rancher v2.5 introduced Fleet, a new way to deploy applications across clusters.
### Fleet
_Available in v2.5_
Fleet is GitOps at scale. For more information, refer to the [Fleet section.](./fleet)
### Legacy UI Documentation for Multi-cluster Apps
In Rancher prior to v2.5, the multi-cluster apps feature was used to deploy applications across clusters. Refer to the documentation [here.](./multi-cluster-apps)
@@ -0,0 +1,28 @@
---
title: Fleet - GitOps at Scale
shortTitle: Fleet
weight: 1
---
_Available as of Rancher v2.5_
Fleet is GitOps at scale. Fleet is designed to manage up to a million clusters. It's also lightweight enough that it works great for a [single cluster](https://fleet.rancher.io/single-cluster-install/) too, but it really shines when you get to a [large scale.](https://fleet.rancher.io/multi-cluster-install/) By large scale, we mean either a lot of clusters, a lot of deployments, or a lot of teams in a single organization.
Fleet is a separate project from Rancher, and can be installed on any Kubernetes cluster with Helm.
![Architecture]({{<baseurl>}}/img/rancher/fleet-architecture.png)
Fleet can manage deployments from Git of raw Kubernetes YAML, Helm charts, or Kustomize, or any combination of the three. Regardless of the source, all resources are dynamically turned into Helm charts, and Helm is used as the engine to deploy everything in the cluster. This gives a high degree of control, consistency, and auditability. Fleet focuses not only on the ability to scale, but also on giving you a high degree of control and visibility into exactly what is installed on the cluster.
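As a minimal sketch of how a deployment is driven from Git, a `GitRepo` resource tells Fleet which repository, branch, and paths to watch. The example below is adapted from the upstream Fleet documentation; the repository, namespace, and path are illustrative.
```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample
  namespace: fleet-local      # namespace that targets the local cluster
spec:
  # Git repository containing raw YAML, Helm charts, or Kustomize content
  repo: https://github.com/rancher/fleet-examples
  branch: master
  # Paths within the repository that Fleet should deploy
  paths:
  - simple
```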
### Accessing Fleet in the Rancher UI
Fleet comes preinstalled in Rancher v2.5. To access it, go to the **Cluster Explorer** in the Rancher UI. In the top left dropdown menu, click **Cluster Explorer > Fleet.** On this page, you can edit Kubernetes resources and cluster groups managed by Fleet.
### GitHub Repository
The Fleet Helm charts are available [here.](https://github.com/rancher/fleet/releases/latest)
### Documentation
The Fleet documentation is at [https://fleet.rancher.io/.](https://fleet.rancher.io/)
@@ -1,7 +1,9 @@
---
title: Multi-Cluster Apps
weight: 600
title: Legacy Multi-Cluster App Documentation
shortTitle: Legacy
weight: 2
---
_Available as of v2.2.0_
Typically, most applications are deployed on a single Kubernetes cluster, but at times you may want to deploy multiple copies of the same application across different clusters and/or projects. In Rancher, a _multi-cluster application_ is an application deployed across multiple clusters using a Helm chart. Deploying the same application across multiple clusters avoids repeating the same action on each cluster, which could introduce user error during application configuration. With multi-cluster applications, you can use the same configuration across all projects and clusters, or change the configuration based on your target project. Because a multi-cluster application is treated as a single application, it is easy to manage and maintain.
@@ -1,6 +1,6 @@
---
title: FAQ
weight: 8000
weight: 25
aliases:
- /rancher/v2.x/en/about/
---
@@ -0,0 +1,12 @@
---
title: Helm Charts in Rancher
weight: 12
---
### Apps and Marketplace
In Rancher v2.5, the [apps and marketplace feature](./apps-marketplace) is used to manage Helm charts, replacing the catalog system.
### Catalogs
In Rancher prior to v2.5, the [catalog system](./legacy-catalogs) was used to manage Helm charts.
@@ -0,0 +1,44 @@
---
title: Apps and Marketplace
weight: 1
---
_Available as of v2.5_
In this section, you'll learn how to manage Helm chart repositories and applications in Rancher.
In the cluster manager, Rancher uses a catalog system to import bundles of charts and then uses those charts to deploy either custom Helm applications or Rancher tools such as Monitoring or Istio. In the Cluster Explorer, Rancher uses a similar but simplified version of the same system. Repositories can be added in the same way that catalogs were, but they are specific to the current cluster. Rancher tools come as pre-loaded repositories, which deploy as standalone Helm charts.
### Charts
From the top-left menu select _"Apps & Marketplace"_ and you will be taken to the Charts page.
The charts page contains all Rancher, Partner, and Custom Charts.
* Rancher tools such as Logging or Monitoring are included under the Rancher label
* Partner charts reside under the Partners label
* Custom charts will show up under the name of the repository
All three types are deployed and managed in the same way.
### Repositories
From the left sidebar select _"Repositories"_.
These items represent Helm repositories, which can be either traditional Helm endpoints that serve an index.yaml, or Git repositories, which are cloned and can point to a specific branch. To use custom charts, add your repository here, and its charts will become available in the Charts tab under the name of the repository.
### Helm Compatibility
The Cluster Explorer only supports Helm 3 compatible charts.
### Deployment and Upgrades
From the _"Charts"_ tab select a Chart to install. Rancher and Partner charts may have extra configurations available through custom pages or questions.yaml files, but all chart installations can modify the values.yaml and other basic settings. Once you click install, a helm operation job is deployed, and the console for the job is displayed.
To view all recent changes, go to the _"Recent Operations"_ tab. From there you can view the call that was made, conditions, events, and logs.
After installing a chart, you can find it in the _"Installed Apps"_ tab. In this section you can upgrade or delete the installation, and see further details. When choosing to upgrade, the form and values presented will be the same as installation.
Most Rancher tools have additional pages located in the toolbar below the _"Apps & Marketplace"_ section to help manage and use the features. These pages include links to dashboards, forms to easily add Custom Resources, and additional information.
@@ -1,11 +1,13 @@
---
title: Catalogs, Helm Charts and Apps
title: Legacy Catalog Documentation
shortTitle: Legacy
description: Rancher enables the use of catalogs to repeatedly deploy applications easily. Catalogs are GitHub or Helm Chart repositories filled with deployment-ready apps.
weight: 4000
weight: 1
aliases:
- /rancher/v2.x/en/concepts/global-configuration/catalog/
- /rancher/v2.x/en/concepts/catalogs/
- /rancher/v2.x/en/tasks/global-configuration/catalog/
- /rancher/v2.x/en/catalog
---
Rancher provides the ability to use a catalog of Helm charts that make it easy to repeatedly deploy applications.
@@ -4,6 +4,7 @@ weight: 200
aliases:
- /rancher/v2.x/en/tasks/global-configuration/catalog/adding-custom-catalogs/
- /rancher/v2.x/en/catalog/custom/adding
- /rancher/v2.x/en/catalog/adding-catalogs
---
Custom catalogs can be added into Rancher at a global scope, cluster scope, or project scope.
@@ -3,6 +3,7 @@ title: Enabling and Disabling Built-in Global Catalogs
weight: 100
aliases:
- /rancher/v2.x/en/tasks/global-configuration/catalog/enabling-default-catalogs/
- /rancher/v2.x/en/catalog/built-in
---
There are default global catalogs packaged as part of Rancher.
@@ -3,6 +3,7 @@ title: Custom Catalog Configuration Reference
weight: 300
aliases:
- /rancher/v2.x/en/catalog/catalog-config
---
Any user can create custom catalogs to add into Rancher. Besides the content of the catalog, users must ensure their catalogs are able to be added into Rancher.
@@ -4,6 +4,7 @@ weight: 400
aliases:
- /rancher/v2.x/en/tasks/global-configuration/catalog/customizing-charts/
- /rancher/v2.x/en/catalog/custom/creating
- /rancher/v2.x/en/catalog/creating-apps
---
Rancher's catalog service requires any custom catalogs to be structured in a specific format for the catalog service to be able to leverage it in Rancher.
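For orientation, a catalog repository typically contains a `charts` directory with one folder per application and one subfolder per chart version, along the lines of the sketch below; the application name is a placeholder, and the exact requirements are described in the rest of this section.
```
charts/
  my-app/            # one directory per application (placeholder name)
    0.1.0/           # one directory per packaged chart version
      Chart.yaml
      values.yaml
      templates/
    0.2.0/
      ...
```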
@@ -1,6 +1,8 @@
---
title: Global DNS
weight: 5010
aliases:
- /rancher/v2.x/en/catalog/globaldns
---
_Available as of v2.2.0_
@@ -1,6 +1,8 @@
---
title: Managing Catalog Apps
weight: 500
aliases:
- /rancher/v2.x/en/catalog/managing-apps
---
After deploying an application, one of the benefits of using an application versus individual workloads/resources is the ease of being able to manage many workloads/resources applications. Apps can be cloned, upgraded or rolled back.
@@ -0,0 +1,9 @@
---
title: Multi-Cluster Apps
weight: 600
aliases:
- /rancher/v2.x/en/catalog/multi-cluster-apps
---
_Available as of v2.2.0_
The documentation about multi-cluster apps has moved [here.]({{<baseurl>}}/rancher/v2.x/en/deploy-across-clusters/multi-cluster-apps)
@@ -1,6 +1,8 @@
---
title: "Tutorial: Example Custom Chart Creation"
weight: 800
aliases:
- /rancher/v2.x/en/catalog/tutorial
---
In this tutorial, you'll learn how to create a Helm chart and deploy it to a repository. The repository can then be used as a source for a custom catalog in Rancher.
@@ -1,7 +1,7 @@
---
title: Installing Rancher
title: Installing/Upgrading Rancher
description: Learn how to install Rancher in development and production environments. Read about single node and high availability installation
weight: 50
weight: 3
aliases:
- /rancher/v2.x/en/installation/how-ha-works/
---
@@ -16,14 +16,21 @@ In this section,
- **RKE (Rancher Kubernetes Engine)** is a certified Kubernetes distribution and CLI/library which creates and manages a Kubernetes cluster.
- **K3s (Lightweight Kubernetes)** is also a fully compliant Kubernetes distribution. It is newer than RKE, easier to use, and more lightweight, with a binary size of less than 100 MB. As of Rancher v2.4, Rancher can be installed on a K3s cluster.
### Changes to Installation in Rancher v2.5
In Rancher v2.5, the Rancher management server can be installed on any Kubernetes cluster, including hosted clusters, such as Amazon EKS clusters.
For Docker installations, a local Kubernetes cluster is installed in the single Docker container, and Rancher is installed on the local cluster.
The `restrictedAdmin` Helm chart option was added. When this option is set to true, the initial Rancher user has restricted access to the local Kubernetes cluster to prevent privilege escalation. For more information, see the section about the [restricted-admin role.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#restricted-admin)
### Overview of Installation Options
Rancher can be installed on these main architectures:
- **High-availability Kubernetes Install:** We recommend using [Helm,]({{<baseurl>}}/rancher/v2.x/en/overview/concepts/#about-helm) a Kubernetes package manager, to install Rancher on multiple nodes on a dedicated Kubernetes cluster. For RKE clusters, three nodes are required to achieve a high-availability cluster. For K3s clusters, only two nodes are required.
- **Single-node Kubernetes Install:** Another option is to install Rancher with Helm on a Kubernetes cluster, but to only use a single node in the cluster. In this case, the Rancher server doesn't have high availability, which is important for running Rancher in production. However, this option is useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path. In the future, you can add nodes to the cluster to get a high-availability Rancher server.
- **Docker Install:** For test and demonstration purposes, Rancher can be installed with Docker on a single node. This installation works out-of-the-box, but there is no migration path from a Docker installation to a high-availability installation on a Kubernetes cluster. Therefore, you may want to use a Kubernetes installation from the start.
- **Docker Install:** For test and demonstration purposes, Rancher can be installed with Docker on a single node. This installation works out-of-the-box, but there is no migration path from a Docker installation to a high-availability installation. Therefore, you may want to use a Kubernetes installation from the start.
There are also separate instructions for installing Rancher in an air gap environment or behind an HTTP proxy:
@@ -35,9 +42,15 @@ There are also separate instructions for installing Rancher in an air gap enviro
We recommend installing Rancher on a Kubernetes cluster, because in a multi-node cluster, the Rancher management server becomes highly available. This high-availability configuration helps maintain consistent access to the downstream Kubernetes clusters that Rancher will manage.
For that reason, we recommend that for a production-grade architecture, you should set up a high-availability Kubernetes cluster using either RKE or K3s, then install Rancher on it. After Rancher is installed, you can use Rancher to deploy and manage Kubernetes clusters.
For that reason, we recommend that for a production-grade architecture, you should set up a high-availability Kubernetes cluster, then install Rancher on it. After Rancher is installed, you can use Rancher to deploy and manage Kubernetes clusters.
For testing or demonstration purposes, you can install Rancher in a single Docker container. In this Docker install, you can use Rancher to set up Kubernetes clusters out-of-the-box.
> The type of cluster that Rancher needs to be installed on depends on the Rancher version.
>
> For Rancher v2.5, any Kubernetes cluster can be used.
> For Rancher v2.4.x, either an RKE Kubernetes cluster or K3s Kubernetes cluster can be used.
> For Rancher prior to v2.4, an RKE cluster must be used.
For testing or demonstration purposes, you can install Rancher in a single Docker container. In this Docker install, you can use Rancher to set up Kubernetes clusters out-of-the-box. The Docker install allows you to explore the Rancher server functionality, but it is intended to be used for development and testing purposes only.
Our [instructions for installing Rancher on Kubernetes]({{<baseurl>}}/rancher/v2.x/en/installation/k8s-install) describe how to first use K3s or RKE to create and manage a Kubernetes cluster, then install Rancher onto that cluster.
@@ -1,11 +1,18 @@
---
title: 3. Install Rancher on the Kubernetes Cluster
description: Rancher installation is managed using the Helm Kubernetes package manager. Use Helm to install the prerequisites and charts to install Rancher
weight: 200
aliases:
- /rancher/v2.x/en/installation/ha/helm-rancher
title: Install Rancher on a Kubernetes Cluster
description: Learn how to install Rancher in development and production environments. Read about single node and high availability installation
weight: 3
---
> **Prerequisite:**
> Set up the Rancher server's local Kubernetes cluster.
>
> - As of Rancher v2.5, Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS.
> - In Rancher v2.4.x, Rancher needs to be installed on a K3s Kubernetes cluster or an RKE Kubernetes cluster.
> - In Rancher prior to v2.4, Rancher needs to be installed on an RKE Kubernetes cluster.
# Install the Rancher Helm Chart
Rancher is installed using the Helm package manager for Kubernetes. Helm charts provide templating syntax for Kubernetes YAML manifest documents.
With Helm, we can create configurable deployments instead of just using static files. For more information about creating your own catalog of deployments, check out the docs at https://helm.sh/.
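For example, a typical (non-air-gap) install adds one of the Rancher chart repositories and installs the chart into the `cattle-system` namespace. The hostname below is a placeholder, and the choice of repository (`latest` vs. `stable`) and the remaining chart options are covered elsewhere in the installation documentation.
```
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org
```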
@@ -263,3 +270,8 @@ That's it. You should have a functional Rancher server.
In a web browser, go to the DNS name that forwards traffic to your load balancer. Then you should be greeted by the colorful login page.
Doesn't work? Take a look at the [Troubleshooting]({{<baseurl>}}/rancher/v2.x/en/installation/options/troubleshooting/) page.
### Optional Next Steps
Enable the Enterprise Cluster Manager.
@@ -1,52 +0,0 @@
---
title: Installing Rancher on a Kubernetes Cluster
weight: 3
description: For production environments, install Rancher in a high-availability configuration. Read the guide for setting up a 3-node cluster and still install Rancher using a Helm chart.
aliases:
- /rancher/v2.x/en/installation/ha/
---
For production environments, we recommend installing Rancher in a high-availability configuration so that your user base can always access Rancher Server. When installed in a Kubernetes cluster, Rancher will integrate with the cluster's etcd database and take advantage of Kubernetes scheduling for high-availability.
This section describes how to create and manage a Kubernetes cluster, then install Rancher onto that cluster. For this type of architecture, you will need to deploy nodes - typically virtual machines - in the infrastructure provider of your choice. You will also need to configure a load balancer to direct front-end traffic to the three VMs. When the VMs are running and fulfill the [node requirements,]({{<baseurl>}}/rancher/v2.x/en/installation/requirements) you can use RKE or K3s to deploy Kubernetes onto them, then use the Helm package manager to deploy Rancher onto Kubernetes.
### Optional: Installing Rancher on a Single-node Kubernetes Cluster
If you only have one node, but you want to use the Rancher server in production in the future, it is better to install Rancher on a single-node Kubernetes cluster than to install it with Docker.
One option is to install Rancher with Helm on a Kubernetes cluster, but to only use a single node in the cluster. In this case, the Rancher server does not have high availability, which is important for running Rancher in production. However, this option is useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path. In the future, you can add nodes to the cluster to get a high-availability Rancher server.
To set up a single-node RKE cluster, configure only one node in the `cluster.yml` . The single node should have all three roles: `etcd`, `controlplane`, and `worker`.
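A minimal `cluster.yml` for that single-node case might look like the following; the address, user, and key path are placeholders.
```yaml
nodes:
  - address: 192.0.2.10          # placeholder IP or DNS name of the node
    user: rancher                # a user that can run docker commands
    role: ["controlplane", "etcd", "worker"]
    ssh_key_path: ~/.ssh/id_rsa
```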
To set up a single-node K3s cluster, run the Rancher server installation command on just one node instead of two nodes.
In both single-node Kubernetes setups, Rancher can be installed with Helm on the Kubernetes cluster in the same way that it would be installed on any other cluster.
### Important Notes on Architecture
The Rancher management server can only be run on a Kubernetes cluster in an infrastructure provider where Kubernetes is installed using K3s or RKE. Use of Rancher on hosted Kubernetes providers, such as EKS, is not supported.
For the best performance and security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads.
For information on how Rancher works, regardless of the installation method, refer to the [architecture section.]({{<baseurl>}}/rancher/v2.x/en/overview/architecture)
## Installation Outline
- [Set up Infrastructure]({{<baseurl>}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/)
- [Set up a Kubernetes Cluster]({{<baseurl>}}/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/)
- [Install Rancher]({{<baseurl>}}/rancher/v2.x/en/installation/k8s-install/helm-rancher/)
## Additional Install Options
- [Migrating from a high-availability Kubernetes Install with an RKE Add-on]({{<baseurl>}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/)
- [Installing Rancher with Helm 2:]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2) This section provides a copy of the older high-availability Rancher installation instructions that used Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible.
## Previous Methods
[RKE add-on install]({{<baseurl>}}/rancher/v2.x/en/installation/options/rke-add-on/)
> **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
> Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{<baseurl>}}/rancher/v2.x/en/installation/k8s-install/#installation-outline).
>
> If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{<baseurl>}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
@@ -1,43 +0,0 @@
---
title: Installing Rancher in an Air Gapped Environment with Helm 2
weight: 2
aliases:
- /rancher/v2.x/en/installation/air-gap-installation/
- /rancher/v2.x/en/installation/air-gap-high-availability/
- /rancher/v2.x/en/installation/air-gap-single-node/
---
> After Helm 3 was released, the Rancher installation instructions were updated to use Helm 3.
>
> If you are using Helm 2, we recommend [migrating to Helm 3](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) because it is simpler to use and more secure than Helm 2.
>
> This section provides a copy of the older instructions for installing Rancher on a Kubernetes cluster using Helm 2 in an air gap environment, and it is intended to be used if upgrading to Helm 3 is not feasible.
This section is about installations of Rancher server in an air gapped environment. An air gapped environment could be where Rancher server will be installed offline, behind a firewall, or behind a proxy.
Throughout the installation instructions, there will be _tabs_ for either a high availability Kubernetes installation or a single-node Docker installation.
### Air Gapped Kubernetes Installations
This section covers how to install Rancher on a Kubernetes cluster in an air gapped environment.
A Kubernetes installation is comprised of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails.
### Air Gapped Docker Installations
These instructions also cover how to install Rancher on a single node in an air gapped environment.
The Docker installation is for Rancher users who want to test out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server.
> **Important:** If you install Rancher following the Docker installation guide, there is no upgrade path to transition your Docker Installation to a Kubernetes Installation.
Instead of running the Docker installation, you have the option to follow the Kubernetes Install guide, but only use one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it a Kubernetes Installation.
# Installation Outline
- [1. Prepare your Node(s)]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/)
- [2. Collect and Publish Images to your Private Registry]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/)
- [3. Launch a Kubernetes Cluster with RKE]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/)
- [4. Install Rancher]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/)
### [Next: Prepare your Node(s)]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/)
@@ -1,333 +0,0 @@
---
title: 4. Install Rancher
weight: 400
aliases:
- /rancher/v2.x/en/installation/air-gap-installation/install-rancher/
- /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-system-charts/
- /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-for-private-reg/
- /rancher/v2.x/en/installation/air-gap-single-node/install-rancher
- /rancher/v2.x/en/installation/air-gap/install-rancher
---
This section is about how to deploy Rancher for your air gapped environment. An air gapped environment could be where Rancher server will be installed offline, behind a firewall, or behind a proxy. There are _tabs_ for either a high availability (recommended) or a Docker installation.
{{% tabs %}}
{{% tab "Kubernetes Install (Recommended)" %}}
Rancher recommends installing Rancher on a Kubernetes cluster. A highly available Kubernetes Installation is comprised of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails.
This section describes installing Rancher in five parts:
- [A. Add the Helm Chart Repository](#a-add-the-helm-chart-repository)
- [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration)
- [C. Render the Rancher Helm Template](#c-render-the-rancher-helm-template)
- [D. Install Rancher](#d-install-rancher)
- [E. For Rancher versions prior to v2.3.0, Configure System Charts](#e-for-rancher-versions-prior-to-v2-3-0-configure-system-charts)
### A. Add the Helm Chart Repository
From a system that has access to the internet, fetch the latest Helm chart and copy the resulting manifests to a system that has access to the Rancher server cluster.
1. If you haven't already, initialize `helm` locally on a workstation that has internet access. Note: Refer to the [Helm version requirements]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm-version) to choose a version of Helm to install Rancher.
```plain
helm init -c
```
2. Use `helm repo add` command to add the Helm chart repository that contains charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Version of Rancher]({{<baseurl>}}/rancher/v2.x/en/installation/options/server-tags/#helm-chart-repositories).
{{< release-channel >}}
```
helm repo add rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
```
3. Fetch the latest Rancher chart. This will pull down the chart and save it in the current directory as a `.tgz` file.
```plain
helm fetch rancher-<CHART_REPO>/rancher
```
> Want additional options? Need help troubleshooting? See [Kubernetes Install: Advanced Options]({{<baseurl>}}/rancher/v2.x/en/installation/k8s-install/helm-rancher/#advanced-configurations).
### B. Choose your SSL Configuration
Rancher Server is designed to be secure by default and requires SSL/TLS configuration.
When Rancher is installed on an air gapped Kubernetes cluster, there are two recommended options for the source of the certificate.
> **Note:** If you want to terminate SSL/TLS externally, see [TLS termination on an External Load Balancer]({{<baseurl>}}/rancher/v2.x/en/installation/options/chart-options/#external-tls-termination).
| Configuration | Chart option | Description | Requires cert-manager |
| ------------------------------------------ | ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------- |
| Rancher Generated Self-Signed Certificates | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self signed)<br> This is the **default** and does not need to be added when rendering the Helm template. | yes |
| Certificates from Files | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s). <br> This option must be passed when rendering the Rancher Helm template. | no |
### C. Render the Rancher Helm Template
When setting up the Rancher Helm template, there are several options in the Helm chart that are designed specifically for air gap installations.
| Chart Option | Chart Value | Description |
| ----------------------- | -------------------------------- | ---- |
| `certmanager.version` | "<version>" | Configure the proper Rancher TLS issuer depending on the running cert-manager version. |
| `systemDefaultRegistry` | `<REGISTRY.YOURDOMAIN.COM:PORT>` | Configure Rancher server to always pull from your private registry when provisioning clusters. |
| `useBundledSystemChart` | `true` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located in GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. _Available as of v2.3.0_ |
Based on the choice you made in [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration), complete one of the procedures below.
{{% accordion id="self-signed" label="Option A-Default Self-Signed Certificate" %}}
By default, Rancher generates a CA and uses cert-manager to issue the certificate for access to the Rancher server interface.
> **Note:**
> Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.11.0, please see our [upgrade cert-manager documentation]({{<baseurl>}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/).
1. From a system connected to the internet, add the cert-manager repo to Helm.
```plain
helm repo add jetstack https://charts.jetstack.io
helm repo update
```
1. Fetch the latest cert-manager chart available from the [Helm chart repository](https://hub.helm.sh/charts/jetstack/cert-manager).
```plain
helm fetch jetstack/cert-manager --version v0.12.0
```
1. Render the cert manager template with the options you would like to use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files.
```plain
helm template ./cert-manager-v0.12.0.tgz --output-dir . \
--name cert-manager --namespace cert-manager \
--set image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-controller
--set webhook.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-webhook
--set cainjector.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-cainjector
```
1. Download the required CRD file for cert-manager
```plain
curl -L -o cert-manager/cert-manager-crd.yaml https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml
```
1. Render the Rancher template, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.
Placeholder | Description
------------|-------------
`<VERSION>` | The version number of the output tarball.
`<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer.
`<REGISTRY.YOURDOMAIN.COM:PORT>` | The DNS name for your private registry.
`<CERTMANAGER_VERSION>` | Cert-manager version running on k8s cluster.
```plain
helm template ./rancher-<VERSION>.tgz --output-dir . \
--name rancher \
--namespace cattle-system \
--set hostname=<RANCHER.YOURDOMAIN.COM> \
--set certmanager.version=<CERTMANAGER_VERSION> \
--set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
--set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Available as of v2.2.0, set a default private registry to be used in Rancher
--set useBundledSystemChart=true # Available as of v2.3.0, use the packaged Rancher system charts
```
{{% /accordion %}}
{{% accordion id="secret" label="Option B: Certificates From Files using Kubernetes Secrets" %}}
Create Kubernetes secrets from your own certificates for Rancher to use. The common name for the cert will need to match the `hostname` option in the command below, or the ingress controller will fail to provision the site for Rancher.
Render the Rancher template, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.
| Placeholder | Description |
| -------------------------------- | ----------------------------------------------- |
| `<VERSION>` | The version number of the output tarball. |
| `<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer. |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | The DNS name for your private registry. |
```plain
helm template ./rancher-<VERSION>.tgz --output-dir . \
--name rancher \
--namespace cattle-system \
--set hostname=<RANCHER.YOURDOMAIN.COM> \
--set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
--set ingress.tls.source=secret \
--set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Available as of v2.2.0, set a default private registry to be used in Rancher
--set useBundledSystemChart=true # Available as of v2.3.0, use the packaged Rancher system charts
```
If you are using a Private CA signed cert, add `--set privateCA=true` following `--set ingress.tls.source=secret`:
```plain
helm template ./rancher-<VERSION>.tgz --output-dir . \
--name rancher \
--namespace cattle-system \
--set hostname=<RANCHER.YOURDOMAIN.COM> \
--set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
--set ingress.tls.source=secret \
--set privateCA=true \
--set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Available as of v2.2.0, set a default private registry to be used in Rancher
--set useBundledSystemChart=true # Available as of v2.3.0, use the packaged Rancher system charts
```
Then refer to [Adding TLS Secrets]({{<baseurl>}}/rancher/v2.x/en/installation/options/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them.
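The linked page has the authoritative steps; a typical invocation creates a TLS secret named `tls-rancher-ingress` in the `cattle-system` namespace from your certificate and key files (the file names below are placeholders). If you passed `privateCA=true`, the CA certificate is published in an additional secret as described on that page.
```plain
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```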
{{% /accordion %}}
### D. Install Rancher
Copy the rendered manifest directories to a system that has access to the Rancher server cluster to complete installation.
Use `kubectl` to create namespaces and apply the rendered manifests.
If you chose to use self-signed certificates in [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration), install cert-manager.
{{% accordion id="install-cert-manager" label="Self-Signed Certificate Installs - Install Cert-manager" %}}
If you are using self-signed certificates, install cert-manager:
1. Create the namespace for cert-manager.
```plain
kubectl create namespace cert-manager
```
1. Create the cert-manager CustomResourceDefinitions (CRDs).
```plain
kubectl apply -f cert-manager/cert-manager-crd.yaml
```
> **Important:**
> If you are running Kubernetes v1.15 or below, you will need to add the `--validate=false` flag to your `kubectl apply` command above, or else you will receive a validation error relating to the `x-kubernetes-preserve-unknown-fields` field in cert-manager's CustomResourceDefinition resources. This is a benign error and occurs due to the way `kubectl` performs resource validation.
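>
> For example, with the CRD file downloaded earlier:
>
> ```plain
> kubectl apply --validate=false -f cert-manager/cert-manager-crd.yaml
> ```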
1. Launch cert-manager.
```plain
kubectl apply -R -f ./cert-manager
```
{{% /accordion %}}
Install Rancher:
```plain
kubectl create namespace cattle-system
kubectl -n cattle-system apply -R -f ./rancher
```
**Step Result:** If you are installing Rancher v2.3.0+, the installation is complete.
### E. For Rancher versions prior to v2.3.0, Configure System Charts
If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.x/en/installation/options/local-system-charts/#setting-up-system-charts-for-rancher-prior-to-v2-3-0).
### Additional Resources
These resources could be helpful when installing Rancher:
- [Rancher Helm chart options]({{<baseurl>}}/rancher/v2.x/en/installation/options/chart-options/)
- [Adding TLS secrets]({{<baseurl>}}/rancher/v2.x/en/installation/options/tls-secrets/)
- [Troubleshooting Rancher Kubernetes Installations]({{<baseurl>}}/rancher/v2.x/en/installation/options/troubleshooting/)
{{% /tab %}}
{{% tab "Docker Install" %}}
The Docker installation is for Rancher users who want to **test** out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. **Important: If you install Rancher following the Docker installation guide, there is no upgrade path to transition your Docker installation to a Kubernetes Installation.** Instead of running the single node installation, you have the option to follow the Kubernetes Install guide, but only use one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it a Kubernetes Installation.
For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.
| Environment Variable Key | Environment Variable Value | Description |
| -------------------------------- | -------------------------------- | ---- |
| `CATTLE_SYSTEM_DEFAULT_REGISTRY` | `<REGISTRY.YOURDOMAIN.COM:PORT>` | Configure Rancher server to always pull from your private registry when provisioning clusters. |
| `CATTLE_SYSTEM_CATALOG` | `bundled` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located in GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. _Available as of v2.3.0_ |
> **Do you want to...**
>
> - Configure custom CA root certificate to access your services? See [Custom CA root certificate]({{<baseurl>}}/rancher/v2.x/en/installation/options/chart-options/#additional-trusted-cas).
> - Record all transactions with the Rancher API? See [API Auditing]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/#api-audit-log).
- For Rancher prior to v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher prior to v2.3.0.]({{<baseurl>}}/rancher/v2.x/en/installation/options/local-system-charts/#setting-up-system-charts-for-rancher-prior-to-v2-3-0)
Choose from the following options:
{{% accordion id="option-a" label="Option A-Default Self-Signed Certificate" %}}
If you are installing Rancher in a development or testing environment where identity verification isn't a concern, install Rancher using the self-signed certificate that it generates. This installation option omits the hassle of generating a certificate yourself.
Log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder.
| Placeholder | Description |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. |
| `<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{<baseurl>}}/rancher/v2.x/en/installation/options/server-tags/) that you want to install. |
```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
-e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Set a default private registry to be used in Rancher
-e CATTLE_SYSTEM_CATALOG=bundled \ #Available as of v2.3.0, use the packaged Rancher system charts
<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```
{{% /accordion %}}
{{% accordion id="option-b" label="Option B-Bring Your Own Certificate: Self-Signed" %}}
In development or testing environments where your team will access your Rancher server, create a self-signed certificate for use with your install so that your team can verify they're connecting to your instance of Rancher.
> **Prerequisites:**
> From a computer with an internet connection, create a self-signed certificate using [OpenSSL](https://www.openssl.org/) or another method of your choice.
>
> - The certificate files must be in [PEM format]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/#pem).
> - In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [SSL FAQ / Troubleshooting]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/#cert-order).
After creating your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Use the `-v` flag and provide the path to your certificates to mount them in your container.
| Placeholder | Description |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `<CERT_DIRECTORY>` | The path to the directory containing your certificate files. |
| `<FULL_CHAIN.pem>` | The path to your full certificate chain. |
| `<PRIVATE_KEY.pem>` | The path to the private key for your certificate. |
| `<CA_CERTS.pem>` | The path to the certificate authority's certificate. |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. |
| `<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{<baseurl>}}/rancher/v2.x/en/installation/options/server-tags/) that you want to install. |
```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
-v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
-v /<CERT_DIRECTORY>/<CA_CERTS.pem>:/etc/rancher/ssl/cacerts.pem \
-e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Set a default private registry to be used in Rancher
-e CATTLE_SYSTEM_CATALOG=bundled \ #Available as of v2.3.0, use the packaged Rancher system charts
<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```
{{% /accordion %}}
{{% accordion id="option-c" label="Option C-Bring Your Own Certificate: Signed by Recognized CA" %}}
In development or testing environments where you're exposing an app publicly, use a certificate signed by a recognized CA so that your user base doesn't encounter security warnings.
> **Prerequisite:** The certificate files must be in [PEM format]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/#pem).
After obtaining your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Because your certificate is signed by a recognized CA, mounting an additional CA certificate file is unnecessary.
| Placeholder | Description |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `<CERT_DIRECTORY>` | The path to the directory containing your certificate files. |
| `<FULL_CHAIN.pem>` | The path to your full certificate chain. |
| `<PRIVATE_KEY.pem>` | The path to the private key for your certificate. |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. |
| `<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{<baseurl>}}/rancher/v2.x/en/installation/options/server-tags/) that you want to install. |
> **Note:** Use the `--no-cacerts` argument to the container to disable the default CA certificate generated by Rancher.
```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
--no-cacerts \
-v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
-v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
-e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Set a default private registry to be used in Rancher
-e CATTLE_SYSTEM_CATALOG=bundled \ #Available as of v2.3.0, use the packaged Rancher system charts
<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```
{{% /accordion %}}
If you are installing Rancher v2.3.0+, the installation is complete.
If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.x/en/installation/options/local-system-charts/#setting-up-system-charts-for-rancher-prior-to-v2-3-0).
{{% /tab %}}
{{% /tabs %}}
@@ -1,82 +0,0 @@
---
title: '3. Install Kubernetes with RKE (Kubernetes Installs Only)'
weight: 300
aliases:
- /rancher/v2.x/en/installation/air-gap-high-availability/install-kube
---
This section is about how to prepare to launch a Kubernetes cluster which is used to deploy Rancher server for your air gapped environment.
Since a Kubernetes Installation requires a Kubernetes cluster, we will create a Kubernetes cluster using [Rancher Kubernetes Engine]({{<baseurl>}}/rke/latest/en/) (RKE). Before being able to start your Kubernetes cluster, you'll need to [install RKE]({{<baseurl>}}/rke/latest/en/installation/) and create an RKE config file.
- [A. Create an RKE Config File](#a-create-an-rke-config-file)
- [B. Run RKE](#b-run-rke)
- [C. Save Your Files](#c-save-your-files)
### A. Create an RKE Config File
From a system that can access ports 22/tcp and 6443/tcp on your host nodes, use the sample below to create a new file named `rancher-cluster.yml`. This file is a Rancher Kubernetes Engine configuration file (RKE config file), which describes the cluster you're deploying Rancher to.
Replace the values in the code sample below with the help of the _RKE Options_ table. Use the IP addresses or DNS names of the [3 nodes]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap-high-availability/provision-hosts) you created.
> **Tip:** For more details on the options available, see the RKE [Config Options]({{<baseurl>}}/rke/latest/en/config-options/).
<figcaption>RKE Options</figcaption>
| Option | Required | Description |
| ------------------ | -------------------- | --------------------------------------------------------------------------------------- |
| `address` | ✓ | The DNS or IP address for the node within the air gap network. |
| `user` | ✓ | A user that can run docker commands. |
| `role` | ✓ | List of Kubernetes roles assigned to the node. |
| `internal_address` | optional<sup>1</sup> | The DNS or IP address used for internal cluster traffic. |
| `ssh_key_path` | | Path to SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`). |
> <sup>1</sup> Some services like AWS EC2 require setting the `internal_address` if you want to use self-referencing security groups or firewalls.
```yaml
nodes:
- address: 10.10.3.187 # node air gap network IP
internal_address: 172.31.7.22 # node intra-cluster IP
user: rancher
role: ['controlplane', 'etcd', 'worker']
ssh_key_path: /home/user/.ssh/id_rsa
- address: 10.10.3.254 # node air gap network IP
internal_address: 172.31.13.132 # node intra-cluster IP
user: rancher
role: ['controlplane', 'etcd', 'worker']
ssh_key_path: /home/user/.ssh/id_rsa
- address: 10.10.3.89 # node air gap network IP
internal_address: 172.31.3.216 # node intra-cluster IP
user: rancher
role: ['controlplane', 'etcd', 'worker']
ssh_key_path: /home/user/.ssh/id_rsa
private_registries:
- url: <REGISTRY.YOURDOMAIN.COM:PORT> # private registry url
user: rancher
password: '*********'
is_default: true
```
### B. Run RKE
After configuring `rancher-cluster.yml`, bring up your Kubernetes cluster:
```
rke up --config ./rancher-cluster.yml
```
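When `rke up` finishes, it writes a kubeconfig file named `kube_config_rancher-cluster.yml` to the current directory. As an optional sanity check (assuming `kubectl` is installed on the same workstation), you can confirm that all three nodes registered:

```
kubectl --kubeconfig kube_config_rancher-cluster.yml get nodes
```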
### C. Save Your Files
> **Important**
> The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.
Save a copy of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_rancher-cluster.yml`: The [Kubeconfig file]({{<baseurl>}}/rke/latest/en/kubeconfig/) for the cluster, this file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file]({{<baseurl>}}/rke/latest/en/installation/#kubernetes-cluster-state), this file contains credentials for full access to the cluster.<br/><br/>_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._
> **Note:** The "rancher-cluster" parts of the latter two file names depend on how you name the RKE cluster configuration file.
### [Next: Install Rancher]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher)
@@ -1,274 +0,0 @@
---
title: '2. Collect and Publish Images to your Private Registry'
weight: 200
aliases:
- /rancher/v2.x/en/installation/air-gap-installation/prepare-private-reg/
- /rancher/v2.x/en/installation/air-gap-high-availability/prepare-private-registry/
- /rancher/v2.x/en/installation/air-gap-single-node/prepare-private-registry/
- /rancher/v2.x/en/installation/air-gap-single-node/config-rancher-for-private-reg/
- /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-for-private-reg/
---
> **Prerequisites:** You must have a [private registry](https://docs.docker.com/registry/deploying/) available to use.
>
> **Note:** Populating the private registry with images is the same process for HA and Docker installations; the differences in this section depend on whether or not you plan to provision a Windows cluster.
By default, all images used to [provision Kubernetes clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/) or launch any [tools]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/) in Rancher, e.g. monitoring, pipelines, and alerts, are pulled from Docker Hub. In an air gap installation of Rancher, you will need a private registry that is accessible to your Rancher server. You will then load the registry with all the required images.
This section describes how to set up your private registry so that when you install Rancher, Rancher will pull all the required images from this registry.
By default, we provide steps for populating your private registry assuming you are provisioning Linux-only clusters. If you plan on provisioning any [Windows clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/), there are separate instructions to support the images needed for a Windows cluster.
{{% tabs %}}
{{% tab "Linux Only Clusters" %}}
For Rancher servers that will only provision Linux clusters, these are the steps to populate your private registry.
A. Find the required assets for your Rancher version <br>
B. Collect all the required images <br>
C. Save the images to your workstation <br>
D. Populate the private registry
### Prerequisites
These steps expect you to use a Linux workstation that has internet access, access to your private registry, and at least 20 GB of disk space.
If you will use ARM64 hosts, the registry must support manifests. As of April 2020, Amazon Elastic Container Registry does not support manifests.
### A. Find the required assets for your Rancher version
1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments. Click **Assets.**
2. From the release's **Assets** section, download the following files:
| Release File | Description |
| ---------------- | -------------- |
| `rancher-images.txt` | This file contains a list of images needed to install Rancher, provision clusters, and use Rancher tools. |
| `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. |
| `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. |
### B. Collect all the required images (For Kubernetes Installs using Rancher Generated Self-Signed Certificate)
In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS certificates, you must also add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt`. Skip this step if you are using your own certificates.
1. Fetch the latest `cert-manager` Helm chart and parse the template for image details:
> **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation]({{<baseurl>}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/).
```plain
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm fetch jetstack/cert-manager --version v0.12.0
helm template ./cert-manager-<version>.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt
```
2. Sort and unique the images list to remove any overlap between the sources:
```plain
sort -u rancher-images.txt -o rancher-images.txt
```
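As an optional sanity check (a small sketch, assuming the commands above completed without errors), you can confirm that the cert-manager images were appended to the list:

```plain
grep cert-manager ./rancher-images.txt
```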
### C. Save the images to your workstation
1. Make `rancher-save-images.sh` an executable:
```
chmod +x rancher-save-images.sh
```
1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images:
```plain
./rancher-save-images.sh --image-list ./rancher-images.txt
```
**Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, the script outputs a tarball named `rancher-images.tar.gz` in your current directory. Check that the file is present.
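For example, a quick way to confirm the tarball was written (an optional check, not produced by the script itself):

```plain
ls -lh ./rancher-images.tar.gz
```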
### D. Populate the private registry
Move the images in `rancher-images.tar.gz` to your private registry using the scripts to load the images. The `rancher-images.txt` file is expected to be on the workstation in the same directory where you are running the `rancher-load-images.sh` script.
1. Log into your private registry if required:
```plain
docker login <REGISTRY.YOURDOMAIN.COM:PORT>
```
1. Make `rancher-load-images.sh` an executable:
```
chmod +x rancher-load-images.sh
```
1. Use `rancher-load-images.sh` with the `rancher-images.txt` image list to extract, tag, and push the images from `rancher-images.tar.gz` to your private registry:
```plain
./rancher-load-images.sh --image-list ./rancher-images.txt --registry <REGISTRY.YOURDOMAIN.COM:PORT>
```
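Optionally, you can spot-check that the images landed in the registry. This is a sketch that assumes your registry exposes the standard Docker Registry HTTP API v2 and that your credentials allow listing repositories; `<USER>` and `<PASSWORD>` are illustrative placeholders:

```plain
# List the repositories now present in the private registry
curl -s -u <USER>:<PASSWORD> https://<REGISTRY.YOURDOMAIN.COM:PORT>/v2/_catalog
```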
{{% /tab %}}
{{% tab "Linux and Windows Clusters" %}}
_Available as of v2.3.0_
For Rancher servers that will provision both Linux and Windows clusters, there are separate steps to populate your private registry with the Windows images and the Linux images. Since a Windows cluster is a mix of Linux and Windows nodes, the Linux images pushed into the private registry are manifests that support both image types.
### Windows Steps
The Windows images need to be collected and pushed from a Windows server workstation.
A. Find the required assets for your Rancher version <br>
B. Save the images to your Windows Server workstation <br>
C. Prepare the Docker daemon <br>
D. Populate the private registry
{{% accordion label="Collecting and Populating Windows Images into the Private Registry"%}}
### Prerequisites
These steps expect you to use a Windows Server 1809 workstation that has internet access, access to your private registry, and at least 50 GB of disk space.
The workstation must have Docker 18.02+ in order to support manifests, which are required when provisioning Windows clusters.
Your registry must support manifests. As of April 2020, Amazon Elastic Container Registry does not support manifests.
### A. Find the required assets for your Rancher version
1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments.
2. From the release's "Assets" section, download the following files:
| Release File | Description |
|------------------------|-------------------|
| `rancher-windows-images.txt` | This file contains a list of Windows images needed to provision Windows clusters. |
| `rancher-save-images.ps1` | This script pulls all the images in the `rancher-windows-images.txt` from Docker Hub and saves all of the images as `rancher-windows-images.tar.gz`. |
| `rancher-load-images.ps1` | This script loads the images from the `rancher-windows-images.tar.gz` file and pushes them to your private registry. |
### B. Save the images to your Windows Server workstation
1. Using `powershell`, go to the directory that has the files that were downloaded in the previous step.
1. Run `rancher-save-images.ps1` to create a tarball of all the required images:
```plain
./rancher-save-images.ps1
```
**Step Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, the script outputs a tarball named `rancher-windows-images.tar.gz` in your current directory. Check that the file is present.
### C. Prepare the Docker daemon
Append your private registry address to the `allow-nondistributable-artifacts` config field in the Docker daemon configuration (`C:\ProgramData\Docker\config\daemon.json`). This step is required because the base images for Windows are maintained in the `mcr.microsoft.com` registry; their layers are missing from Docker Hub and need to be pulled into the private registry.
```
{
...
"allow-nondistributable-artifacts": [
...
"<REGISTRY.YOURDOMAIN.COM:PORT>"
]
...
}
```
### D. Populate the private registry
Move the images in `rancher-windows-images.tar.gz` to your private registry using the scripts to load the images. The `rancher-windows-images.txt` file is expected to be on the workstation in the same directory where you are running the `rancher-load-images.ps1` script.
1. Using `powershell`, log into your private registry if required:
```plain
docker login <REGISTRY.YOURDOMAIN.COM:PORT>
```
1. Using `powershell`, use `rancher-load-images.ps1` to extract, tag, and push the images from `rancher-windows-images.tar.gz` to your private registry:
```plain
./rancher-load-images.ps1 --registry <REGISTRY.YOURDOMAIN.COM:PORT>
```
{{% /accordion %}}
### Linux Steps
The Linux images need to be collected and pushed from a Linux host, and this _must be done after_ populating the private registry with the Windows images. These steps differ from the Linux-only steps because the Linux images that are pushed are actually manifests that support both Windows and Linux images.
A. Find the required assets for your Rancher version <br>
B. Collect all the required images <br>
C. Save the images to your Linux workstation <br>
D. Populate the private registry
{{% accordion label="Collecting and Populating Linux Images into the Private Registry" %}}
### Prerequisites
You must populate the private registry with the Windows images before populating the private registry with Linux images. If you have already populated the registry with Linux images, you will need to follow these instructions again as they will publish manifests that support Windows and Linux images.
These steps expect you to use a Linux workstation that has internet access, access to your private registry, and at least 20 GB of disk space.
The workstation must have Docker 18.02+ in order to support manifests, which are required when provisioning Windows clusters.
### A. Find the required assets for your Rancher version
1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments.
2. From the release's **Assets** section, download the following files, which are required to install Rancher in an air gap environment:
| Release File | Description |
|----------------------------|------|
| `rancher-images.txt` | This file contains a list of images needed to install Rancher, provision clusters, and use Rancher tools. |
| `rancher-windows-images.txt` | This file contains a list of images needed to provision Windows clusters. |
| `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. |
| `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. |
### B. Collect all the required images
**For Kubernetes Installs using Rancher Generated Self-Signed Certificate:** In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS certificates, you must also add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt`. Skip this step if you are using your own certificates.
1. Fetch the latest `cert-manager` Helm chart and parse the template for image details:
> **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation]({{<baseurl>}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/).
```plain
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm fetch jetstack/cert-manager --version v0.12.0
helm template ./cert-manager-<version>.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt
```
2. Sort and unique the images list to remove any overlap between the sources:
```plain
sort -u rancher-images.txt -o rancher-images.txt
```
### C. Save the images to your workstation
1. Make `rancher-save-images.sh` an executable:
```
chmod +x rancher-save-images.sh
```
1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images:
```plain
./rancher-save-images.sh --image-list ./rancher-images.txt
```
**Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, the script outputs a tarball named `rancher-images.tar.gz` in your current directory. Check that the file is present.
### D. Populate the private registry
Move the images in `rancher-images.tar.gz` to your private registry using the `rancher-load-images.sh` script. The `rancher-images.txt` and `rancher-windows-images.txt` image lists are expected to be on the workstation in the same directory where you are running the `rancher-load-images.sh` script.
1. Log into your private registry if required:
```plain
docker login <REGISTRY.YOURDOMAIN.COM:PORT>
```
1. Make `rancher-load-images.sh` an executable:
```
chmod +x rancher-load-images.sh
```
1. Use `rancher-load-images.sh` to extract, tag and push the images from `rancher-images.tar.gz` to your private registry:
```plain
./rancher-load-images.sh --image-list ./rancher-images.txt \
--windows-image-list ./rancher-windows-images.txt \
--registry <REGISTRY.YOURDOMAIN.COM:PORT>
```
{{% /accordion %}}
{{% /tab %}}
{{% /tabs %}}
### [Next: Kubernetes Installs - Launch a Kubernetes Cluster with RKE]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/)
### [Next: Docker Installs - Install Rancher]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/)
@@ -1,105 +0,0 @@
---
title: '1. Prepare your Node(s)'
weight: 100
aliases:
- /rancher/v2.x/en/installation/air-gap-high-availability/provision-hosts
- /rancher/v2.x/en/installation/air-gap-single-node/provision-host
---
This section describes how to prepare your node(s) for installing Rancher in an air gapped environment. An air gapped environment could be one where the Rancher server is installed offline, behind a firewall, or behind a proxy. There are _tabs_ for either a high availability (recommended) or a Docker installation.
# Prerequisites
{{% tabs %}}
{{% tab "Kubernetes Install (Recommended)" %}}
### OS, Docker, Hardware, and Networking
Make sure that your node(s) fulfill the general [installation requirements.]({{<baseurl>}}/rancher/v2.x/en/installation/requirements/)
### Private Registry
Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machines.
If you need help with creating a private registry, please refer to the [Docker documentation](https://docs.docker.com/registry/).
### CLI Tools
The following CLI tools are required for the Kubernetes Install. Make sure these tools are installed on your workstation and available in your `$PATH`.
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) - Kubernetes command-line tool.
- [rke]({{<baseurl>}}/rke/latest/en/installation/) - Rancher Kubernetes Engine, cli for building Kubernetes clusters.
- [helm](https://docs.helm.sh/using_helm/#installing-helm) - Package management for Kubernetes. Refer to the [Helm version requirements]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm-version) to choose a version of Helm to install Rancher.
{{% /tab %}}
{{% tab "Docker Install" %}}
### OS, Docker, Hardware, and Networking
Make sure that your node(s) fulfill the general [installation requirements.]({{<baseurl>}}/rancher/v2.x/en/installation/requirements/)
### Private Registry
Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machines.
If you need help with creating a private registry, please refer to the [Docker documentation](https://docs.docker.com/registry/).
{{% /tab %}}
{{% /tabs %}}
# Set up Infrastructure
{{% tabs %}}
{{% tab "Kubernetes Install (Recommended)" %}}
Rancher recommends installing Rancher on a Kubernetes cluster. A highly available Kubernetes install consists of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails.
### Recommended Architecture
- DNS for Rancher should resolve to a layer 4 load balancer
- The Load Balancer should forward port TCP/80 and TCP/443 to all 3 nodes in the Kubernetes cluster.
- The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443.
- The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment.
<figcaption>Rancher installed on a Kubernetes cluster with layer 4 load balancer, depicting SSL termination at ingress controllers</figcaption>
![Rancher HA]({{<baseurl>}}/img/rancher/ha/rancher2ha.svg)
### A. Provision three air gapped Linux hosts according to our requirements
These hosts will be disconnected from the internet, but must be able to connect to your private registry.
View hardware and software requirements for each of your cluster nodes in [Requirements]({{<baseurl>}}/rancher/v2.x/en/installation/requirements).
### B. Set up your Load Balancer
When setting up the Kubernetes cluster that will run the Rancher server components, an Ingress controller pod will be deployed on each of your nodes. The Ingress controller pods are bound to ports TCP/80 and TCP/443 on the host network and are the entry point for HTTPS traffic to the Rancher server.
You will need to configure a load balancer as a basic Layer 4 TCP forwarder to direct traffic to these ingress controller pods. The exact configuration will vary depending on your environment.
> **Important:**
> Only use this load balancer (i.e., the `local` cluster Ingress) to load balance the Rancher server. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps.
**Load Balancer Configuration Samples:**
- For an example showing how to set up an NGINX load balancer, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/installation/options/nginx)
- For an example showing how to set up an Amazon NLB load balancer, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/installation/options/nlb)
{{% /tab %}}
{{% tab "Docker Install" %}}
The Docker installation is for Rancher users who want to test out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server.
> **Important:** If you install Rancher following the Docker installation guide, there is no upgrade path to transition your Docker installation to a Kubernetes Installation.
Instead of running the Docker installation, you have the option to follow the Kubernetes Install guide, but use only one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it a high-availability installation.
### A. Provision a single, air gapped Linux host according to our Requirements
This host will be disconnected from the internet, but must be able to connect to your private registry.
View hardware and software requirements for each of your cluster nodes in [Requirements]({{<baseurl>}}/rancher/v2.x/en/installation/requirements).
{{% /tab %}}
{{% /tabs %}}
### [Next: Collect and Publish Images to your Private Registry]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/)
@@ -1,160 +0,0 @@
---
title: Template for an RKE Cluster with a Certificate Signed by Recognized CA and a Layer 4 Load Balancer
weight: 3
---
RKE uses a cluster.yml file to install and configure your Kubernetes cluster.
The following template can be used for the cluster.yml if you have a setup with:
- Certificate signed by a recognized CA
- Layer 4 load balancer
- [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/)
> For more options, refer to [RKE Documentation: Config Options]({{<baseurl>}}/rke/latest/en/config-options/).
```yaml
nodes:
- address: <IP> # hostname or IP to access nodes
user: <USER> # root user (usually 'root')
role: [controlplane,etcd,worker] # K8s roles for node
ssh_key_path: <PEM_FILE> # path to PEM file
- address: <IP>
user: <USER>
role: [controlplane,etcd,worker]
ssh_key_path: <PEM_FILE>
- address: <IP>
user: <USER>
role: [controlplane,etcd,worker]
ssh_key_path: <PEM_FILE>
services:
etcd:
snapshot: true
creation: 6h
retention: 24h
addons: |-
---
kind: Namespace
apiVersion: v1
metadata:
name: cattle-system
---
kind: ServiceAccount
apiVersion: v1
metadata:
name: cattle-admin
namespace: cattle-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cattle-crb
namespace: cattle-system
subjects:
- kind: ServiceAccount
name: cattle-admin
namespace: cattle-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
name: cattle-keys-ingress
namespace: cattle-system
type: Opaque
data:
tls.crt: <BASE64_CRT> # ssl cert for ingress. If self-signed, must be signed by same CA as cattle server
tls.key: <BASE64_KEY> # ssl key for ingress. If self-signed, must be signed by same CA as cattle server
---
apiVersion: v1
kind: Service
metadata:
namespace: cattle-system
name: cattle-service
labels:
app: cattle
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
selector:
app: cattle
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: cattle-system
name: cattle-ingress-http
annotations:
nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for ws to remain shell window open
nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for ws to remain shell window open
spec:
rules:
- host: <FQDN> # FQDN to access cattle server
http:
paths:
- backend:
serviceName: cattle-service
servicePort: 80
tls:
- secretName: cattle-keys-ingress
hosts:
- <FQDN> # FQDN to access cattle server
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
namespace: cattle-system
name: cattle
spec:
replicas: 1
template:
metadata:
labels:
app: cattle
spec:
serviceAccountName: cattle-admin
containers:
# Rancher install via RKE addons is only supported up to v2.0.8
- image: rancher/rancher:v2.0.8
args:
- --no-cacerts
imagePullPolicy: Always
name: cattle-server
# env:
# - name: HTTP_PROXY
# value: "http://your_proxy_address:port"
# - name: HTTPS_PROXY
# value: "http://your_proxy_address:port"
# - name: NO_PROXY
# value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,your_network_ranges_that_dont_need_proxy_to_access"
livenessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 20
periodSeconds: 10
ports:
- containerPort: 80
protocol: TCP
- containerPort: 443
protocol: TCP
```
@@ -1,175 +0,0 @@
---
title: Template for an RKE Cluster with a Self-signed Certificate and Layer 4 Load Balancer
weight: 2
---
RKE uses a cluster.yml file to install and configure your Kubernetes cluster.
The following template can be used for the cluster.yml if you have a setup with:
- Self-signed SSL
- Layer 4 load balancer
- [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/)
> For more options, refer to [RKE Documentation: Config Options]({{<baseurl>}}/rke/latest/en/config-options/).
```yaml
nodes:
- address: <IP> # hostname or IP to access nodes
user: <USER> # root user (usually 'root')
role: [controlplane,etcd,worker] # K8s roles for node
ssh_key_path: <PEM_FILE> # path to PEM file
- address: <IP>
user: <USER>
role: [controlplane,etcd,worker]
ssh_key_path: <PEM_FILE>
- address: <IP>
user: <USER>
role: [controlplane,etcd,worker]
ssh_key_path: <PEM_FILE>
services:
etcd:
snapshot: true
creation: 6h
retention: 24h
addons: |-
---
kind: Namespace
apiVersion: v1
metadata:
name: cattle-system
---
kind: ServiceAccount
apiVersion: v1
metadata:
name: cattle-admin
namespace: cattle-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cattle-crb
namespace: cattle-system
subjects:
- kind: ServiceAccount
name: cattle-admin
namespace: cattle-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
name: cattle-keys-ingress
namespace: cattle-system
type: Opaque
data:
tls.crt: <BASE64_CRT> # ssl cert for ingress. If selfsigned, must be signed by same CA as cattle server
tls.key: <BASE64_KEY> # ssl key for ingress. If selfsigned, must be signed by same CA as cattle server
---
apiVersion: v1
kind: Secret
metadata:
name: cattle-keys-server
namespace: cattle-system
type: Opaque
data:
cacerts.pem: <BASE64_CA> # CA cert used to sign cattle server cert and key
---
apiVersion: v1
kind: Service
metadata:
namespace: cattle-system
name: cattle-service
labels:
app: cattle
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
selector:
app: cattle
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: cattle-system
name: cattle-ingress-http
annotations:
nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for ws to remain shell window open
nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for ws to remain shell window open
spec:
rules:
- host: <FQDN> # FQDN to access cattle server
http:
paths:
- backend:
serviceName: cattle-service
servicePort: 80
tls:
- secretName: cattle-keys-ingress
hosts:
- <FQDN> # FQDN to access cattle server
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
namespace: cattle-system
name: cattle
spec:
replicas: 1
template:
metadata:
labels:
app: cattle
spec:
serviceAccountName: cattle-admin
containers:
# Rancher install via RKE addons is only supported up to v2.0.8
- image: rancher/rancher:v2.0.8
imagePullPolicy: Always
name: cattle-server
# env:
# - name: HTTP_PROXY
# value: "http://your_proxy_address:port"
# - name: HTTPS_PROXY
# value: "http://your_proxy_address:port"
# - name: NO_PROXY
# value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,your_network_ranges_that_dont_need_proxy_to_access"
livenessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 20
periodSeconds: 10
ports:
- containerPort: 80
protocol: TCP
- containerPort: 443
protocol: TCP
volumeMounts:
- mountPath: /etc/rancher/ssl
name: cattle-keys-volume
readOnly: true
volumes:
- name: cattle-keys-volume
secret:
defaultMode: 420
secretName: cattle-keys-server
```
@@ -1,158 +0,0 @@
---
title: Template for an RKE Cluster with a Self-signed Certificate and SSL Termination on Layer 7 Load Balancer
weight: 3
---
RKE uses a cluster.yml file to install and configure your Kubernetes cluster.
This template is intended to be used for RKE add-on installs, which are only supported up to Rancher v2.0.8. Please use the Rancher Helm chart if you are installing a newer Rancher version. For details, see the [Kubernetes Install - Installation Outline]({{<baseurl>}}/rancher/v2.x/en/installation/k8s-install/#installation-outline).
The following template can be used for the cluster.yml if you have a setup with:
- Layer 7 load balancer with self-signed SSL termination (HTTPS)
- [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/)
> For more options, refer to [RKE Documentation: Config Options]({{<baseurl>}}/rke/latest/en/config-options/).
```yaml
nodes:
- address: <IP> # hostname or IP to access nodes
user: <USER> # root user (usually 'root')
role: [controlplane,etcd,worker] # K8s roles for node
ssh_key_path: <PEM_FILE> # path to PEM file
- address: <IP>
user: <USER>
role: [controlplane,etcd,worker]
ssh_key_path: <PEM_FILE>
- address: <IP>
user: <USER>
role: [controlplane,etcd,worker]
ssh_key_path: <PEM_FILE>
services:
etcd:
snapshot: true
creation: 6h
retention: 24h
addons: |-
---
kind: Namespace
apiVersion: v1
metadata:
name: cattle-system
---
kind: ServiceAccount
apiVersion: v1
metadata:
name: cattle-admin
namespace: cattle-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cattle-crb
namespace: cattle-system
subjects:
- kind: ServiceAccount
name: cattle-admin
namespace: cattle-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
name: cattle-keys-server
namespace: cattle-system
type: Opaque
data:
cacerts.pem: <BASE64_CA> # CA cert used to sign cattle server cert and key
---
apiVersion: v1
kind: Service
metadata:
namespace: cattle-system
name: cattle-service
labels:
app: cattle
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: cattle
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: cattle-system
name: cattle-ingress-http
annotations:
nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for ws to remain shell window open
nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for ws to remain shell window open
nginx.ingress.kubernetes.io/ssl-redirect: "false" # Disable redirect to ssl
spec:
rules:
- host: <FQDN>
http:
paths:
- backend:
serviceName: cattle-service
servicePort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
namespace: cattle-system
name: cattle
spec:
replicas: 1
template:
metadata:
labels:
app: cattle
spec:
serviceAccountName: cattle-admin
containers:
# Rancher install via RKE addons is only supported up to v2.0.8
- image: rancher/rancher:v2.0.8
imagePullPolicy: Always
name: cattle-server
# env:
# - name: HTTP_PROXY
# value: "http://your_proxy_address:port"
# - name: HTTPS_PROXY
# value: "http://your_proxy_address:port"
# - name: NO_PROXY
# value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,your_network_ranges_that_dont_need_proxy_to_access"
livenessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 20
periodSeconds: 10
ports:
- containerPort: 80
protocol: TCP
volumeMounts:
- mountPath: /etc/rancher/ssl
name: cattle-keys-volume
readOnly: true
volumes:
- name: cattle-keys-volume
secret:
defaultMode: 420
secretName: cattle-keys-server
```
@@ -1,142 +0,0 @@
---
title: Template for an RKE Cluster with a Recognized CA Certificate and SSL Termination on Layer 7 Load Balancer
weight: 4
---
RKE uses a cluster.yml file to install and configure your Kubernetes cluster.
This template is intended to be used for RKE add-on installs, which are only supported up to Rancher v2.0.8. Please use the Rancher Helm chart if you are installing a newer Rancher version. For details, see the [Kubernetes Install - Installation Outline]({{<baseurl>}}/rancher/v2.x/en/installation/k8s-install/#installation-outline).
The following template can be used for the cluster.yml if you have a setup with:
- Layer 7 load balancer with SSL termination (HTTPS)
- [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/)
> For more options, refer to [RKE Documentation: Config Options]({{<baseurl>}}/rke/latest/en/config-options/).
```yaml
nodes:
- address: <IP> # hostname or IP to access nodes
user: <USER> # root user (usually 'root')
role: [controlplane,etcd,worker] # K8s roles for node
ssh_key_path: <PEM_FILE> # path to PEM file
- address: <IP>
user: <USER>
role: [controlplane,etcd,worker]
ssh_key_path: <PEM_FILE>
- address: <IP>
user: <USER>
role: [controlplane,etcd,worker]
ssh_key_path: <PEM_FILE>
services:
etcd:
snapshot: true
creation: 6h
retention: 24h
addons: |-
---
kind: Namespace
apiVersion: v1
metadata:
name: cattle-system
---
kind: ServiceAccount
apiVersion: v1
metadata:
name: cattle-admin
namespace: cattle-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cattle-crb
namespace: cattle-system
subjects:
- kind: ServiceAccount
name: cattle-admin
namespace: cattle-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
namespace: cattle-system
name: cattle-service
labels:
app: cattle
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: cattle
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: cattle-system
name: cattle-ingress-http
annotations:
nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for ws to remain shell window open
nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for ws to remain shell window open
nginx.ingress.kubernetes.io/ssl-redirect: "false" # Disable redirect to ssl
spec:
rules:
- host: <FQDN>
http:
paths:
- backend:
serviceName: cattle-service
servicePort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
namespace: cattle-system
name: cattle
spec:
replicas: 1
template:
metadata:
labels:
app: cattle
spec:
serviceAccountName: cattle-admin
containers:
# Rancher install via RKE addons is only supported up to v2.0.8
- image: rancher/rancher:v2.0.8
args:
- --no-cacerts
imagePullPolicy: Always
name: cattle-server
# env:
# - name: HTTP_PROXY
# value: "http://your_proxy_address:port"
# - name: HTTPS_PROXY
# value: "http://your_proxy_address:port"
# - name: NO_PROXY
# value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,your_network_ranges_that_dont_need_proxy_to_access"
livenessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 20
periodSeconds: 10
ports:
- containerPort: 80
protocol: TCP
```
@@ -1,8 +0,0 @@
---
title: cluster.yml Templates
weight: 1
---
RKE uses a cluster.yml file to install and configure your Kubernetes cluster. This section provides templates that can be used to create the cluster.yml.
> For more cluster.yml options, refer to the [RKE configuration reference.]({{<baseurl>}}/rke/latest/en/config-options/)
@@ -1,156 +0,0 @@
---
title: Enabling Experimental Features
weight: 8000
---
_Available as of v2.3.0_
Rancher includes some features that are experimental and disabled by default. You might want to enable these features, for example, if you decide that the benefits of using an [unsupported storage type]({{<baseurl>}}/rancher/v2.x/en/installation/options/feature-flags/enable-not-default-storage-drivers) outweigh the risk of using an untested feature. Feature flags were introduced to allow you to try these features that are not enabled by default.
The features can be enabled in three ways:
- [Enable features when starting Rancher.](#enabling-features-when-starting-rancher) When installing Rancher with a CLI, you can use a feature flag to enable a feature by default.
- [Enable features from the Rancher UI](#enabling-features-with-the-rancher-ui) in Rancher v2.3.3+ by going to the **Settings** page.
- [Enable features with the Rancher API](#enabling-features-with-the-rancher-api) after installing Rancher.
Each feature has two values:
- A default value, which can be configured with a flag or environment variable from the command line
- A set value, which can be configured with the Rancher API or UI
If no value has been set, Rancher uses the default value.
Because the API sets the actual value and the command line sets the default value, enabling or disabling a feature with the API or UI overrides any value set with the command line.
For example, if you install Rancher, then set a feature flag to true with the Rancher API, then upgrade Rancher with a command that sets the feature flag to false, the default value will still be false, but the feature will still be enabled because it was set with the Rancher API. If you then deleted the set value (true) with the Rancher API, setting it to NULL, the default value (false) would take effect.
> **Note:** As of v2.4.0, there are some feature flags that may require a restart of the Rancher server container. These features that require a restart are marked in the table of these docs and in the UI.
The following is a list of the feature flags available in Rancher:
- `dashboard`: This feature enables the new experimental UI that has a new look and feel. The dashboard also leverages a new API in Rancher which allows the UI to access the default Kubernetes resources without any intervention from Rancher.
- `istio-virtual-service-ui`: This feature enables a [UI to create, read, update, and delete Istio virtual services and destination rules]({{<baseurl>}}/rancher/v2.x/en/installation/options/feature-flags/istio-virtual-service-ui), which are traffic management features of Istio.
- `proxy`: This feature enables Rancher to use a new simplified code base for the proxy, which can help enhance performance and security. The proxy feature is known to have issues with Helm deployments, which prevents any catalog applications from being deployed, including Rancher's tools such as monitoring, logging, and Istio.
- `unsupported-storage-drivers`: This feature [allows unsupported storage drivers.]({{<baseurl>}}/rancher/v2.x/en/installation/options/feature-flags/enable-not-default-storage-drivers) In other words, it enables types for storage providers and provisioners that are not enabled by default.
The below table shows the availability and default value for feature flags in Rancher:
| Feature Flag Name | Default Value | Status | Available as of | Rancher Restart Required? |
| ----------------------------- | ------------- | ------------ | --------------- |---|
| `dashboard` | `true` | Experimental | v2.4.0 | x |
| `istio-virtual-service-ui` | `false` | Experimental | v2.3.0 | |
| `istio-virtual-service-ui` | `true` | GA | v2.3.2 | |
| `proxy` | `false` | Experimental | v2.4.0 | |
| `unsupported-storage-drivers` | `false` | Experimental | v2.3.0 | |
# Enabling Features when Starting Rancher
When you install Rancher, enable the feature you want with a feature flag. The command is different depending on whether you are installing Rancher on a single node or if you are doing a Kubernetes Installation of Rancher.
> **Note:** Values set from the Rancher API will override the value passed in through the command line.
{{% tabs %}}
{{% tab "Kubernetes Install" %}}
When installing Rancher with a Helm chart, use the `--features` option. In the below example, two features are enabled by passing the feature flag names in a comma-separated list:
```
helm install rancher-latest/rancher \
--name rancher \
--namespace cattle-system \
--set hostname=rancher.my.org \
  --set 'extraEnv[0].name=CATTLE_FEATURES' \ # Available as of v2.3.0
--set 'extraEnv[0].value=<FEATURE-FLAG-NAME-1>=true,<FEATURE-FLAG-NAME-2>=true' # Available as of v2.3.0
```
Note: If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
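To confirm the flags were passed through to the Rancher deployment, you can inspect its environment variables. This is an optional check that assumes the chart was installed into the `cattle-system` namespace with the release name `rancher`:

```
kubectl -n cattle-system get deployment rancher \
  -o jsonpath='{.spec.template.spec.containers[0].env}'
```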
### Rendering the Helm Chart for Air Gap Installations
For an air gap installation of Rancher, you need to add a Helm chart repository and render a Helm template before installing Rancher with Helm. For details, refer to the [air gap installation documentation.]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher)
Here is an example of a command for passing in the feature flag names when rendering the Helm template. In the below example, two features are enabled by passing the feature flag names in a comma separated list.
The Helm 3 command is as follows:
```
helm template rancher ./rancher-<VERSION>.tgz --output-dir . \
--namespace cattle-system \
--set hostname=<RANCHER.YOURDOMAIN.COM> \
--set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
--set ingress.tls.source=secret \
--set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Available as of v2.2.0, set a default private registry to be used in Rancher
  --set useBundledSystemChart=true \ # Available as of v2.3.0, use the packaged Rancher system charts
  --set 'extraEnv[0].name=CATTLE_FEATURES' \ # Available as of v2.3.0
--set 'extraEnv[0].value=<FEATURE-FLAG-NAME-1>=true,<FEATURE-FLAG-NAME-2>=true' # Available as of v2.3.0
```
The Helm 2 command is as follows:
```
helm template ./rancher-<VERSION>.tgz --output-dir . \
--name rancher \
--namespace cattle-system \
--set hostname=<RANCHER.YOURDOMAIN.COM> \
--set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
--set ingress.tls.source=secret \
--set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Available as of v2.2.0, set a default private registry to be used in Rancher
  --set useBundledSystemChart=true \ # Available as of v2.3.0, use the packaged Rancher system charts
  --set 'extraEnv[0].name=CATTLE_FEATURES' \ # Available as of v2.3.0
--set 'extraEnv[0].value=<FEATURE-FLAG-NAME-1>=true,<FEATURE-FLAG-NAME-2>=true' # Available as of v2.3.0
```
{{% /tab %}}
{{% tab "Docker Install" %}}
When installing Rancher with Docker, use the `--features` option. In the below example, two features are enabled by passing the feature flag names in a comma separated list:
```
docker run -d -p 80:80 -p 443:443 \
--restart=unless-stopped \
rancher/rancher:rancher-latest \
  --features=<FEATURE-FLAG-NAME-1>=true,<FEATURE-FLAG-NAME-2>=true # Available as of v2.3.0
```
{{% /tab %}}
{{% /tabs %}}
# Enabling Features with the Rancher UI
_Available as of Rancher v2.3.3_
1. Go to the **Global** view and click **Settings.**
1. Click the **Feature Flags** tab. You will see a list of experimental features.
1. To enable a feature, go to the disabled feature you want to enable and click **&#8942; > Activate.**
**Result:** The feature is enabled.
### Disabling Features with the Rancher UI
1. Go to the **Global** view and click **Settings.**
1. Click the **Feature Flags** tab. You will see a list of experimental features.
1. To disable a feature, go to the enabled feature you want to disable and click **&#8942; > Deactivate.**
**Result:** The feature is disabled.
# Enabling Features with the Rancher API
1. Go to `<RANCHER-SERVER-URL>/v3/features`.
1. In the `data` section, you will see an array containing all of the features that can be turned on with feature flags. The name of the feature is in the `id` field. Click the name of the feature you want to enable.
1. In the upper left corner of the screen, under **Operations,** click **Edit.**
1. In the **Value** drop-down menu, click **True.**
1. Click **Show Request.**
1. Click **Send Request.**
1. Click **Close.**
**Result:** The feature is enabled.
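You can also read the current state of a flag directly from the API, for example with `curl`. This sketch assumes you have created an API key in Rancher; `<ACCESS_KEY>` and `<SECRET_KEY>` are illustrative placeholders:

```
curl -s -u "<ACCESS_KEY>:<SECRET_KEY>" https://<RANCHER-SERVER-URL>/v3/features/<FEATURE-FLAG-NAME>
```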
### Disabling Features with the Rancher API
1. Go to `<RANCHER-SERVER-URL>/v3/features`.
1. In the `data` section, you will see an array containing all of the features that can be turned on with feature flags. The name of the feature is in the `id` field. Click the name of the feature you want to disable.
1. In the upper left corner of the screen, under **Operations,** click **Edit.**
1. In the **Value** drop-down menu, click **False.**
1. Click **Show Request.**
1. Click **Send Request.**
1. Click **Close.**
**Result:** The feature is disabled.
@@ -1,43 +0,0 @@
---
title: Allow Unsupported Storage Drivers
weight: 1
aliases:
- /rancher/v2.x/en/admin-settings/feature-flags/enable-not-default-storage-drivers
---
_Available as of v2.3.0_
This feature allows you to use types for storage providers and provisioners that are not enabled by default.
To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.]({{<baseurl>}}/rancher/v2.x/en/installation/options/feature-flags/)
Environment Variable Key | Default Value | Description
---|---|---
`unsupported-storage-drivers` | `false` | This feature enables types for storage providers and provisioners that are not enabled by default.
### Types for Persistent Volume Plugins that are Enabled by Default
Below is a list of storage types for persistent volume plugins that are enabled by default. When enabling this feature flag, any persistent volume plugins that are not on this list are considered experimental and unsupported:
Name | Plugin
--------|----------
Amazon EBS Disk | `aws-ebs`
AzureFile | `azure-file`
AzureDisk | `azure-disk`
Google Persistent Disk | `gce-pd`
Longhorn | `flex-volume-longhorn`
VMware vSphere Volume | `vsphere-volume`
Local | `local`
Network File System | `nfs`
hostPath | `host-path`
### Types for StorageClass that are Enabled by Default
Below is a list of storage types for a StorageClass that are enabled by default. When enabling this feature flag, any StorageClass types that are not on this list are considered experimental and unsupported:
Name | Plugin
--------|--------
Amazon EBS Disk | `aws-ebs`
AzureFile | `azure-file`
AzureDisk | `azure-disk`
Google Persistent Disk | `gce-pd`
Longhorn | `flex-volume-longhorn`
VMware vSphere Volume | `vsphere-volume`
Local | `local`
@@ -1,34 +0,0 @@
---
title: UI for Istio Virtual Services and Destination Rules
weight: 2
aliases:
- /rancher/v2.x/en/admin-settings/feature-flags/istio-virtual-service-ui
---
_Available as of v2.3.0_
This feature enables a UI that lets you create, read, update and delete virtual services and destination rules, which are traffic management features of Istio.
> **Prerequisite:** Turning on this feature does not enable Istio. A cluster administrator needs to [enable Istio for the cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup) in order to use the feature.
To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.]({{<baseurl>}}/rancher/v2.x/en/installation/options/feature-flags/)
Environment Variable Key | Default Value | Status | Available as of
---|---|---|---
`istio-virtual-service-ui` |`false` | Experimental | v2.3.0
`istio-virtual-service-ui` | `true` | GA | v2.3.2
# About this Feature
A central advantage of Istio's traffic management features is that they allow dynamic request routing, which is useful for canary deployments, blue/green deployments, or A/B testing.
When enabled, this feature turns on a page that lets you configure some traffic management features of Istio using the Rancher UI. Without this feature, you need to use `kubectl` to manage traffic with Istio.
The feature enables two UI tabs: one tab for **Virtual Services** and another for **Destination Rules.**
- **Virtual services** intercept and direct traffic to your Kubernetes services, allowing you to direct percentages of traffic from a request to different services. You can use them to define a set of routing rules to apply when a host is addressed. For details, refer to the [Istio documentation.](https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/)
- **Destination rules** serve as the single source of truth about which service versions are available to receive traffic from virtual services. You can use these resources to define policies that apply to traffic that is intended for a service after routing has occurred. For details, refer to the [Istio documentation.](https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule)
To see these tabs,
1. Go to the project view in Rancher and click **Resources > Istio.**
1. You will see tabs for **Traffic Graph,** which has the Kiali network visualization integrated into the UI, and **Traffic Metrics,** which shows metrics for the success rate and request volume of traffic to your services, among other metrics. Next to these tabs, you should see the tabs for **Virtual Services** and **Destination Rules.**
@@ -1,58 +0,0 @@
---
title: Kubernetes Installation Using Helm 2
weight: 1
---
> After Helm 3 was released, the Rancher installation instructions were updated to use Helm 3.
>
> If you are using Helm 2, we recommend [migrating to Helm 3](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) because it is simpler to use and more secure than Helm 2.
>
> This section provides a copy of the older high-availability Kubernetes Rancher installation instructions that used Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible.
For production environments, we recommend installing Rancher in a high-availability configuration so that your user base can always access Rancher Server. When installed in a Kubernetes cluster, Rancher will integrate with the cluster's etcd database and take advantage of Kubernetes scheduling for high-availability.
This procedure walks you through setting up a 3-node cluster with Rancher Kubernetes Engine (RKE) and installing the Rancher chart with the Helm package manager.
> **Important:** The Rancher management server can only be run on an RKE-managed Kubernetes cluster. Use of Rancher on hosted Kubernetes or other providers is not supported.
> **Important:** For the best performance, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads.
## Recommended Architecture
- DNS for Rancher should resolve to a Layer 4 load balancer (TCP)
- The Load Balancer should forward port TCP/80 and TCP/443 to all 3 nodes in the Kubernetes cluster.
- The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443.
- The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment.
<figcaption>Kubernetes Rancher install with layer 4 load balancer, depicting SSL termination at ingress controllers</figcaption>
![High-availability Kubernetes Install]({{<baseurl>}}/img/rancher/ha/rancher2ha.svg)
<sup>Kubernetes Rancher install with Layer 4 load balancer (TCP), depicting SSL termination at ingress controllers</sup>
## Required Tools
The following CLI tools are required for this install. Please make sure these tools are installed and available in your `$PATH`
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) - Kubernetes command-line tool.
- [rke]({{<baseurl>}}/rke/latest/en/installation/) - Rancher Kubernetes Engine, cli for building Kubernetes clusters.
- [helm](https://docs.helm.sh/using_helm/#installing-helm) - Package management for Kubernetes. Refer to the [Helm version requirements]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm-version) to choose a version of Helm to install Rancher.
## Installation Outline
- [Create Nodes and Load Balancer]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/)
- [Install Kubernetes with RKE]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/)
- [Initialize Helm (tiller)]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-init/)
- [Install Rancher]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/)
## Additional Install Options
- [Migrating from a Kubernetes Install with an RKE Add-on]({{<baseurl>}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/)
## Previous Methods
[RKE add-on install]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/rke-add-on/)
> **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
> Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/#installation-outline).
>
> If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{<baseurl>}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the Helm chart.
@@ -1,30 +0,0 @@
---
title: "1. Create Nodes and Load Balancer"
weight: 185
---
Use your provider of choice to provision 3 nodes and a Load Balancer endpoint for your RKE install.
> **Note:** These nodes must be in the same region/datacenter. You may place these servers in separate availability zones.
### Node Requirements
View the supported operating systems and hardware/software/networking requirements for nodes running Rancher at [Node Requirements]({{<baseurl>}}/rancher/v2.x/en/installation/requirements).
View the OS requirements for RKE at [RKE Requirements]({{<baseurl>}}/rke/latest/en/os/)
### Load Balancer
RKE will configure an Ingress controller pod on each of your nodes. The Ingress controller pods are bound to ports TCP/80 and TCP/443 on the host network and are the entry point for HTTPS traffic to the Rancher server.
Configure a load balancer as a basic Layer 4 TCP forwarder. The exact configuration will vary depending on your environment.
>**Important:**
>Do not use this load balancer (i.e., the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications.
#### Examples
* [Nginx]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/nginx/)
* [Amazon NLB]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/nlb/)
### [Next: Install Kubernetes with RKE]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/)
@@ -1,79 +0,0 @@
---
title: NGINX
weight: 270
---
NGINX will be configured as a Layer 4 load balancer (TCP) that forwards connections to one of your Rancher nodes.
>**Note:**
> In this configuration, the load balancer is positioned in front of your nodes. The load balancer can be any host capable of running NGINX.
>
> One caveat: do not use one of your Rancher nodes as the load balancer.
## Install NGINX
Start by installing NGINX on the node you want to use as a load balancer. NGINX has packages available for all known operating systems. The versions tested are `1.14` and `1.15`. For help installing NGINX, refer to their [install documentation](https://www.nginx.com/resources/wiki/start/topics/tutorials/install/).
The `stream` module is required, and it is present when using the official NGINX packages. Please refer to your OS documentation for how to install and enable the NGINX `stream` module.
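One way to confirm that your NGINX build includes the `stream` module is to check its compile-time options (this assumes the `nginx` binary is on your `PATH`):
```
nginx -V 2>&1 | grep -o with-stream
```
If the module is present, the command prints `with-stream`; no output means the module is missing.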
## Create NGINX Configuration
After installing NGINX, you need to update the NGINX configuration file, `nginx.conf`, with the IP addresses for your nodes.
1. Copy and paste the code sample below into your favorite text editor. Save it as `nginx.conf`.
2. From `nginx.conf`, replace both occurrences (port 80 and port 443) of `<IP_NODE_1>`, `<IP_NODE_2>`, and `<IP_NODE_3>` with the IPs of your [nodes]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/).
>**Note:** See [NGINX Documentation: TCP and UDP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/) for all configuration options.
<figcaption>Example NGINX config</figcaption>
```
worker_processes 4;
worker_rlimit_nofile 40000;
events {
worker_connections 8192;
}
stream {
upstream rancher_servers_http {
least_conn;
server <IP_NODE_1>:80 max_fails=3 fail_timeout=5s;
server <IP_NODE_2>:80 max_fails=3 fail_timeout=5s;
server <IP_NODE_3>:80 max_fails=3 fail_timeout=5s;
}
server {
listen 80;
proxy_pass rancher_servers_http;
}
upstream rancher_servers_https {
least_conn;
server <IP_NODE_1>:443 max_fails=3 fail_timeout=5s;
server <IP_NODE_2>:443 max_fails=3 fail_timeout=5s;
server <IP_NODE_3>:443 max_fails=3 fail_timeout=5s;
}
server {
listen 443;
proxy_pass rancher_servers_https;
}
}
```
3. Save `nginx.conf` to your load balancer at the following path: `/etc/nginx/nginx.conf`.
4. Load the updates to your NGINX configuration by running the following command:
```
# nginx -s reload
```
## Option - Run NGINX as Docker container
Instead of installing NGINX as a package on the operating system, you can run it as a Docker container. Save the edited **Example NGINX config** as `/etc/nginx.conf` and run the following command to launch the NGINX container:
```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /etc/nginx.conf:/etc/nginx/nginx.conf \
nginx:1.14
```
@@ -1,175 +0,0 @@
---
title: Amazon NLB
weight: 277
---
## Objectives
Configuring an Amazon NLB is a multistage process. We've broken it down into multiple tasks so that it's easy to follow.
1. [Create Target Groups](#create-target-groups)
Begin by creating two target groups for the **TCP** protocol, one for TCP port 443 and one for TCP port 80 (which will be redirected to TCP port 443). You'll add your Linux nodes to these groups.
2. [Register Targets](#register-targets)
Add your Linux nodes to the target groups.
3. [Create Your NLB](#create-your-nlb)
Use Amazon's Wizard to create a Network Load Balancer. As part of this process, you'll add the target groups you created in **1. Create Target Groups**.
> **Note:** Rancher only supports using the Amazon NLB when terminating traffic in `tcp` mode for port 443 rather than `tls` mode. This is due to the fact that the NLB does not inject the correct headers into requests when terminated at the NLB. This means that if you want to use certificates managed by the Amazon Certificate Manager (ACM), you should use an ELB or ALB.
## Create Target Groups
Your first NLB configuration step is to create two target groups. Technically, only port 443 is needed to access Rancher, but it's convenient to add a listener for port 80, which will be redirected to port 443 automatically. The NGINX ingress controller on the nodes will make sure that port 80 gets redirected to port 443.
Log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) to get started. Make sure to select the **Region** where your EC2 instances (Linux nodes) are created.
The Target Groups configuration resides in the **Load Balancing** section of the **EC2** service. Select **Services** and choose **EC2**, find the section **Load Balancing** and open **Target Groups**.
{{< img "/img/rancher/ha/nlb/ec2-loadbalancing.png" "EC2 Load Balancing section">}}
Click **Create target group** to create the first target group, for TCP port 443.
### Target Group (TCP port 443)
Configure the first target group according to the table below. Screenshots of the configuration are shown just below the table.
Option | Setting
--------------------------------------|------------------------------------
Target Group Name | `rancher-tcp-443`
Protocol | `TCP`
Port | `443`
Target type | `instance`
VPC | Choose your VPC
Protocol<br/>(Health Check) | `HTTP`
Path<br/>(Health Check) | `/healthz`
Port (Advanced health check) | `override`,`80`
Healthy threshold (Advanced health) | `3`
Unhealthy threshold (Advanced) | `3`
Timeout (Advanced) | `6 seconds`
Interval (Advanced)                    | `10 seconds`
Success codes | `200-399`
<hr>
**Screenshot Target group TCP port 443 settings**<br/>
{{< img "/img/rancher/ha/nlb/create-targetgroup-443.png" "Target group 443">}}
<hr>
**Screenshot Target group TCP port 443 Advanced settings**<br/>
{{< img "/img/rancher/ha/nlb/create-targetgroup-443-advanced.png" "Target group 443 Advanced">}}
<hr>
Click **Create target group** to create the second target group, for TCP port 80.
### Target Group (TCP port 80)
Configure the second target group according to the table below. Screenshots of the configuration are shown just below the table.
Option | Setting
--------------------------------------|------------------------------------
Target Group Name | `rancher-tcp-80`
Protocol | `TCP`
Port | `80`
Target type | `instance`
VPC | Choose your VPC
Protocol<br/>(Health Check) | `HTTP`
Path<br/>(Health Check) | `/healthz`
Port (Advanced health check) | `traffic port`
Healthy threshold (Advanced health) | `3`
Unhealthy threshold (Advanced) | `3`
Timeout (Advanced) | `6 seconds`
Interval (Advanced)                    | `10 seconds`
Success codes | `200-399`
<hr>
**Screenshot Target group TCP port 80 settings**<br/>
{{< img "/img/rancher/ha/nlb/create-targetgroup-80.png" "Target group 80">}}
<hr>
**Screenshot Target group TCP port 80 Advanced settings**<br/>
{{< img "/img/rancher/ha/nlb/create-targetgroup-80-advanced.png" "Target group 80 Advanced">}}
<hr>
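If you prefer to script these steps, target groups roughly equivalent to the two described above could be created with the AWS CLI. This is only a sketch: the VPC ID is a placeholder, and any advanced settings not covered by these flags can be adjusted in the console afterwards.
```
aws elbv2 create-target-group --name rancher-tcp-443 \
  --protocol TCP --port 443 --target-type instance --vpc-id <VPC_ID> \
  --health-check-protocol HTTP --health-check-path /healthz \
  --health-check-port 80 --health-check-interval-seconds 10

aws elbv2 create-target-group --name rancher-tcp-80 \
  --protocol TCP --port 80 --target-type instance --vpc-id <VPC_ID> \
  --health-check-protocol HTTP --health-check-path /healthz \
  --health-check-interval-seconds 10
```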
## Register Targets
Next, add your Linux nodes to both target groups.
Select the target group named **rancher-tcp-443**, click the tab **Targets** and choose **Edit**.
{{< img "/img/rancher/ha/nlb/edit-targetgroup-443.png" "Edit target group 443">}}
Select the instances (Linux nodes) you want to add, and click **Add to registered**.
<hr>
**Screenshot Add targets to target group TCP port 443**<br/>
{{< img "/img/rancher/ha/nlb/add-targets-targetgroup-443.png" "Add targets to target group 443">}}
<hr>
**Screenshot Added targets to target group TCP port 443**<br/>
{{< img "/img/rancher/ha/nlb/added-targets-targetgroup-443.png" "Added targets to target group 443">}}
When the instances are added, click **Save** on the bottom right of the screen.
Repeat those steps, replacing **rancher-tcp-443** with **rancher-tcp-80**. The same instances need to be added as targets to this target group.
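If you are scripting the setup instead, the same registration could be done with the AWS CLI; the target group ARNs and instance IDs below are placeholders:
```
aws elbv2 register-targets --target-group-arn <RANCHER_TCP_443_ARN> \
  --targets Id=<INSTANCE_ID_1> Id=<INSTANCE_ID_2> Id=<INSTANCE_ID_3>
aws elbv2 register-targets --target-group-arn <RANCHER_TCP_80_ARN> \
  --targets Id=<INSTANCE_ID_1> Id=<INSTANCE_ID_2> Id=<INSTANCE_ID_3>
```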
## Create Your NLB
Use Amazon's Wizard to create a Network Load Balancer. As part of this process, you'll add the target groups you created in [Create Target Groups](#create-target-groups).
1. From your web browser, navigate to the [Amazon EC2 Console](https://console.aws.amazon.com/ec2/).
2. From the navigation pane, choose **LOAD BALANCING** > **Load Balancers**.
3. Click **Create Load Balancer**.
4. Choose **Network Load Balancer** and click **Create**.
5. Complete the **Step 1: Configure Load Balancer** form.
- **Basic Configuration**
- Name: `rancher`
- Scheme: `internal` or `internet-facing`
The Scheme that you choose for your NLB is dependent on the configuration of your instances/VPC. If your instances do not have public IPs associated with them, or you will only be accessing Rancher internally, you should set your NLB Scheme to `internal` rather than `internet-facing`.
- **Listeners**
Add the **Load Balancer Protocols** and **Load Balancer Ports** below.
- `TCP`: `443`
- **Availability Zones**
- Select Your **VPC** and **Availability Zones**.
6. Complete the **Step 2: Configure Routing** form.
- From the **Target Group** drop-down, choose **Existing target group**.
- From the **Name** drop-down, choose `rancher-tcp-443`.
- Open **Advanced health check settings**, and configure **Interval** to `10 seconds`.
7. Complete **Step 3: Register Targets**. Since you registered your targets earlier, all you have to do is click **Next: Review**.
8. Complete **Step 4: Review**. Look over the load balancer details and click **Create** when you're satisfied.
9. After AWS creates the NLB, click **Close**.
## Add listener to NLB for TCP port 80
1. Select your newly created NLB and select the **Listeners** tab.
2. Click **Add listener**.
3. Use `TCP`:`80` as **Protocol** : **Port**
4. Click **Add action** and choose **Forward to...**
5. From the **Forward to** drop-down, choose `rancher-tcp-80`.
6. Click **Save** in the top right of the screen.
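For reference, the same listener could be added with the AWS CLI; the load balancer and target group ARNs are placeholders:
```
aws elbv2 create-listener --load-balancer-arn <NLB_ARN> \
  --protocol TCP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<RANCHER_TCP_80_ARN>
```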
@@ -1,66 +0,0 @@
---
title: "Initialize Helm: Install the Tiller Service"
description: "With Helm, you can create configurable deployments instead of using static files. In order to use Helm, the Tiller service needs to be installed on your cluster."
weight: 195
---
Helm is the package management tool of choice for Kubernetes. Helm "charts" provide templating syntax for Kubernetes YAML manifest documents. With Helm we can create configurable deployments instead of just using static files. For more information about creating your own catalog of deployments, check out the docs at [https://helm.sh/](https://helm.sh/). To be able to use Helm, the server-side component `tiller` needs to be installed on your cluster.
For systems without direct internet access, see [Helm - Air Gap]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#helm) for install details.
Refer to the [Helm version requirements]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm-version) to choose a version of Helm to install Rancher.
> **Note:** The installation instructions assume you are using Helm 2. The instructions will be updated for Helm 3 soon. In the meantime, if you want to use Helm 3, refer to [these instructions.](https://github.com/ibrokethecloud/rancher-helm3)
### Install Tiller on the Cluster
> **Important:** Due to an issue with Helm v2.12.0 and cert-manager, please use Helm v2.12.1 or higher.
Helm installs the `tiller` service on your cluster to manage charts. Since RKE enables RBAC by default we will need to use `kubectl` to create a `serviceaccount` and `clusterrolebinding` so `tiller` has permission to deploy to the cluster.
* Create the `ServiceAccount` in the `kube-system` namespace.
* Create the `ClusterRoleBinding` to give the `tiller` account access to the cluster.
* Finally use `helm` to install the `tiller` service
```plain
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account tiller
# Users in China: You will need to specify a specific tiller-image in order to initialize tiller.
# The list of tiller image tags are available here: https://dev.aliyun.com/detail.html?spm=5176.1972343.2.18.ErFNgC&repoId=62085.
# When initializing tiller, you'll need to pass in --tiller-image
helm init --service-account tiller \
--tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:<tag>
```
> **Note:** This `tiller` install has full cluster access, which should be acceptable if the cluster is dedicated to the Rancher server. Check out the [helm docs](https://docs.helm.sh/using_helm/#role-based-access-control) for restricting `tiller` access to suit your security requirements.
### Test your Tiller installation
Run the following command to verify the installation of `tiller` on your cluster:
```
kubectl -n kube-system rollout status deploy/tiller-deploy
Waiting for deployment "tiller-deploy" rollout to finish: 0 of 1 updated replicas are available...
deployment "tiller-deploy" successfully rolled out
```
And run the following command to validate Helm can talk to the `tiller` service:
```
helm version
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
```
### Issues or errors?
See the [Troubleshooting]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-init/troubleshooting/) page.
### [Next: Install Rancher]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/)
@@ -1,23 +0,0 @@
---
title: Troubleshooting
weight: 276
---
### Helm commands show forbidden
When Helm is initialized in the cluster without specifying the correct `ServiceAccount`, the command `helm init` will succeed, but you won't be able to execute most of the other `helm` commands. The following error will be shown:
```
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
```
To resolve this, the server component (`tiller`) needs to be removed and added with the correct `ServiceAccount`. You can use `helm reset --force` to remove the `tiller` from the cluster. Please check if it is removed using `helm version --server`.
```
helm reset --force
Tiller (the Helm server-side component) has been uninstalled from your Kubernetes Cluster.
helm version --server
Error: could not find tiller
```
When you have confirmed that `tiller` has been removed, please follow the steps provided in [Initialize Helm (Install tiller)]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-init/) to install `tiller` with the correct `ServiceAccount`.
@@ -1,218 +0,0 @@
---
title: "4. Install Rancher"
weight: 200
---
Rancher installation is managed using the Helm package manager for Kubernetes. Use `helm` to install the prerequisite and charts to install Rancher.
For systems without direct internet access, see [Air Gap: Kubernetes install]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/).
Refer to the [Helm version requirements]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm-version) to choose a version of Helm to install Rancher.
> **Note:** The installation instructions assume you are using Helm 2. The instructions will be updated for Helm 3 soon. In the meantime, if you want to use Helm 3, refer to [these instructions.](https://github.com/ibrokethecloud/rancher-helm3)
### Add the Helm Chart Repository
Use `helm repo add` command to add the Helm chart repository that contains charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Version of Rancher]({{<baseurl>}}/rancher/v2.x/en/installation/options/server-tags/#helm-chart-repositories).
{{< release-channel >}}
```
helm repo add rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
```
### Choose your SSL Configuration
Rancher Server is designed to be secure by default and requires SSL/TLS configuration.
There are three recommended options for the source of the certificate.
> **Note:** If you want to terminate SSL/TLS externally, see [TLS termination on an External Load Balancer]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/#external-tls-termination).
| Configuration | Chart option | Description | Requires cert-manager |
|-----|-----|-----|-----|
| [Rancher Generated Certificates](#rancher-generated-certificates) | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self-signed)<br/>This is the **default** | [yes](#optional-install-cert-manager) |
| [Let's Encrypt](#let-s-encrypt) | `ingress.tls.source=letsEncrypt` | Use [Let's Encrypt](https://letsencrypt.org/) to issue a certificate | [yes](#optional-install-cert-manager) |
| [Certificates from Files](#certificates-from-files) | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s) | no |
### Optional: Install cert-manager
**Note:** cert-manager is only required for certificates issued by Rancher's generated CA (`ingress.tls.source=rancher`) and Let's Encrypt issued certificates (`ingress.tls.source=letsEncrypt`). You should skip this step if you are using your own certificate files (option `ingress.tls.source=secret`) or if you use [TLS termination on an External Load Balancer]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/#external-tls-termination).
> **Important:**
> Due to an issue with Helm v2.12.0 and cert-manager, please use Helm v2.12.1 or higher.
> Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation]({{<baseurl>}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/).
Rancher relies on [cert-manager](https://github.com/jetstack/cert-manager) to issue certificates from Rancher's own generated CA or to request Let's Encrypt certificates.
These instructions are adapted from the [official cert-manager documentation](https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html#installing-with-helm).
1. Install the CustomResourceDefinition resources separately
```plain
kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml
```
1. Create the namespace for cert-manager
```plain
kubectl create namespace cert-manager
```
1. Label the cert-manager namespace to disable resource validation
```plain
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
```
1. Add the Jetstack Helm repository
```plain
helm repo add jetstack https://charts.jetstack.io
```
1. Update your local Helm chart repository cache
```plain
helm repo update
```
1. Install the cert-manager Helm chart
```plain
helm install \
--name cert-manager \
--namespace cert-manager \
--version v0.12.0 \
jetstack/cert-manager
```
Once you've installed cert-manager, you can verify it is deployed correctly by checking the cert-manager namespace for running pods:
```
kubectl get pods --namespace cert-manager
NAME READY STATUS RESTARTS AGE
cert-manager-7cbdc48784-rpgnt 1/1 Running 0 3m
cert-manager-webhook-5b5dd6999-kst4x 1/1 Running 0 3m
cert-manager-cainjector-3ba5cd2bcd-de332x 1/1 Running 0 3m
```
If the webhook pod (2nd line) is in a ContainerCreating state, it may still be waiting for the Secret to be mounted into the pod. Wait a couple of minutes for this to happen, but if you experience problems, please check the [troubleshooting](https://docs.cert-manager.io/en/latest/getting-started/troubleshooting.html) guide.
<br/>
#### Rancher Generated Certificates
> **Note:** You need to have [cert-manager](#optional-install-cert-manager) installed before proceeding.
The default is for Rancher to generate a CA and use `cert-manager` to issue the certificate for access to the Rancher server interface. Because `rancher` is the default option for `ingress.tls.source`, we are not specifying `ingress.tls.source` when running the `helm install` command.
- Set the `hostname` to the DNS name you pointed at your load balancer.
- If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
```
helm install rancher-<CHART_REPO>/rancher \
--name rancher \
--namespace cattle-system \
--set hostname=rancher.my.org
```
Wait for Rancher to be rolled out:
```
kubectl -n cattle-system rollout status deploy/rancher
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
deployment "rancher" successfully rolled out
```
#### Let's Encrypt
> **Note:** You need to have [cert-manager](#optional-install-cert-manager) installed before proceeding.
This option uses `cert-manager` to automatically request and renew [Let's Encrypt](https://letsencrypt.org/) certificates. This is a free service that provides you with a valid certificate as Let's Encrypt is a trusted CA. This configuration uses HTTP validation (`HTTP-01`) so the load balancer must have a public DNS record and be accessible from the internet.
- Set `hostname` to the public DNS record, set `ingress.tls.source` to `letsEncrypt` and `letsEncrypt.email` to the email address used for communication about your certificate (for example, expiry notices)
- If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
```
helm install rancher-<CHART_REPO>/rancher \
--name rancher \
--namespace cattle-system \
--set hostname=rancher.my.org \
--set ingress.tls.source=letsEncrypt \
--set letsEncrypt.email=me@example.org
```
Wait for Rancher to be rolled out:
```
kubectl -n cattle-system rollout status deploy/rancher
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
deployment "rancher" successfully rolled out
```
#### Certificates from Files
Create Kubernetes secrets from your own certificates for Rancher to use.
> **Note:** The `Common Name` or a `Subject Alternative Names` entry in the server certificate must match the `hostname` option, or the ingress controller will fail to configure correctly. Although an entry in the `Subject Alternative Names` is technically required, having a matching `Common Name` maximizes compatibility with older browsers/applications. If you want to check if your certificates are correct, see [How do I check Common Name and Subject Alternative Names in my server certificate?]({{<baseurl>}}/rancher/v2.x/en/faq/technical/#how-do-i-check-common-name-and-subject-alternative-names-in-my-server-certificate)
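As a quick local check before creating the secret, you can inspect the certificate's subject and Subject Alternative Names with OpenSSL (this assumes your certificate file is named `tls.crt`):
```
openssl x509 -noout -subject -in tls.crt
openssl x509 -noout -text -in tls.crt | grep -A1 "Subject Alternative Name"
```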
- Set `hostname` and set `ingress.tls.source` to `secret`.
- If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
```
helm install rancher-<CHART_REPO>/rancher \
--name rancher \
--namespace cattle-system \
--set hostname=rancher.my.org \
--set ingress.tls.source=secret
```
If you are using a Private CA signed certificate, add `--set privateCA=true` to the command:
```
helm install rancher-<CHART_REPO>/rancher \
--name rancher \
--namespace cattle-system \
--set hostname=rancher.my.org \
--set ingress.tls.source=secret \
--set privateCA=true
```
Now that Rancher is deployed, see [Adding TLS Secrets]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them.
After adding the secrets, check if Rancher was rolled out successfully:
```
kubectl -n cattle-system rollout status deploy/rancher
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
deployment "rancher" successfully rolled out
```
If you see the following error: `error: deployment "rancher" exceeded its progress deadline`, you can check the status of the deployment by running the following command:
```
kubectl -n cattle-system get deploy rancher
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
rancher 3 3 3 3 3m
```
It should show the same count for `DESIRED` and `AVAILABLE`.
### Advanced Configurations
The Rancher chart configuration has many options for customizing the install to suit your specific environment. Here are some common advanced scenarios.
* [HTTP Proxy]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/#http-proxy)
* [Private Docker Image Registry]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/#private-registry-and-air-gap-installs)
* [TLS Termination on an External Load Balancer]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/#external-tls-termination)
See the [Chart Options]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/) for the full list of options.
### Save your options
Make sure you save the `--set` options you used. You will need to use the same options when you upgrade Rancher to new versions with Helm.
### Finishing Up
That's it. You should have a functional Rancher server. Point a browser at the hostname you picked and you should be greeted by the colorful login page.
Doesn't work? Take a look at the [Troubleshooting]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/troubleshooting/) page.
@@ -1,245 +0,0 @@
---
title: Chart Options
weight: 276
---
### Common Options
| Option | Default Value | Description |
| --- | --- | --- |
| `hostname` | " " | `string` - the Fully Qualified Domain Name for your Rancher Server |
| `ingress.tls.source` | "rancher" | `string` - Where to get the cert for the ingress. - "rancher, letsEncrypt, secret" |
| `letsEncrypt.email` | " " | `string` - Your email address |
| `letsEncrypt.environment` | "production" | `string` - Valid options: "staging, production" |
| `privateCA` | false | `bool` - Set to true if your cert is signed by a private CA |
<br/>
### Advanced Options
| Option | Default Value | Description |
| --- | --- | --- |
| `additionalTrustedCAs` | false | `bool` - See [Additional Trusted CAs](#additional-trusted-cas) |
| `addLocal` | "auto" | `string` - Have Rancher detect and import the "local" Rancher server cluster. See [Import "local" Cluster](#import-local-cluster) |
| `antiAffinity` | "preferred" | `string` - AntiAffinity rule for Rancher pods - "preferred, required" |
| `auditLog.destination` | "sidecar" | `string` - Stream to sidecar container console or hostPath volume - "sidecar, hostPath" |
| `auditLog.hostPath` | "/var/log/rancher/audit" | `string` - log file destination on host (only applies when `auditLog.destination` is set to `hostPath`) |
| `auditLog.level` | 0 | `int` - set the [API Audit Log]({{<baseurl>}}/rancher/v2.x/en/installation/api-auditing) level. 0 is off. [0-3] |
| `auditLog.maxAge` | 1 | `int` - maximum number of days to retain old audit log files (only applies when `auditLog.destination` is set to `hostPath`) |
| `auditLog.maxBackups` | 1 | `int` - maximum number of audit log files to retain (only applies when `auditLog.destination` is set to `hostPath`) |
| `auditLog.maxSize` | 100 | `int` - maximum size in megabytes of the audit log file before it gets rotated (only applies when `auditLog.destination` is set to `hostPath`) |
| `busyboxImage` | "busybox" | `string` - Image location for busybox image used to collect audit logs _Note: Available as of v2.2.0_ |
| `debug` | false | `bool` - set debug flag on rancher server |
| `extraEnv` | [] | `list` - set additional environment variables for Rancher _Note: Available as of v2.2.0_ |
| `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials |
| `ingress.extraAnnotations` | {} | `map` - additional annotations to customize the ingress |
| `ingress.configurationSnippet` | "" | `string` - Add additional Nginx configuration. Can be used for proxy configuration. _Note: Available as of v2.0.15, v2.1.10 and v2.2.4_ |
| `proxy` | "" | `string` - HTTP[S] proxy server for Rancher |
| `noProxy` | "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16" | `string` - comma-separated list of hostnames or IP addresses that should not use the proxy |
| `resources` | {} | `map` - rancher pod resource requests & limits |
| `rancherImage` | "rancher/rancher" | `string` - rancher image source |
| `rancherImageTag` | same as chart version | `string` - rancher/rancher image tag |
| `tls` | "ingress" | `string` - See [External TLS Termination](#external-tls-termination) for details. - "ingress, external" |
| `systemDefaultRegistry` | "" | `string` - private registry to be used for all system Docker images, e.g., http://registry.example.com/ _Available as of v2.3.0_ |
| `useBundledSystemChart` | `false` | `bool` - select to use the system-charts packaged with Rancher server. This option is used for air gapped installations. _Available as of v2.3.0_
<br/>
### API Audit Log
Enable the [API Audit Log]({{<baseurl>}}/rancher/v2.x/en/installation/api-auditing/) by setting the audit log level. You can collect this log as you would any container log. Enable the [Logging service under Rancher Tools]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/) for the `System` Project on the Rancher server cluster.
```plain
--set auditLog.level=1
```
By default, enabling audit logging will create a sidecar container in the Rancher pod. This container (`rancher-audit-log`) will stream the log to `stdout`. You can collect this log as you would any container log. When using the sidecar as the audit log destination, the `hostPath`, `maxAge`, `maxBackups`, and `maxSize` options do not apply. It's advised to use your OS or Docker daemon's log rotation features to control disk space use. Enable the [Logging service under Rancher Tools]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/) for the Rancher server cluster or System Project.
Set `auditLog.destination` to `hostPath` to forward logs to a volume shared with the host system instead of streaming to a sidecar container. When setting the destination to `hostPath`, you may want to adjust the other auditLog parameters for log rotation.
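For illustration, a `hostPath` configuration combining these options might look like the following; the values shown are examples only:
```plain
--set auditLog.level=1
--set auditLog.destination=hostPath
--set auditLog.hostPath="/var/log/rancher/audit"
--set auditLog.maxAge=5
--set auditLog.maxBackups=5
--set auditLog.maxSize=100
```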
### Setting Extra Environment Variables
_Available as of v2.2.0_
You can set extra environment variables for Rancher server using `extraEnv`. This list uses the same `name` and `value` keys as the container manifest definitions. Remember to quote the values.
```plain
--set 'extraEnv[0].name=CATTLE_TLS_MIN_VERSION'
--set 'extraEnv[0].value=1.0'
```
### TLS settings
_Available as of v2.2.0_
To set a different TLS configuration, you can use the `CATTLE_TLS_MIN_VERSION` and `CATTLE_TLS_CIPHERS` environment variables. For example, to configure TLS 1.0 as minimum accepted TLS version:
```plain
--set 'extraEnv[0].name=CATTLE_TLS_MIN_VERSION'
--set 'extraEnv[0].value=1.0'
```
See [TLS settings]({{<baseurl>}}/rancher/v2.x/en/admin-settings/tls-settings) for more information and options.
### Import `local` Cluster
By default, Rancher server will detect and import the `local` cluster it's running on. Users with access to the `local` cluster will essentially have "root" access to all the clusters managed by Rancher server.
If this is a concern in your environment, you can set this option to "false" on your initial install.
> Note: This option is only effective on the initial Rancher install. See [Issue 16522](https://github.com/rancher/rancher/issues/16522) for more information.
```plain
--set addLocal="false"
```
### Customizing your Ingress
To customize or use a different ingress with Rancher server you can set your own Ingress annotations.
Example on setting a custom certificate issuer:
```plain
--set ingress.extraAnnotations.'certmanager\.k8s\.io/cluster-issuer'=ca-key-pair
```
_Available as of v2.0.15, v2.1.10 and v2.2.4_
Example on setting a static proxy header with `ingress.configurationSnippet`. This value is parsed like a template so variables can be used.
```plain
--set ingress.configurationSnippet='more_set_input_headers X-Forwarded-Host {{ .Values.hostname }};'
```
### HTTP Proxy
Rancher requires internet access for some functionality (helm charts). Use `proxy` to set your proxy server.
Add your IP exceptions to the `noProxy` list. Make sure you add the Service cluster IP range (default: 10.43.0.0/16) and any worker cluster `controlplane` nodes. Rancher supports CIDR notation ranges in this list.
```plain
--set proxy="http://<username>:<password>@<proxy_url>:<proxy_port>/"
--set noProxy="127.0.0.0/8\,10.0.0.0/8\,172.16.0.0/12\,192.168.0.0/16"
```
### Additional Trusted CAs
If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add additional trusted CAs to Rancher.
```plain
--set additionalTrustedCAs=true
```
Once the Rancher deployment is created, copy your CA certs in pem format into a file named `ca-additional.pem` and use `kubectl` to create the `tls-ca-additional` secret in the `cattle-system` namespace.
```plain
kubectl -n cattle-system create secret generic tls-ca-additional --from-file=ca-additional.pem=./ca-additional.pem
```
### Private Registry and Air Gap Installs
For details on installing Rancher with a private registry, see:
- [Air Gap: Docker Install]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap-single-node/)
- [Air Gap: Kubernetes Install]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap-high-availability/)
### External TLS Termination
We recommend configuring your load balancer as a Layer 4 balancer, forwarding plain 80/tcp and 443/tcp to the Rancher Management cluster nodes. The Ingress Controller on the cluster will redirect http traffic on port 80 to https on port 443.
You may terminate SSL/TLS on an L7 load balancer external to the Rancher cluster (ingress). Use the `--set tls=external` option and point your load balancer at HTTP port 80 on all of the Rancher cluster nodes. This will expose the Rancher interface on HTTP port 80. Be aware that traffic from clients that are allowed to connect directly to the Rancher cluster will not be encrypted. If you choose to do this, we recommend that you restrict direct access at the network level to just your load balancer.
> **Note:** If you are using a Private CA signed certificate, add `--set privateCA=true` and see [Adding TLS Secrets - Using a Private CA Signed Certificate]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/tls-secrets/#using-a-private-ca-signed-certificate) to add the CA cert for Rancher.
Your load balancer must support long-lived WebSocket connections and will need to insert proxy headers so Rancher can route links correctly.
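Putting this together, a minimal set of chart options for external TLS termination might look like the following; the hostname is a placeholder, and `--set privateCA=true` would be added if applicable:
```plain
--set hostname=rancher.my.org
--set tls=external
```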
#### Configuring Ingress for External TLS when Using NGINX v0.25
In NGINX v0.25, the behavior of NGINX has [changed](https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md#0220) regarding forwarding headers and external TLS termination. Therefore, if you are using external TLS termination with NGINX v0.25, you must edit `cluster.yml` to enable the `use-forwarded-headers` option for the ingress:
```yaml
ingress:
provider: nginx
options:
use-forwarded-headers: "true"
```
#### Required Headers
* `Host`
* `X-Forwarded-Proto`
* `X-Forwarded-Port`
* `X-Forwarded-For`
#### Recommended Timeouts
* Read Timeout: `1800 seconds`
* Write Timeout: `1800 seconds`
* Connect Timeout: `30 seconds`
#### Health Checks
Rancher will respond `200` to health checks on the `/healthz` endpoint.
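For example, you can verify this from the load balancer host with `curl`; the node IP and hostname below are placeholders, and the `Host` header mimics what your load balancer would normally forward:
```
curl -s -o /dev/null -w '%{http_code}\n' -H "Host: rancher.my.org" http://<IP_NODE_1>/healthz
```
A healthy node should return `200`.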
#### Example NGINX config
This NGINX configuration is tested on NGINX 1.14.
>**Note:** This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - HTTP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/).
* Replace `IP_NODE_1`, `IP_NODE_2`, and `IP_NODE_3` with the IP addresses of the nodes in your cluster.
* Replace both occurrences of `FQDN` with the DNS name for Rancher.
* Replace `/certs/fullchain.pem` and `/certs/privkey.pem` with the location of the server certificate and the server certificate key respectively.
```
worker_processes 4;
worker_rlimit_nofile 40000;
events {
worker_connections 8192;
}
http {
upstream rancher {
server IP_NODE_1:80;
server IP_NODE_2:80;
server IP_NODE_3:80;
}
map $http_upgrade $connection_upgrade {
default Upgrade;
'' close;
}
server {
listen 443 ssl http2;
server_name FQDN;
ssl_certificate /certs/fullchain.pem;
ssl_certificate_key /certs/privkey.pem;
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://rancher;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
# This allows the ability for the execute shell window to remain open for up to 15 minutes. Without this parameter, the default is 1 minute and will automatically close.
proxy_read_timeout 900s;
proxy_buffering off;
}
}
server {
listen 80;
server_name FQDN;
return 301 https://$server_name$request_uri;
}
}
```
@@ -1,33 +0,0 @@
---
title: Adding Kubernetes TLS Secrets
description: Read about how to populate the Kubernetes TLS secret for a Rancher installation
weight: 276
---
Kubernetes will create all the objects and services for Rancher, but it will not become available until we populate the `tls-rancher-ingress` secret in the `cattle-system` namespace with the certificate and key.
Combine the server certificate followed by any intermediate certificate(s) needed into a file named `tls.crt`. Copy your certificate key into a file named `tls.key`.
For example, [acme.sh](https://acme.sh) provides the server certificate and CA chain in a `fullchain.cer` file.
Rename this `fullchain.cer` to `tls.crt` and the certificate key file to `tls.key`.
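For example, assuming hypothetical file names for the server certificate, intermediate certificate, and key, the files could be prepared like this:
```
# Order matters: server certificate first, then any intermediate certificate(s)
cat server.crt intermediate.crt > tls.crt
cp server.key tls.key
```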
Use `kubectl` with the `tls` secret type to create the secrets.
```
kubectl -n cattle-system create secret tls tls-rancher-ingress \
--cert=tls.crt \
--key=tls.key
```
> **Note:** If you want to replace the certificate, you can delete the `tls-rancher-ingress` secret using `kubectl -n cattle-system delete secret tls-rancher-ingress` and add a new one using the command shown above. If you are using a private CA signed certificate, replacing the certificate is only possible if the new certificate is signed by the same CA as the certificate currently in use.
### Using a Private CA Signed Certificate
If you are using a private CA, Rancher requires a copy of the CA certificate which is used by the Rancher Agent to validate the connection to the server.
Copy the CA certificate into a file named `cacerts.pem` and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.
```
kubectl -n cattle-system create secret generic tls-ca \
--from-file=cacerts.pem=./cacerts.pem
```
@@ -1,133 +0,0 @@
---
title: Troubleshooting
weight: 276
---
### Where is everything
Most of the troubleshooting will be done on objects in these 3 namespaces.
* `cattle-system` - `rancher` deployment and pods.
* `ingress-nginx` - Ingress controller pods and services.
* `kube-system` - `tiller` and `cert-manager` pods.
### "default backend - 404"
A number of things can cause the ingress-controller not to forward traffic to your Rancher instance. Most of the time it's due to a bad SSL configuration.
Things to check:
* [Is Rancher Running](#is-rancher-running)
* [Cert CN is "Kubernetes Ingress Controller Fake Certificate"](#cert-cn-is-kubernetes-ingress-controller-fake-certificate)
### Is Rancher Running
Use `kubectl` to check the `cattle-system` namespace and see if the Rancher pods are in a Running state.
```
kubectl -n cattle-system get pods
NAME READY STATUS RESTARTS AGE
pod/rancher-784d94f59b-vgqzh 1/1 Running 0 10m
```
If the state is not `Running`, run a `describe` on the pod and check the Events.
```
kubectl -n cattle-system describe pod
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned rancher-784d94f59b-vgqzh to localhost
Normal SuccessfulMountVolume 11m kubelet, localhost MountVolume.SetUp succeeded for volume "rancher-token-dj4mt"
Normal Pulling 11m kubelet, localhost pulling image "rancher/rancher:v2.0.4"
Normal Pulled 11m kubelet, localhost Successfully pulled image "rancher/rancher:v2.0.4"
Normal Created 11m kubelet, localhost Created container
Normal Started 11m kubelet, localhost Started container
```
### Checking the rancher logs
Use `kubectl` to list the pods.
```
kubectl -n cattle-system get pods
NAME READY STATUS RESTARTS AGE
pod/rancher-784d94f59b-vgqzh 1/1 Running 0 10m
```
Use `kubectl` and the pod name to list the logs from the pod.
```
kubectl -n cattle-system logs -f rancher-784d94f59b-vgqzh
```
### Cert CN is "Kubernetes Ingress Controller Fake Certificate"
Use your browser to check the certificate details. If it says the Common Name is "Kubernetes Ingress Controller Fake Certificate", something may have gone wrong with reading or issuing your SSL cert.
> **Note:** If you are using Let's Encrypt to issue certs, it can sometimes take a few minutes to issue the cert.
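You can also inspect the certificate that is actually being served from the command line; the hostname below is a placeholder for your Rancher DNS name:
```
openssl s_client -connect rancher.my.org:443 -servername rancher.my.org </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
```
If the subject still shows "Kubernetes Ingress Controller Fake Certificate", the ingress has not picked up your certificate yet.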
#### cert-manager issued certs (Rancher Generated or LetsEncrypt)
`cert-manager` has 3 parts.
* `cert-manager` pod in the `kube-system` namespace.
* `Issuer` object in the `cattle-system` namespace.
* `Certificate` object in the `cattle-system` namespace.
Work backwards and do a `kubectl describe` on each object and check the events. You can track down what might be missing.
For example, here there is a problem with the Issuer:
```
kubectl -n cattle-system describe certificate
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning IssuerNotReady 18s (x23 over 19m) cert-manager Issuer rancher not ready
```
```
kubectl -n cattle-system describe issuer
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ErrInitIssuer 19m (x12 over 19m) cert-manager Error initializing issuer: secret "tls-rancher" not found
Warning ErrGetKeyPair 9m (x16 over 19m) cert-manager Error getting keypair for CA issuer: secret "tls-rancher" not found
```
#### Bring Your Own SSL Certs
Your certs get applied directly to the Ingress object in the `cattle-system` namespace.
Check the status of the Ingress object and see if it's ready.
```
kubectl -n cattle-system describe ingress
```
If it's ready and SSL is still not working, you may have a malformed cert or secret.
Check the nginx-ingress-controller logs. Because the nginx-ingress-controller has multiple containers in its pod, you will need to specify the name of the container.
```
kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller
...
W0705 23:04:58.240571 7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found
```
### no matches for kind "Issuer"
The [SSL configuration]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/#choose-your-ssl-configuration) option you have chosen requires [cert-manager]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/#optional-install-cert-manager) to be installed before installing Rancher or else the following error is shown:
```
Error: validation failed: unable to recognize "": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
```
Install [cert-manager]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/#optional-install-cert-manager) and try installing Rancher again.
@@ -1,132 +0,0 @@
---
title: "2. Install Kubernetes with RKE"
weight: 190
---
Use RKE to install Kubernetes with a high availability etcd configuration.
>**Note:** For systems without direct internet access see [Air Gap: Kubernetes install]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap-high-availability/) for install details.
### Create the `rancher-cluster.yml` File
Using the sample below, create the `rancher-cluster.yml` file. Replace the IP addresses in the `nodes` list with the IP addresses or DNS names of the 3 nodes you created.
> **Note:** If your node has public and internal addresses, it is recommended to set the `internal_address:` so Kubernetes will use it for intra-cluster communication. Some services like AWS EC2 require setting the `internal_address:` if you want to use self-referencing security groups or firewalls.
```yaml
nodes:
- address: 165.227.114.63
internal_address: 172.16.22.12
user: ubuntu
role: [controlplane,worker,etcd]
- address: 165.227.116.167
internal_address: 172.16.32.37
user: ubuntu
role: [controlplane,worker,etcd]
- address: 165.227.127.226
internal_address: 172.16.42.73
user: ubuntu
role: [controlplane,worker,etcd]
services:
etcd:
snapshot: true
creation: 6h
retention: 24h
```
#### Common RKE Nodes Options
| Option | Required | Description |
| --- | --- | --- |
| `address` | yes | The public DNS or IP address |
| `user` | yes | A user that can run docker commands |
| `role` | yes | List of Kubernetes roles assigned to the node |
| `internal_address` | no | The private DNS or IP address for internal cluster traffic |
| `ssh_key_path` | no | Path to SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`) |
#### Advanced Configurations
RKE has many configuration options for customizing the install to suit your specific environment.
Please see the [RKE Documentation]({{<baseurl>}}/rke/latest/en/config-options/) for the full list of options and capabilities.
For tuning your etcd cluster for larger Rancher installations, see the [etcd settings guide]({{<baseurl>}}/rancher/v2.x/en/installation/options/etcd/).
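For illustration only, such tuning is applied through `extra_args` under the same `services.etcd` block; the values below are placeholders, so consult the linked guide for settings appropriate to your cluster size:
```yaml
services:
  etcd:
    extra_args:
      heartbeat-interval: 500
      election-timeout: 5000
```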
### Run RKE
```
rke up --config ./rancher-cluster.yml
```
When finished, it should end with the line: `Finished building Kubernetes cluster successfully`.
### Testing Your Cluster
RKE should have created a file `kube_config_rancher-cluster.yml`. This file has the credentials for `kubectl` and `helm`.
> **Note:** If you have used a different file name from `rancher-cluster.yml`, then the kube config file will be named `kube_config_<FILE_NAME>.yml`.
You can copy this file to `$HOME/.kube/config` or, if you are working with multiple Kubernetes clusters, set the `KUBECONFIG` environment variable to the path of `kube_config_rancher-cluster.yml`.
```
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
```
Test your connectivity with `kubectl` and see if all your nodes are in `Ready` state.
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
165.227.114.63 Ready controlplane,etcd,worker 11m v1.13.5
165.227.116.167 Ready controlplane,etcd,worker 11m v1.13.5
165.227.127.226 Ready controlplane,etcd,worker 11m v1.13.5
```
### Check the Health of Your Cluster Pods
Check that all the required pods and containers are healthy before you continue.
* Pods are in `Running` or `Completed` state.
* The `READY` column shows all the containers are running (i.e. `3/3`) for pods with `STATUS` `Running`.
* Pods with `STATUS` `Completed` are run-once Jobs. For these pods `READY` should be `0/1`.
```
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-tnsn4 1/1 Running 0 30s
ingress-nginx nginx-ingress-controller-tw2ht 1/1 Running 0 30s
ingress-nginx nginx-ingress-controller-v874b 1/1 Running 0 30s
kube-system canal-jp4hz 3/3 Running 0 30s
kube-system canal-z2hg8 3/3 Running 0 30s
kube-system canal-z6kpw 3/3 Running 0 30s
kube-system kube-dns-7588d5b5f5-sf4vh 3/3 Running 0 30s
kube-system kube-dns-autoscaler-5db9bbb766-jz2k6 1/1 Running 0 30s
kube-system metrics-server-97bc649d5-4rl2q 1/1 Running 0 30s
kube-system rke-ingress-controller-deploy-job-bhzgm 0/1 Completed 0 30s
kube-system rke-kubedns-addon-deploy-job-gl7t4 0/1 Completed 0 30s
kube-system rke-metrics-addon-deploy-job-7ljkc 0/1 Completed 0 30s
kube-system rke-network-plugin-deploy-job-6pbgj 0/1 Completed 0 30s
```
### Save Your Files
> **Important**
> The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.
Save a copy of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_rancher-cluster.yml`: The [Kubeconfig file]({{<baseurl>}}/rke/latest/en/kubeconfig/) for the cluster, this file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file]({{<baseurl>}}/rke/latest/en/installation/#kubernetes-cluster-state), this file contains credentials for full access to the cluster.<br/><br/>_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._
> **Note:** The "rancher-cluster" parts of the latter two file names depend on how you name the RKE cluster configuration file.
### Issues or errors?
See the [Troubleshooting]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/troubleshooting/) page.
### [Next: Initialize Helm (Install tiller)]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/helm-init/)
@@ -1,52 +0,0 @@
---
title: Troubleshooting
weight: 276
---
### canal Pods show READY 2/3
The most common cause of this issue is that port 8472/UDP is not open between the nodes. Check your local firewall, network routing, or security groups.
Once the network issue is resolved, the `canal` pods should time out and restart to establish their connections.
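If your nodes use `firewalld`, for example, opening the port might look like this (run on every node, and adapt to whatever firewall you actually use):
```
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --reload
```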
### nginx-ingress-controller Pods show RESTARTS
The most common cause of this issue is the `canal` pods have failed to establish the overlay network. See [canal Pods show READY `2/3`](#canal-pods-show-ready-2-3) for troubleshooting.
### Failed to set up SSH tunneling for host [xxx.xxx.xxx.xxx]: Can't retrieve Docker Info
#### Failed to dial to /var/run/docker.sock: ssh: rejected: administratively prohibited (open failed)
* The user specified to connect with does not have permission to access the Docker socket. This can be checked by logging into the host and running the command `docker ps`:
```
$ ssh user@server
user@server$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) for how to set this up properly.
* When using RedHat/CentOS as the operating system, you cannot use the user `root` to connect to the nodes because of [Bugzilla #1527565](https://bugzilla.redhat.com/show_bug.cgi?id=1527565). You will need to add a separate user and configure it to access the Docker socket. See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) for how to set this up properly.
* SSH server version is not version 6.7 or higher. This is needed for socket forwarding to work, which is used to connect to the Docker socket over SSH. This can be checked using `sshd -V` on the host you are connecting to, or using netcat:
```
$ nc xxx.xxx.xxx.xxx 22
SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.10
```
#### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: Error configuring SSH: ssh: no key found
* The key file specified as `ssh_key_path` cannot be accessed. Make sure that you specified the private key file (not the public key, `.pub`), and that the user that is running the `rke` command can access the private key file.
#### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
* The key file specified as `ssh_key_path` is not correct for accessing the node. Double-check if you specified the correct `ssh_key_path` for the node and if you specified the correct user to connect with.
#### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: Error configuring SSH: ssh: cannot decode encrypted private keys
* If you want to use encrypted private keys, you should use `ssh-agent` to load your keys with your passphrase. If the `SSH_AUTH_SOCK` environment variable is found in the environment where the `rke` command is run, it will be used automatically to connect to the node.
#### Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
* The node is not reachable on the configured `address` and `port`.
@@ -1,16 +0,0 @@
---
title: RKE Add-On Install
weight: 276
---
> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{<baseurl>}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
* [Kubernetes installation with External Load Balancer (TCP/Layer 4)]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-4-lb)
* [Kubernetes installation with External Load Balancer (HTTPS/Layer 7)]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-7-lb)
* [HTTP Proxy Configuration for a Kubernetes installation]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/rke-add-on/proxy/)
* [Troubleshooting RKE Add-on Installs]({{<baseurl>}}/rancher/v2.x/en/installation/options/helm2/rke-add-on/troubleshooting/)
