diff --git a/assets/img/rancher/backup_restore/backup/backup.png b/assets/img/rancher/backup_restore/backup/backup.png new file mode 100644 index 00000000000..681b6f1f3f3 Binary files /dev/null and b/assets/img/rancher/backup_restore/backup/backup.png differ diff --git a/assets/img/rancher/backup_restore/backup/encryption.png b/assets/img/rancher/backup_restore/backup/encryption.png new file mode 100644 index 00000000000..f11f5a179bf Binary files /dev/null and b/assets/img/rancher/backup_restore/backup/encryption.png differ diff --git a/assets/img/rancher/backup_restore/backup/schedule.png b/assets/img/rancher/backup_restore/backup/schedule.png new file mode 100644 index 00000000000..9f1f340116c Binary files /dev/null and b/assets/img/rancher/backup_restore/backup/schedule.png differ diff --git a/assets/img/rancher/backup_restore/backup/storageLocation.png b/assets/img/rancher/backup_restore/backup/storageLocation.png new file mode 100644 index 00000000000..dbb7e809c8c Binary files /dev/null and b/assets/img/rancher/backup_restore/backup/storageLocation.png differ diff --git a/assets/img/rancher/backup_restore/restore/default.png b/assets/img/rancher/backup_restore/restore/default.png new file mode 100644 index 00000000000..eabf5015ae3 Binary files /dev/null and b/assets/img/rancher/backup_restore/restore/default.png differ diff --git a/assets/img/rancher/backup_restore/restore/encryption.png b/assets/img/rancher/backup_restore/restore/encryption.png new file mode 100644 index 00000000000..4949e8d1f37 Binary files /dev/null and b/assets/img/rancher/backup_restore/restore/encryption.png differ diff --git a/assets/img/rancher/backup_restore/restore/existing.png b/assets/img/rancher/backup_restore/restore/existing.png new file mode 100644 index 00000000000..e9bd6db38d3 Binary files /dev/null and b/assets/img/rancher/backup_restore/restore/existing.png differ diff --git a/assets/img/rancher/backup_restore/restore/restore.png b/assets/img/rancher/backup_restore/restore/restore.png new file mode 100644 index 00000000000..dc6541b7810 Binary files /dev/null and b/assets/img/rancher/backup_restore/restore/restore.png differ diff --git a/assets/img/rancher/backup_restore/restore/s3store.png b/assets/img/rancher/backup_restore/restore/s3store.png new file mode 100644 index 00000000000..493364deaed Binary files /dev/null and b/assets/img/rancher/backup_restore/restore/s3store.png differ diff --git a/content/k3s/latest/en/installation/network-options/_index.md b/content/k3s/latest/en/installation/network-options/_index.md index 97873e4151b..eaac4ea818e 100644 --- a/content/k3s/latest/en/installation/network-options/_index.md +++ b/content/k3s/latest/en/installation/network-options/_index.md @@ -40,7 +40,7 @@ Apply the Canal YAML. Ensure the settings were applied by running the following command on the host: ``` -cat /etc/cni/net.d/10-calico.conflist +cat /etc/cni/net.d/10-canal.conflist ``` You should see that IP forwarding is set to true. @@ -61,7 +61,7 @@ Apply the Calico YAML. Ensure the settings were applied by running the following command on the host: ``` -cat /etc/cni/net.d/10-canal.conflist +cat /etc/cni/net.d/10-calico.conflist ``` You should see that IP forwarding is set to true. 
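For reference, the relevant portion of the conflist should look roughly like the following sketch (abridged and hypothetical — exact contents vary by Canal/Calico version; the setting to look for is `allow_ip_forwarding` under `container_settings`):

```json
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "container_settings": {
        "allow_ip_forwarding": true
      }
    }
  ]
}
```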
diff --git a/content/k3s/latest/en/installation/tutorials/_index.md b/content/k3s/latest/en/installation/tutorials/_index.md new file mode 100644 index 00000000000..40de388bc75 --- /dev/null +++ b/content/k3s/latest/en/installation/tutorials/_index.md @@ -0,0 +1,5 @@ +--- +title: Tutorials +weight: 10000 +--- + diff --git a/content/k3s/latest/en/installation/tutorials/ha-with-external-db/_index.md b/content/k3s/latest/en/installation/tutorials/ha-with-external-db/_index.md new file mode 100644 index 00000000000..db2ce0b3c70 --- /dev/null +++ b/content/k3s/latest/en/installation/tutorials/ha-with-external-db/_index.md @@ -0,0 +1,118 @@ +--- +title: Setting up a High-availability K3s Kubernetes Cluster for Rancher +shortTitle: Set up K3s for Rancher +weight: 2 +--- + +> This page is under construction. + +This section describes how to install a Kubernetes cluster according to the [best practices for the Rancher server environment.]({{}}/rancher/v2.x/en/overview/architecture-recommendations/#environment-for-kubernetes-installations) + +For systems without direct internet access, refer to the air gap installation instructions. + +> **Single-node Installation Tip:** +> In a single-node Kubernetes cluster, the Rancher server does not have high availability, which is important for running Rancher in production. However, installing Rancher on a single-node cluster can be useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path. +> +> To set up a single-node K3s cluster, run the K3s server installation command on just one node instead of two nodes. +> +> In a single-node setup, Rancher can be installed with Helm on the Kubernetes cluster in the same way that it would be installed on any other cluster. + +# Prerequisites + +These instructions assume you have set up two nodes, a load balancer, a DNS record, and an external MySQL database as described in [this section.](../infra-for-ha-with-external-db) + +# Installing Kubernetes + +### 1. Install Kubernetes and Set up the K3s Server + +When running the command to start the K3s Kubernetes API server, you will pass in an option to use the external datastore that you set up earlier. + +1. Connect to one of the Linux nodes that you have prepared to run the Rancher server. +1. On the Linux node, run this command to start the K3s server and connect it to the external datastore: + ``` + curl -sfL https://get.k3s.io | sh -s - server \ + --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name" + ``` + Note: The datastore endpoint can also be passed in using the environment variable `$K3S_DATASTORE_ENDPOINT`. + +1. Repeat the same command on your second K3s server node. + +### 2. Confirm that K3s is Running + +To confirm that K3s has been set up successfully, run the following command on either of the K3s server nodes: +``` +sudo k3s kubectl get nodes +``` + +You should then see two nodes with the master role: +``` +ubuntu@ip-172-31-60-194:~$ sudo k3s kubectl get nodes +NAME STATUS ROLES AGE VERSION +ip-172-31-60-194 Ready master 44m v1.17.2+k3s1 +ip-172-31-63-88 Ready master 6m8s v1.17.2+k3s1 +``` + +Then test the health of the cluster pods: +``` +sudo k3s kubectl get pods --all-namespaces +``` + +**Result:** You have successfully set up a K3s Kubernetes cluster. + +### 3. Save and Start Using the kubeconfig File + +When you installed K3s on each Rancher server node, a `kubeconfig` file was created on the node at `/etc/rancher/k3s/k3s.yaml`. 
This file contains credentials for full access to the cluster, and you should save this file in a secure location. + +To use this `kubeconfig` file, + +1. Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool. +2. Copy the file at `/etc/rancher/k3s/k3s.yaml` and save it as `~/.kube/config` on your local machine. +3. In the kubeconfig file, the `server` directive is defined as localhost. Edit it so that it points to the DNS name of your load balancer on port 6443. (The Kubernetes API server will be reached at port 6443, while the Rancher server will be reached at ports 80 and 443.) Here is an example `k3s.yaml`: + +```yml +apiVersion: v1 +clusters: +- cluster: + certificate-authority-data: [CERTIFICATE-DATA] + server: https://[LOAD-BALANCER-DNS]:6443 # Edit this line + name: default +contexts: +- context: + cluster: default + user: default + name: default +current-context: default +kind: Config +preferences: {} +users: +- name: default + user: + password: [PASSWORD] + username: admin +``` + +**Result:** You can now use `kubectl` to manage your K3s cluster. If you have more than one kubeconfig file, you can specify which one you want to use by passing in the path to the file when using `kubectl`: + +``` +kubectl --kubeconfig ~/.kube/config/k3s.yaml get pods --all-namespaces +``` + +For more information about the `kubeconfig` file, refer to the [K3s documentation]({{}}/k3s/latest/en/cluster-access/) or the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) about organizing cluster access using `kubeconfig` files. + +### 4. Check the Health of Your Cluster Pods + +Now that you have set up the `kubeconfig` file, you can use `kubectl` to access the cluster from your local machine. + +Check that all the required pods and containers are healthy before you continue: + +``` +ubuntu@ip-172-31-60-194:~$ sudo kubectl get pods --all-namespaces +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system metrics-server-6d684c7b5-bw59k 1/1 Running 0 8d +kube-system local-path-provisioner-58fb86bdfd-fmkvd 1/1 Running 0 8d +kube-system coredns-d798c9dd-ljjnf 1/1 Running 0 8d +``` + +**Result:** You have confirmed that you can access the cluster with `kubectl` and the K3s cluster is running successfully. Now the Rancher management server can be installed on the cluster. + +### [Next: Install Rancher]({{}}/rancher/v2.x/en/installation/k8s-install/helm-rancher/) diff --git a/content/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/_index.md b/content/k3s/latest/en/installation/tutorials/infra-for-ha-with-external-db/_index.md similarity index 52% rename from content/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/_index.md rename to content/k3s/latest/en/installation/tutorials/infra-for-ha-with-external-db/_index.md index 8add940e8ce..c27039ad429 100644 --- a/content/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/_index.md +++ b/content/k3s/latest/en/installation/tutorials/infra-for-ha-with-external-db/_index.md @@ -1,10 +1,10 @@ --- -title: '1. Set up Infrastructure' -weight: 185 -aliases: - - /rancher/v2.x/en/installation/ha/create-nodes-lb +title: 'Set up Infrastructure for a High Availability K3s Kubernetes Cluster' +weight: 1 --- + +> This page is under construction. + In this section, you will provision the underlying infrastructure for your Rancher management server. 
The recommended infrastructure for the Rancher-only Kubernetes cluster differs depending on whether Rancher will be installed on a K3s Kubernetes cluster, an RKE Kubernetes cluster, or a single Docker container. @@ -13,8 +13,6 @@ For more information about each installation option, refer to [this page.]({{ **Note:** These nodes must be in the same region. You may place these servers in separate availability zones (data centers). -{{% tabs %}} -{{% tab "K3s" %}} To install the Rancher management server on a high-availability K3s cluster, we recommend setting up the following infrastructure: - **Two Linux nodes,** typically virtual machines, in the infrastructure provider of your choice. @@ -70,61 +68,4 @@ You will need to specify this hostname in a later step when you install Rancher, For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer) - - -{{% /tab %}} -{{% tab "RKE" %}} -To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure: - -- **Three Linux nodes,** typically virtual machines, in an infrastructure provider such as Amazon's EC2, Google Compute Engine, or vSphere. -- **A load balancer** to direct front-end traffic to the three nodes. -- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it. - -These nodes must be in the same region/data center. You may place these servers in separate availability zones. - -### Why three nodes? - -In an RKE cluster, Rancher server data is stored on etcd. This etcd database runs on all three nodes. - -The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader because they have the majority of the total number of etcd nodes. - -### 1. Set up Linux Nodes - -Make sure that your nodes fulfill the general installation requirements for [OS, container runtime, hardware, and networking.]({{}}/rancher/v2.x/en/installation/requirements/) - -For an example of one way to set up Linux nodes, refer to this [tutorial]({{}}/rancher/v2.x/en/installation/options/ec2-node/) for setting up nodes as instances in Amazon EC2. - -### 2. Set up the Load Balancer - -You will also need to set up a load balancer to direct traffic to the Rancher replica on both nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server. - -When Kubernetes gets set up in a later step, the RKE tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames. - -When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the NGINX Ingress controller to listen for traffic destined for the Rancher hostname. The NGINX Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster. 
- -For your implementation, consider if you want or need to use a Layer-4 or Layer-7 load balancer: - -- **A layer-4 load balancer** is the simpler of the two choices, in which you are forwarding TCP traffic to your nodes. We recommend configuring your load balancer as a Layer 4 balancer, forwarding traffic to ports TCP/80 and TCP/443 to the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic to HTTPS and terminate SSL/TLS on port TCP/443. The Ingress controller will forward traffic to port TCP/80 to the Ingress pod in the Rancher deployment. -- **A layer-7 load balancer** is a bit more complicated but can offer features that you may want. For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also offers the capability for your load balancer to make decisions based on HTTP attributes such as cookies, etc. that a layer-4 load balancer is not able to concern itself with. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. For more information, refer to the [Rancher Helm chart options.]({{}}/rancher/v2.x/en/installation/options/chart-options/#external-tls-termination) - -For an example showing how to set up an NGINX load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nginx/) - -For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nlb/) - -> **Important:** -> Do not use this load balancer (i.e, the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications. - -### 3. Set up the DNS Record - -Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer. - -Depending on your environment, this may be an A record pointing to the LB IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on. - -You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one. 
- -For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer) - -{{% /tab %}} -{{% /tabs %}} - -### [Next: Set up a Kubernetes Cluster]({{}}/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/) \ No newline at end of file +### [Next: Set up a Kubernetes Cluster]({{}}/rancher/v2.x/en/installation/resources/k8s-tutorials/ka-k3s/) \ No newline at end of file diff --git a/content/rancher/v2.x/en/admin-settings/_index.md b/content/rancher/v2.x/en/admin-settings/_index.md index 2242b4d3328..0cfab9191f7 100644 --- a/content/rancher/v2.x/en/admin-settings/_index.md +++ b/content/rancher/v2.x/en/admin-settings/_index.md @@ -1,6 +1,6 @@ --- title: Authentication, Permissions and Global Configuration -weight: 1100 +weight: 6 aliases: - /rancher/v2.x/en/concepts/global-configuration/ - /rancher/v2.x/en/tasks/global-configuration/ diff --git a/content/rancher/v2.x/en/admin-settings/rbac/global-permissions/_index.md b/content/rancher/v2.x/en/admin-settings/rbac/global-permissions/_index.md index 036abcc3181..70e9c6d8a63 100644 --- a/content/rancher/v2.x/en/admin-settings/rbac/global-permissions/_index.md +++ b/content/rancher/v2.x/en/admin-settings/rbac/global-permissions/_index.md @@ -17,6 +17,7 @@ You cannot update or delete the built-in Global Permissions. This section covers the following topics: +- [Restricted Admin](#restricted-admin) - [Global permission assignment](#global-permission-assignment) - [Global permissions for new local users](#global-permissions-for-new-local-users) - [Global permissions for users with external authentication](#global-permissions-for-users-with-external-authentication) @@ -27,6 +28,49 @@ This section covers the following topics: - [Configuring global permissions for groups](#configuring-global-permissions-for-groups) - [Refreshing group memberships](#refreshing-group-memberships) +# Restricted Admin + +_Available as of Rancher v2.5_ + +A new `restricted-admin` role was created in Rancher v2.5 in order to prevent privilege escalation from the local Rancher server Kubernetes cluster. This role has full administrator access to all downstream clusters managed by Rancher, but it does not have permission to alter the local Kubernetes cluster. + +The `restricted-admin` can create other `restricted-admin` users with an equal level of access. + +A new setting was added to Rancher to set the initial bootstrapped administrator to have the `restricted-admin` role. This applies to the first user created when the Rancher server is started for the first time. If the environment variable is set, no global administrator is created, and it is impossible to create a global administrator through Rancher. + +To bootstrap Rancher with the `restricted-admin` as the initial user, the Rancher server should be started with the following environment variable: + +``` +CATTLE_RESTRICTED_DEFAULT_ADMIN=true +``` +### List of `restricted-admin` Permissions + +The `restricted-admin` permissions are as follows: + +- Has full admin access to all downstream clusters managed by Rancher. +- Has very limited access to the local Kubernetes cluster. Can access Rancher custom resource definitions, but has no access to any Kubernetes native types. +- Can add other users and assign them to clusters outside of the local cluster. +- Can create other restricted admins. 
+- Cannot grant any permissions in the local cluster they don't currently have. (This is how Kubernetes normally operates.) + +### Upgrading from Rancher with a Hidden Local Cluster + +Prior to Rancher v2.5, it was possible to run the Rancher server using this flag to hide the local cluster: + +``` +--add-local=false +``` + +You will need to drop this flag when upgrading to Rancher v2.5. Otherwise, Rancher will not start. The `restricted-admin` role can be used to continue restricting access to the local cluster. + +### Changing Global Administrators to Restricted Admins + +If a Rancher installation already has global administrators, they should all be changed over to the new `restricted-admin` role. + +This can be done through **Security > Users** by changing any user with the Administrator role over to Restricted Administrator. + +Signed-in users can change their own role to `restricted-admin` if they wish, but they should do so only as the last step, because afterward they won't have the permissions to change other administrators. + # Global Permission Assignment Global permissions for local users are assigned differently than users who log in to Rancher using external authentication. diff --git a/content/rancher/v2.x/en/api/_index.md b/content/rancher/v2.x/en/api/_index.md index b2f9e84816d..66d9a267b71 100644 --- a/content/rancher/v2.x/en/api/_index.md +++ b/content/rancher/v2.x/en/api/_index.md @@ -1,6 +1,6 @@ --- title: API -weight: 7500 +weight: 24 --- ## How to use the API diff --git a/content/rancher/v2.x/en/backups/_index.md b/content/rancher/v2.x/en/backups/_index.md index d9b66a43114..3d90e203bcf 100644 --- a/content/rancher/v2.x/en/backups/_index.md +++ b/content/rancher/v2.x/en/backups/_index.md @@ -1,19 +1,122 @@ --- title: Backups and Disaster Recovery -weight: 1000 +weight: 5 --- -This section is devoted to protecting your data in a disaster scenario. +In this section, you'll learn how to create backups of Rancher, how to restore Rancher from backup, and how to migrate Rancher to a new Kubernetes cluster. -To protect yourself from a disaster scenario, you should create backups on a regular basis. +As of Rancher v2.5, the `rancher-backup` operator is used to back up and restore Rancher. The `rancher-backup` Helm chart is [here.](https://github.com/rancher/charts/tree/main/charts/rancher-backup) - - Rancher server backups: - - [Rancher installed on a K3s Kubernetes cluster](./backups/k3s-backups) - - [Rancher installed on an RKE Kubernetes cluster](./backups/ha-backups) - - [Rancher installed with Docker](./backups/single-node-backups/) - - [Backing up Rancher Launched Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/) +The backup-restore operator needs to be installed in the local cluster, and only backs up the Rancher app. The backup and restore operations are performed only in the local Kubernetes cluster. -In a disaster scenario, you can restore your `etcd` database by restoring a backup. +The Rancher version must be v2.5.0 or higher to use this approach of backing up and restoring Rancher. 
- - [Rancher Server Restorations]({{}}/rancher/v2.x/en/backups/restorations) - - [Restoring Rancher Launched Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-admin/restoring-etcd/) +- [Changes in Rancher v2.5](#changes-in-rancher-v2-5) + - [Backup and Restore for Rancher v2.5 installed with Docker](#backup-and-restore-for-rancher-v2-5-installed-with-docker) + - [Backup and Restore for Rancher installed on a Kubernetes Cluster Prior to v2.5](#backup-and-restore-for-rancher-installed-on-a-kubernetes-cluster-prior-to-v2-5) +- [How Backups and Restores Work](#how-backups-and-restores-work) +- [Installing the rancher-backup Operator](#installing-the-rancher-backup-operator) + - [Installing rancher-backup with the Rancher UI](#installing-rancher-backup-with-the-rancher-ui) + - [Installing rancher-backup with the Helm CLI](#installing-rancher-backup-with-the-helm-cli) +- [Backing up Rancher](#backing-up-rancher) +- [Restoring Rancher](#restoring-rancher) +- [Migrating Rancher to a New Cluster](#migrating-rancher-to-a-new-cluster) +- [Default Storage Location Configuration](#default-storage-location-configuration) + - [Example values.yaml for the rancher-backup Helm Chart](#example-values-yaml-for-the-rancher-backup-helm-chart) + +# Changes in Rancher v2.5 + +The new `rancher-backup` operator allows Rancher to be backed up and restored on any Kubernetes cluster. This application is a Helm chart, and it can be deployed through the Rancher **Apps & Marketplace** page, or by using the Helm CLI. + +Previously, the way that cluster data was backed up depended on the type of Kubernetes cluster that was used. + +In Rancher v2.4, installing Rancher was supported on only two types of Kubernetes clusters: an RKE cluster, or a K3s cluster with an external database. If Rancher was installed on an RKE cluster, [RKE would be used]({{}}/rancher/v2.x/en/backups/legacy/backup/k8s-backups/ha-backups/) to take a snapshot of the etcd database and restore the cluster. If Rancher was installed on a K3s cluster with an external database, the database would need to be backed up and restored using the upstream documentation for the database. + +In Rancher v2.5, it is now supported to install Rancher on hosted Kubernetes clusters, such as Amazon EKS clusters, which do not expose etcd to a degree that would allow snapshots to be created by an external tool. etcd doesn't need to be exposed for `rancher-backup` to work, because the operator gathers resources by making calls to `kube-apiserver`. + +### Backup and Restore for Rancher v2.5 installed with Docker + +For Rancher installed with Docker, refer to the same steps used prior to v2.5 for [backups](./docker-installs/docker-backups) and [restores.](./docker-installs/docker-restores) + +### Backup and Restore for Rancher installed on a Kubernetes Cluster Prior to v2.5 + +For Rancher prior to v2.5, the way that Rancher is backed up and restored differs based on the way that Rancher was installed. Our legacy backup and restore documentation is here: + +- For Rancher installed on an RKE Kubernetes cluster, refer to the legacy [backup]({{}}/rancher/v2.x/en/backups/legacy/backup/k8s-backups/ha-backups/) and [restore]({{}}/rancher/v2.x/en/backups/legacy/restore/k8s-restore/rke-restore/) documentation. +- For Rancher installed on a K3s Kubernetes cluster, refer to the legacy [backup]({{}}/rancher/v2.x/en/backups/legacy/backup/k8s-backups/k3s-backups/) and [restore]({{}}/rancher/v2.x/en/backups/legacy/restore/k8s-restore/k3s-restore/) documentation. 
+ +# How Backups and Restores Work + +The `rancher-backup` operator introduces three custom resources: Backups, Restores, and ResourceSets. The following cluster-scoped custom resource definitions are added to the cluster: + +- `backups.resources.cattle.io` +- `resourcesets.resources.cattle.io` +- `restores.resources.cattle.io` + +The ResourceSet defines which Kubernetes resources need to be backed up. The ResourceSet is not available to be configured in the Rancher UI because the values required to back up Rancher are predefined. This ResourceSet should not be modified. + +When a Backup custom resource is created, the `rancher-backup` operator calls the `kube-apiserver` to get the resources in the ResourceSet (specifically, the predefined `rancher-resource-set`) that the Backup custom resource refers to. + +The operator then creates the backup file in the .tar.gz format and stores it in the location configured in the Backup resource. + +When a Restore custom resource is created, the operator accesses the backup .tar.gz file specified by the Restore, and restores the application from that file. + +The Backup and Restore custom resources can be created in the Rancher UI, or by using `kubectl apply`. + +# Installing the rancher-backup Operator + +The `rancher-backup` operator can be installed from the Rancher UI, or with the Helm CLI. In both cases, the `rancher-backup` Helm chart is installed on the Kubernetes cluster running the Rancher server. It is a cluster-admin-only feature, available only for the local cluster. + +### Installing rancher-backup with the Rancher UI + +1. In the Rancher UI, go to the **Cluster Explorer.** +1. Click **Apps.** +1. Click the `rancher-backup` operator. +1. Optional: Configure the default storage location. For help, refer to the [configuration section.](./configuration/storage-config) + +**Result:** The `rancher-backup` operator is installed. + +From the **Cluster Explorer,** you can see the `rancher-backup` operator listed under **Deployments.** + +To configure the backup app in Rancher, click **Cluster Explorer** in the upper left corner and click **Rancher Backups.** + +### Installing rancher-backup with the Helm CLI + +Install the backup app as a Helm chart: + +``` +helm repo add rancher-charts https://charts.rancher.io +helm repo update +helm install rancher-backup-crd rancher-charts/rancher-backup-crd -n cattle-resources-system --create-namespace +helm install rancher-backup rancher-charts/rancher-backup -n cattle-resources-system +``` + +### RBAC + +Only Rancher admins and the local cluster’s cluster-owner can: + +* Install the chart +* See the navigation links for Backup and Restore CRDs +* Perform a backup or restore by creating a Backup or Restore CR, respectively, and list the backups and restores performed so far + +# Backing up Rancher + +A backup is performed by creating a Backup custom resource. For a tutorial, refer to [this page.](./back-up-rancher) + +# Restoring Rancher + +A restore is performed by creating a Restore custom resource. For a tutorial, refer to [this page.](./restoring-rancher) + +# Migrating Rancher to a New Cluster + +A migration is performed by following [these steps.](./migrating-rancher) + +# Default Storage Location Configuration + +Configure a storage location where all backups are saved by default. You will have the option to override this with each backup, but will be limited to using an S3-compatible or Minio object store. 
+ +For information on configuring these options, refer to [this page.](./configuration/storage-config) + +### Example values.yaml for the rancher-backup Helm Chart + +The example [values.yaml file](./configuration/storage-config/#example-values-yaml-for-the-rancher-backup-helm-chart) can be used to configure the `rancher-backup` operator when the Helm CLI is used to install it. diff --git a/content/rancher/v2.x/en/backups/back-up-rancher/_index.md b/content/rancher/v2.x/en/backups/back-up-rancher/_index.md new file mode 100644 index 00000000000..0c9e6e5a119 --- /dev/null +++ b/content/rancher/v2.x/en/backups/back-up-rancher/_index.md @@ -0,0 +1,61 @@ +--- +title: Backing up Rancher +weight: 1 +--- + +In this section, you'll learn how to back up Rancher running on any Kubernetes cluster. To back up Rancher installed with Docker, refer to the instructions for [single-node backups.](../legacy/backup/single-node-backups/) + +### Prerequisites + +The Rancher version must be v2.5.0 or higher. + +### 1. Install the `rancher-backup` operator + +The backup storage location is an operator-level setting, so it needs to be configured when `rancher-backup` is installed or upgraded. + +Backups are created as .tar.gz files. These files can be pushed to S3 or Minio, or they can be stored in a persistent volume. + +1. In the Rancher UI, go to the **Cluster Explorer.** +1. Click **Apps.** +1. Click `rancher-backup`. +1. Configure the default storage location. For help, refer to the [storage configuration section.](../configuration/storage-config) + +### 2. Perform a Backup + +To perform a backup, a custom resource of type Backup must be created. + +1. In the **Cluster Explorer,** go to the dropdown menu in the upper left corner and click **Rancher Backups.** +1. Click **Backup.** +1. Create the Backup with the form or with the YAML editor. +1. To configure the Backup details using the form, click **Create** and refer to the [configuration reference](../configuration/backup-config) and to the [examples.](../examples/#backup) +1. To use the YAML editor, click **Create > Create from YAML.** Enter the Backup YAML. This example Backup custom resource would create encrypted recurring backups in S3: + + ```yaml + apiVersion: resources.cattle.io/v1 + kind: Backup + metadata: + name: s3-recurring-backup + spec: + storageLocation: + s3: + credentialSecretName: s3-creds + credentialSecretNamespace: default + bucketName: rancher-backups + folder: rancher + region: us-west-2 + endpoint: s3.us-west-2.amazonaws.com + resourceSetName: rancher-resource-set + encryptionConfigSecretName: encryptionconfig + schedule: "@every 1h" + retentionCount: 10 + ``` + + > **Note:** When creating the Backup resource using the YAML editor, the `resourceSetName` must be set to `rancher-resource-set`. + + For help configuring the Backup, refer to the [configuration reference](../configuration/backup-config) and to the [examples.](../examples/#backup) + + > **Important:** The `rancher-backup` operator doesn't save the EncryptionConfiguration file. The contents of the EncryptionConfiguration file must be saved when an encrypted backup is created, and the same file must be used when restoring from this backup. +1. Click **Create.** + +**Result:** The backup file is created in the storage location configured in the Backup custom resource. The name of this file is used when performing a restore. 
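If you prefer the command line, the same Backup can be created and watched with `kubectl`. A sketch, assuming the example YAML above was saved as `s3-recurring-backup.yaml` (the Backup resource is cluster-scoped, so no namespace is needed):

```
kubectl apply -f s3-recurring-backup.yaml
kubectl get backups.resources.cattle.io
```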
+ diff --git a/content/rancher/v2.x/en/backups/configuration/_index.md b/content/rancher/v2.x/en/backups/configuration/_index.md new file mode 100644 index 00000000000..d83cc04b29f --- /dev/null +++ b/content/rancher/v2.x/en/backups/configuration/_index.md @@ -0,0 +1,10 @@ +--- +title: Rancher Backup Configuration Reference +shortTitle: Configuration +weight: 4 +--- + +- [Backup configuration](./backup-config) +- [Restore configuration](./restore-config) +- [Storage location configuration](./storage-config) +- [Example Backup and Restore Custom Resources](../examples) \ No newline at end of file diff --git a/content/rancher/v2.x/en/backups/configuration/backup-config/_index.md b/content/rancher/v2.x/en/backups/configuration/backup-config/_index.md new file mode 100644 index 00000000000..adbb9e065f1 --- /dev/null +++ b/content/rancher/v2.x/en/backups/configuration/backup-config/_index.md @@ -0,0 +1,184 @@ +--- +title: Backup Configuration +shortTitle: Backup +weight: 1 +--- + +The Backup Create page lets you configure a schedule, enable encryption and specify the storage location for your backups. + +{{< img "/img/rancher/backup_restore/backup/backup.png" "">}} + +- [Schedule](#schedule) +- [Encryption](#encryption) +- [Storage Location](#storage-location) + - [S3](#s3) + - [Example S3 Storage Configuration](#example-s3-storage-configuration) + - [Example MinIO Configuration](#example-minio-configuration) + - [Example credentialSecret](#example-credentialsecret) + - [IAM Permissions for EC2 Nodes to Access S3](#iam-permissions-for-ec2-nodes-to-access-s3) +- [Examples](#examples) + + +# Schedule + +Select the first option to perform a one-time backup, or select the second option to schedule recurring backups. Selecting **Recurring Backups** lets you configure the following two fields: + +- **Schedule**: This field accepts + - Standard [cron expressions](https://en.wikipedia.org/wiki/Cron), such as `"0 * * * *"` + - Descriptors, such as `"@midnight"` or `"@every 1h30m"` +- **Retention Count**: This value specifies how many backup files must be retained. If files exceed the given retentionCount, the oldest files will be deleted. The default value is 10. + +{{< img "/img/rancher/backup_restore/backup/schedule.png" "">}} + +| YAML Directive Name | Description | +| ---------------- | ---------------- | +| `schedule` | Provide the cron string for scheduling recurring backups. | +| `retentionCount` | Provide the number of backup files to be retained. | +| +# Encryption + +The `rancher-backup` operator gathers resources by making calls to the kube-apiserver. Objects returned by the apiserver are decrypted, so even if [encryption at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) is enabled, the objects gathered in the backup will be in plaintext. + +To avoid storing them in plaintext, you can use the same encryptionConfig file that was used for at-rest encryption to encrypt certain resources in your backup. + +> **Important:** You must save the encryptionConfig file, because it won’t be saved by the rancher-backup operator. +The same encryption config file needs to be used when performing a restore. + +The operator consumes this encryptionConfig as a Kubernetes Secret, and the Secret must be in the operator’s namespace. Rancher installs the `rancher-backup` operator in the `cattle-resources-system` namespace, so create this encryptionConfig secret in that namespace. 
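If you don't already have an `EncryptionConfiguration` file, a minimal sketch looks like the following (this assumes the `aescbc` provider; the key shown is a placeholder — generate your own with `head -c 32 /dev/urandom | base64`):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key> # placeholder
      - identity: {}
```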
Alternatively, for the `EncryptionConfiguration`, you can use the [sample file provided in the Kubernetes documentation.](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration) + +To create the Secret, the encryption configuration file must be named `encryption-provider-config.yaml`, and the `--from-file` flag must be used to create this secret. + +Save the `EncryptionConfiguration` in a file called `encryption-provider-config.yaml` and run this command: + +``` +kubectl create secret generic encryptionconfig \ + --from-file=./encryption-provider-config.yaml \ + -n cattle-resources-system +``` + +This will ensure that the secret contains a key named `encryption-provider-config.yaml`, and the operator will use this key to get the encryption configuration. + +The `Encryption Config Secret` dropdown will filter out and list only those Secrets that have this exact key. + +{{< img "/img/rancher/backup_restore/backup/encryption.png" "">}} + +In the example command above, the name `encryptionconfig` can be changed to anything. + + +| YAML Directive Name | Description | +| ---------------- | ---------------- | +| `encryptionConfigSecretName` | Provide the name of the Secret from the `cattle-resources-system` namespace that contains the encryption config file. | + +# Storage Location + +{{< img "/img/rancher/backup_restore/backup/storageLocation.png" "">}} + +If a storage location is specified in the Backup custom resource, the backup file is stored in that particular S3 bucket. If not specified, the backup is saved to the default storage location, which can be an operator-level S3 store or an operator-level PVC store. The default storage location is configured during the deployment of the `rancher-backup` operator. + +Selecting the first option stores this backup in the storage location configured while installing the rancher-backup chart. The second option lets you configure a different S3-compatible storage provider for storing the backup. + +### S3 + +The S3 storage location contains the following configuration fields: + +1. **Credential Secret** (optional): If you need to use an AWS access key and secret key to access the S3 bucket, create a secret containing your credentials under the keys `accessKey` and `secretKey`. It can be in any namespace. An example secret is [here.](#example-credentialsecret) This directive is unnecessary if the nodes running your operator are in EC2 and set up with IAM permissions that allow them to access S3, as described in [this section.](#iam-permissions-for-ec2-nodes-to-access-s3) +1. **Bucket Name**: The name of the S3 bucket where backup files will be stored. +1. **Region** (optional): The AWS [region](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) where the S3 bucket is located. This field isn't needed for configuring MinIO. +1. **Folder** (optional): The name of the folder in the S3 bucket where backup files will be stored. +1. **Endpoint**: The [endpoint](https://docs.aws.amazon.com/general/latest/gr/s3.html) that is used to access S3 in the region of your bucket. +1. **Endpoint CA** (optional): This should be the Base64 encoded CA cert. For an example, refer to the [example MinIO configuration.](#example-minio-configuration) +1. **Skip TLS Verifications** (optional): Set to true if you are not using TLS. 
+ + +| YAML Directive Name | Description | Required | +| ---------------- | ---------------- | ------------ | +| `credentialSecretName` | If you need to use an AWS access key and secret key to access the S3 bucket, create a secret containing your credentials under the keys `accessKey` and `secretKey`. It can be in any namespace as long as you provide that namespace in `credentialSecretNamespace`. An example secret is [here.](#example-credentialsecret) This directive is unnecessary if the nodes running your operator are in EC2 and set up with IAM permissions that allow them to access S3, as described in [this section.](#iam-permissions-for-ec2-nodes-to-access-s3) | | +| `credentialSecretNamespace` | The namespace of the secret containing the credentials to access S3. This directive is unnecessary if the nodes running your operator are in EC2 and set up with IAM permissions that allow them to access S3, as described in [this section.](#iam-permissions-for-ec2-nodes-to-access-s3) | | +| `bucketName` | The name of the S3 bucket where backup files will be stored. | ✓ | +| `folder` | The name of the folder in the S3 bucket where backup files will be stored. | | +| `region` | The AWS [region](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) where the S3 bucket is located. This field isn't needed for configuring MinIO. | | +| `endpoint` | The [endpoint](https://docs.aws.amazon.com/general/latest/gr/s3.html) that is used to access S3 in the region of your bucket. | ✓ | +| `endpointCA` | This should be the Base64 encoded CA cert. For an example, refer to the [example MinIO configuration.](#example-minio-configuration) | | +| `insecureTLSSkipVerify` | Set to true if you are not using TLS. | | + +### Example S3 Storage Configuration + +```yaml +s3: + credentialSecretName: s3-creds + credentialSecretNamespace: default + bucketName: rancher-backups + folder: rancher + region: us-west-2 + endpoint: s3.us-west-2.amazonaws.com +``` + +### Example MinIO Configuration + +```yaml +s3: + credentialSecretName: minio-creds + bucketName: rancherbackups + endpoint: minio.35.202.130.254.xip.io + endpointCA: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHakNDQWdLZ0F3SUJBZ0lKQUtpWFZpNEpBb0J5TUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIzUmxjM1F0WTJFd0hoY05NakF3T0RNd01UZ3lOVFE1V2hjTk1qQXhNREk1TVRneU5UUTVXakFTTVJBdwpEZ1lEVlFRRERBZDBaWE4wTFdOaE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCjA4dnV3Q2Y0SEhtR2Q2azVNTmozRW5NOG00T2RpS3czSGszd1NlOUlXQkwyVzY5WDZxenBhN2I2M3U2L05mMnkKSnZWNDVqeXplRFB6bFJycjlpbEpWaVZ1NFNqWlFjdG9jWmFCaVNsL0xDbEFDdkFaUlYvKzN0TFVTZSs1ZDY0QQpWcUhDQlZObU5xM3E3aVY0TE1aSVpRc3N6K0FxaU1Sd0pOMVVKQTZ6V0tUc2Yzc3ByQ0J2dWxJWmZsVXVETVAyCnRCTCt6cXZEc0pDdWlhNEEvU2JNT29tVmM2WnNtTGkwMjdub3dGRld3MnRpSkM5d0xMRE14NnJoVHQ4a3VvVHYKQXJpUjB4WktiRU45L1Uzb011eUVKbHZyck9YS2ZuUDUwbk8ycGNaQnZCb3pUTStYZnRvQ1d5UnhKUmI5cFNTRApKQjlmUEFtLzNZcFpMMGRKY2sxR1h3SURBUUFCbzNNd2NUQWRCZ05WSFE0RUZnUVU5NHU4WXlMdmE2MTJnT1pyCm44QnlFQ2NucVFjd1FnWURWUjBqQkRzd09ZQVU5NHU4WXlMdmE2MTJnT1pybjhCeUVDY25xUWVoRnFRVU1CSXgKRURBT0JnTlZCQU1NQjNSbGMzUXRZMkdDQ1FDb2wxWXVDUUtBY2pBTUJnTlZIUk1FQlRBREFRSC9NQTBHQ1NxRwpTSWIzRFFFQkN3VUFBNElCQVFER1JRZ1RtdzdVNXRQRHA5Q2psOXlLRW9Vd2pYWWM2UlAwdm1GSHpubXJ3dUVLCjFrTkVJNzhBTUw1MEpuS29CY0ljVDNEeGQ3TGdIbTNCRE5mVVh2anArNnZqaXhJYXR2UWhsSFNVaWIyZjJsSTkKVEMxNzVyNCtROFkzelc1RlFXSDdLK08vY3pJTGh5ei93aHRDUlFkQ29lS1dXZkFiby8wd0VSejZzNkhkVFJzNwpHcWlGNWZtWGp6S0lOcTBjMHRyZ0xtalNKd1hwSnU0ZnNGOEcyZUh4b2pOKzdJQ1FuSkg5cGRIRVpUQUtOL2ppCnIvem04RlZtd1kvdTBndEZneWVQY1ZWbXBqRm03Y0ZOSkc4Y2ZYd0QzcEFwVjhVOGNocTZGeFBHTkVvWFZnclMKY1VRMklaU0RJd1FFY3FvSzFKSGdCUWw2RXBaUVpWMW1DRklrdFBwSQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t +``` +### Example credentialSecret + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: creds +type: Opaque +data: + accessKey: <base64-encoded access key> + secretKey: <base64-encoded secret key> +``` + +### IAM Permissions for EC2 Nodes to Access S3 + +There are two ways to set up the `rancher-backup` operator to use S3 as the backup storage location. + +One way is to configure the `credentialSecretName` in the Backup custom resource, which refers to AWS credentials that have access to S3. + +If the cluster nodes are in Amazon EC2, the S3 access can also be set up by assigning IAM permissions to the EC2 nodes so that they can access S3. + +To allow a node to access S3, follow the instructions in the [AWS documentation](https://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-access-s3-bucket/) to create an IAM role for EC2. When you add a custom policy to the role, add the following permissions, and replace the `Resource` with your bucket name: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "s3:ListBucket" + ], + "Resource": [ + "arn:aws:s3:::rancher-backups" + ] + }, + { + "Effect": "Allow", + "Action": [ + "s3:PutObject", + "s3:GetObject", + "s3:DeleteObject", + "s3:PutObjectAcl" + ], + "Resource": [ + "arn:aws:s3:::rancher-backups/*" + ] + } + ] +} +``` + +After the role is created, and you have attached the corresponding instance profile to your EC2 instance(s), the `credentialSecretName` directive can be left empty in the Backup custom resource. 
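To illustrate, a Backup custom resource that relies on the EC2 instance profile instead of a credential secret might look like the following sketch (bucket, folder, and region are placeholders):

```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: s3-iam-backup
spec:
  storageLocation:
    s3:
      # no credentialSecretName: the node's IAM role is used instead
      bucketName: rancher-backups
      folder: rancher
      region: us-west-2
      endpoint: s3.us-west-2.amazonaws.com
  resourceSetName: rancher-resource-set
```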
+ +# Examples + +For example Backup custom resources, refer to [this page.](../../examples/#backup) \ No newline at end of file diff --git a/content/rancher/v2.x/en/backups/configuration/restore-config/_index.md b/content/rancher/v2.x/en/backups/configuration/restore-config/_index.md new file mode 100644 index 00000000000..fbd9b1368c2 --- /dev/null +++ b/content/rancher/v2.x/en/backups/configuration/restore-config/_index.md @@ -0,0 +1,87 @@ +--- +title: Restore Configuration +shortTitle: Restore +weight: 2 +--- + +The Restore Create page lets you provide details of the backup to restore from. + +{{< img "/img/rancher/backup_restore/restore/restore.png" "">}} + +- [Backup Source](#backup-source) + - [An Existing Backup Config](#an-existing-backup-config) + - [The default storage target](#the-default-storage-target) + - [An S3-compatible object store](#an-s3-compatible-object-store) +- [Encryption](#encryption) +- [Prune during restore](#prune-during-restore) +- [Getting the Backup Filename from S3](#getting-the-backup-filename-from-s3) + +# Backup Source +Provide details of the backup file and its storage location, which the operator will then use to perform the restore. Select from the following options to provide these details: + +### An existing backup config + +Selecting this option will populate the **Target Backup** dropdown with the Backups available in this cluster. Select the Backup from the dropdown, and that will fill out the **Backup Filename** field for you, and will also pass the backup source information from the selected Backup to the operator. + +{{< img "/img/rancher/backup_restore/restore/existing.png" "">}} + +If the Backup custom resource does not exist in the cluster, you need to get the exact filename and provide the backup source details with the default storage target or an S3-compatible object store. + + +### The default storage target + +Select this option if you are restoring from a backup file that exists in the default storage location configured at the operator level. The operator-level configuration is the storage location that was configured when the `rancher-backup` operator was installed or upgraded. Provide the exact filename in the **Backup Filename** field. + +{{< img "/img/rancher/backup_restore/restore/default.png" "">}} + +### An S3-compatible object store + +Select this option if no default storage location is configured at the operator level, OR if the backup file exists in a different S3 bucket than the one configured as the default storage location. Provide the exact filename in the **Backup Filename** field. Refer to [this section](#getting-the-backup-filename-from-s3) for exact steps on getting the backup filename from S3. Fill in all the details for the S3-compatible object store. Its fields are exactly the same as the ones for the `backup.StorageLocation` configuration in the [Backup custom resource.](../../configuration/backup-config/#storage-location) + +{{< img "/img/rancher/backup_restore/restore/s3store.png" "">}} + +# Encryption + +If the backup was created with encryption enabled, its file will have the `.enc` suffix. Choosing such a Backup, or providing a backup filename with the `.enc` suffix, will display another dropdown named **Encryption Config Secret**. + +{{< img "/img/rancher/backup_restore/restore/encryption.png" "">}} + +The Secret selected from this dropdown must have the same contents as the one used for the Backup custom resource while performing the backup. 
If the encryption configuration doesn't match, the restore will fail. + +The `Encryption Config Secret` dropdown will filter out and list only those Secrets that contain the key `encryption-provider-config.yaml`. + +| YAML Directive Name | Description | +| ---------------- | ---------------- | +| `encryptionConfigSecretName` | Provide the name of the Secret from the `cattle-resources-system` namespace that contains the encryption config file. | + +> **Important** +This field should only be set if the backup was created with encryption enabled. Providing the incorrect encryption config will cause the restore to fail. + +# Prune During Restore + +* **Prune**: In order to fully restore Rancher from a backup, and to go back to the exact state it was in when the backup was performed, we need to delete any additional resources that were created by Rancher after the backup was taken. The operator does so if the **Prune** flag is enabled. Prune is enabled by default and it is recommended to keep it enabled. +* **Delete Timeout**: This is the amount of time the operator will wait while deleting a resource before editing the resource to remove finalizers and attempting deletion again. + +| YAML Directive Name | Description | +| ---------------- | ---------------- | +| `prune` | Delete the resources managed by Rancher that are not present in the backup (Recommended). | +| `deleteTimeoutSeconds` | Amount of time the operator will wait while deleting a resource before editing the resource to remove finalizers and attempting deletion again. | + +# Getting the Backup Filename from S3 + +This is the name of the backup file that the `rancher-backup` operator will use to perform the restore. + +To obtain this file name from S3, go to your S3 bucket (and folder if it was specified while performing backup). + +Copy the filename and store it in your Restore custom resource. So assuming the name of your backup file is `backupfile`, + +- If your bucket name is `s3bucket` and no folder was specified, then the `backupFilename` to use will be `backupfile`. +- If your bucket name is `s3bucket` and the base folder is `s3folder`, the `backupFilename` to use is only `backupfile`. +- If there is a subfolder inside `s3folder` called `s3sub`, and that has your backup file, then the `backupFilename` to use is `s3sub/backupfile`. + +| YAML Directive Name | Description | +| ---------------- | ---------------- | +| `backupFilename` | This is the name of the backup file that the `rancher-backup` operator will use to perform the restore. | diff --git a/content/rancher/v2.x/en/backups/configuration/storage-config/_index.md b/content/rancher/v2.x/en/backups/configuration/storage-config/_index.md new file mode 100644 index 00000000000..1ebb4259b85 --- /dev/null +++ b/content/rancher/v2.x/en/backups/configuration/storage-config/_index.md @@ -0,0 +1,112 @@ +--- +title: Backup Storage Location Configuration +shortTitle: Storage +weight: 3 +--- + +Configure a storage location where all backups are saved by default. You will have the option to override this with each backup, but will be limited to using an S3-compatible object store. + +Only one storage location can be configured at the operator level. 
+ +- [Storage Location Configuration](#storage-location-configuration) + - [No Default Storage Location](#no-default-storage-location) + - [S3-compatible Object Store](#s3-compatible-object-store) + - [Use an existing StorageClass](#existing-storageclass) + - [Use an existing PersistentVolume](#existing-persistent-volume) +- [Example values.yaml for the rancher-backup Helm Chart](#example-values-yaml-for-the-rancher-backup-helm-chart) + +# Storage Location Configuration + +### No Default Storage Location + +You can choose to not have any operator-level storage location configured. If you select this option, you must configure an S3-compatible object store as the storage location for each individual backup. + +### S3-compatible Object Store + +| Parameter | Description | +| -------------- | -------------- | +| Credential Secret | Choose the credentials for S3 from your secrets in Rancher. | +| Bucket Name | Enter the name of the [S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html) where the backups will be stored. Default: `rancherbackups`. | +| Region | The [AWS region](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) where the S3 bucket is located. | +| Folder | The [folder in the S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html) where the backups will be stored. | +| Endpoint | The [S3 endpoint.](https://docs.aws.amazon.com/general/latest/gr/s3.html) For example, `s3.us-west-2.amazonaws.com`. | +| Endpoint CA | The base64-encoded CA cert used for the S3 endpoint. | +| insecureTLSSkipVerify | Set to true if you are not using TLS. | + +### Existing StorageClass + +Installing the `rancher-backup` chart by selecting the StorageClass option will create a Persistent Volume Claim (PVC), and Kubernetes will in turn dynamically provision a Persistent Volume (PV) where all the backups will be saved by default. + +For information about creating storage classes, refer to [this section.]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/#1-add-a-storage-class-and-configure-it-to-use-your-storage-provider) + +> **Important** +It is highly recommended to use a StorageClass with a reclaim policy of "Retain". Otherwise if the PVC created by the `rancher-backup` chart gets deleted (either during app upgrade, or accidentally), the PV will get deleted too, which means all backups saved in it will get deleted. +If no such StorageClass is available, after the PV is provisioned, make sure to edit its reclaim policy and set it to "Retain" before storing backups in it. + +### Existing Persistent Volume + +Select an existing Persistent Volume (PV) that will be used to store your backups. For information about creating PersistentVolumes in Rancher, refer to [this section.]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/#2-add-a-persistent-volume-that-refers-to-the-persistent-storage) + +> **Important** +It is highly recommended to use a Persistent Volume with a reclaim policy of "Retain". Otherwise if the PVC created by the `rancher-backup` chart gets deleted (either during app upgrade, or accidentally), the PV will get deleted too, which means all backups saved in it will get deleted. + + +# Example values.yaml for the rancher-backup Helm Chart + + +This values.yaml file can be used to configure the `rancher-backup` operator when the Helm CLI is used to install it. 
For more information about `values.yaml` files and configuring Helm charts during installation, refer to the [Helm documentation.](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing) + +```yaml +image: + repository: rancher/rancher-backup + tag: v0.0.1-rc10 + +## Default s3 bucket for storing all backup files created by the rancher-backup operator +s3: + enabled: false + ## credentialSecretName if set, should be the name of the Secret containing AWS credentials. + ## To use IAM Role, don't set this field + credentialSecretName: creds + credentialSecretNamespace: "" + region: us-west-2 + bucketName: rancherbackups + folder: base folder + endpoint: s3.us-west-2.amazonaws.com + endpointCA: base64 encoded CA cert + # insecureTLSSkipVerify: optional + +## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ +## If persistence is enabled, operator will create a PVC with mountPath /var/lib/backups +persistence: + enabled: false + + ## If defined, storageClassName: + ## If set to "-", storageClassName: "", which disables dynamic provisioning + ## If undefined (the default) or set to null, no storageClassName spec is + ## set, choosing the default provisioner. (gp2 on AWS, standard on + ## GKE, AWS & OpenStack). + ## Refer to https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1 + ## + storageClass: "-" + + ## If you want to disable dynamic provisioning by setting storageClass to "-" above, + ## and want to target a particular PV, provide name of the target volume + volumeName: "" + + ## Only certain StorageClasses allow resizing PVs; Refer to https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/ + size: 2Gi + + +global: + cattle: + systemDefaultRegistry: "" + +nodeSelector: {} + +tolerations: [] + +affinity: {} +``` \ No newline at end of file diff --git a/content/rancher/v2.x/en/backups/docker-installs/_index.md b/content/rancher/v2.x/en/backups/docker-installs/_index.md new file mode 100644 index 00000000000..3f818e0faf8 --- /dev/null +++ b/content/rancher/v2.x/en/backups/docker-installs/_index.md @@ -0,0 +1,10 @@ +--- +title: Backup and Restore for Rancher Installed with Docker +shortTitle: Docker Installs +weight: 10 +--- + +The steps for backing up and restoring Rancher installed with Docker did not change in Rancher v2.5. + +- [Backups](./docker-backups) +- [Restores](./docker-restores) \ No newline at end of file diff --git a/content/rancher/v2.x/en/backups/backups/single-node-backups/_index.md b/content/rancher/v2.x/en/backups/docker-installs/docker-backups/_index.md similarity index 97% rename from content/rancher/v2.x/en/backups/backups/single-node-backups/_index.md rename to content/rancher/v2.x/en/backups/docker-installs/docker-backups/_index.md index ae0ee7b1ae7..d06c933dda6 100644 --- a/content/rancher/v2.x/en/backups/backups/single-node-backups/_index.md +++ b/content/rancher/v2.x/en/backups/docker-installs/docker-backups/_index.md @@ -1,10 +1,13 @@ --- title: Backing up Rancher Installed with Docker +shortTitle: Docker Installs weight: 3 aliases: - /rancher/v2.x/en/installation/after-installation/single-node-backup-and-restoration/ + - /rancher/v2.x/en/backups/backups/single-node-backups/ --- + After completing your Docker installation of Rancher, we recommend creating backups of it on a regular basis. Having a recent backup will let you recover quickly from an unexpected disaster. 
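At a high level, the procedure on this page stops Rancher, archives its `/var/lib/rancher` volume through a data container, and starts Rancher again. A condensed sketch of the commands (placeholders in angle brackets; the full steps follow in this page):

```
docker stop <RANCHER_CONTAINER_NAME>
docker create --volumes-from <RANCHER_CONTAINER_NAME> --name rancher-data rancher/rancher:<RANCHER_CONTAINER_TAG>
docker run --volumes-from rancher-data -v $PWD:/backup --rm busybox tar pzcvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz /var/lib/rancher
docker start <RANCHER_CONTAINER_NAME>
```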
## Before You Start
diff --git a/content/rancher/v2.x/en/backups/restorations/single-node-restoration/_index.md b/content/rancher/v2.x/en/backups/docker-installs/docker-restores/_index.md
similarity index 90%
rename from content/rancher/v2.x/en/backups/restorations/single-node-restoration/_index.md
rename to content/rancher/v2.x/en/backups/docker-installs/docker-restores/_index.md
index aefa51a9da5..6f79b790e63 100644
--- a/content/rancher/v2.x/en/backups/restorations/single-node-restoration/_index.md
+++ b/content/rancher/v2.x/en/backups/docker-installs/docker-restores/_index.md
@@ -1,16 +1,17 @@
 ---
 title: Restoring Backups—Docker Installs
 shortTitle: Docker Installs
-weight: 365
+weight: 3
 aliases:
 - /rancher/v2.x/en/installation/after-installation/single-node-backup-and-restoration/
+ - /rancher/v2.x/en/backups/restorations/single-node-restoration
 ---

 If you encounter a disaster scenario, you can restore your Rancher Server to your most recent backup.

 ## Before You Start

-During restoration of your backup, you'll enter a series of commands, filling placeholders with data from your environment. These placeholders are denoted with angled brackets and all capital letters (`<PLACEHOLDER>`). Here's an example of a command with a placeholder:
+During restore of your backup, you'll enter a series of commands, filling placeholders with data from your environment. These placeholders are denoted with angled brackets and all capital letters (`<PLACEHOLDER>`). Here's an example of a command with a placeholder:

 ```
 docker run --volumes-from <RANCHER_CONTAINER_NAME> -v $PWD:/backup \
@@ -68,4 +69,4 @@ Using a [backup]({{}}/rancher/v2.x/en/backups/backups/single-node-backu
 docker start <RANCHER_CONTAINER_NAME>
 ```

-1. Wait a few moments and then open Rancher in a web browser. Confirm that the restoration succeeded and that your data is restored.
+1. Wait a few moments and then open Rancher in a web browser. Confirm that the restore succeeded and that your data is restored.
diff --git a/content/rancher/v2.x/en/backups/examples/_index.md b/content/rancher/v2.x/en/backups/examples/_index.md
new file mode 100644
index 00000000000..b73e0405afd
--- /dev/null
+++ b/content/rancher/v2.x/en/backups/examples/_index.md
@@ -0,0 +1,301 @@
---
title: Examples
weight: 5
---

This section contains examples of Backup and Restore custom resources.

The default backup storage location is configured when the `rancher-backup` operator is installed or upgraded.

Encrypted backups can only be restored if the Restore custom resource uses the same encryption configuration secret that was used to create the backup.
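In other words, the `encryptionConfigSecretName` must match between the two resources. A minimal sketch of such a matching pair is shown here (resource names are illustrative; full, realistic examples follow below):

```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: encrypted-backup             # illustrative name
spec:
  resourceSetName: rancher-resource-set
  encryptionConfigSecretName: encryptionconfig   # same secret...
---
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-encrypted-backup     # illustrative name
spec:
  backupFilename: <filename-produced-by-the-backup-above>
  encryptionConfigSecretName: encryptionconfig   # ...referenced here
```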
+ +- [Backup](#backup) + - [Backup in the default location with encryption](#backup-in-the-default-location-with-encryption) + - [Recurring backup in the default location](#recurring-backup-in-the-default-location) + - [Encrypted recurring backup in the default location](#encrypted-recurring-backup-in-the-default-location) + - [Encrypted backup in Minio](#encrypted-backup-in-minio) + - [Backup in S3 using AWS credential secret](#backup-in-s3-using-aws-credential-secret) + - [Recurring backup in S3 using AWS credential secret](#recurring-backup-in-s3-using-aws-credential-secret) + - [Backup from EC2 nodes with IAM permission to access S3](#backup-from-ec2-nodes-with-iam-permission-to-access-s3) +- [Restore](#restore) + - [Restore using the default backup file location](#restore-using-the-default-backup-file-location) + - [Restore for Rancher migration](#restore-for-rancher-migration) + - [Restore from encrypted backup](#restore-from-encrypted-backup) + - [Restore an encrypted backup from Minio](#restore-an-encrypted-backup-from-minio) + - [Restore from backup using an AWS credential secret to access S3](#restore-from-backup-using-an-aws-credential-secret-to-access-s3) + - [Restore from EC2 nodes with IAM permissions to access S3](#restore-from-ec2-nodes-with-iam-permissions-to-access-s3) +- [Example Credential Secret for Storing Backups in S3](#example-credential-secret-for-storing-backups-in-s3) +- [Example EncryptionConfiguration](#example-encryptionconfiguration) + +# Backup + +This section contains example Backup custom resources. + +### Backup in the Default Location with Encryption + +```yaml +apiVersion: resources.cattle.io/v1 +kind: Backup +metadata: + name: default-location-encrypted-backup +spec: + resourceSetName: rancher-resource-set + encryptionConfigSecretName: encryptionconfig +``` + +### Recurring Backup in the Default Location + +```yaml +apiVersion: resources.cattle.io/v1 +kind: Backup +metadata: + name: default-location-recurring-backup +spec: + resourceSetName: rancher-resource-set + schedule: "@every 1h" + retentionCount: 10 +``` + +### Encrypted Recurring Backup in the Default Location + +```yaml +apiVersion: resources.cattle.io/v1 +kind: Backup +metadata: + name: default-enc-recurring-backup +spec: + resourceSetName: rancher-resource-set + encryptionConfigSecretName: encryptionconfig + schedule: "@every 1h" + retentionCount: 3 +``` + +### Encrypted Backup in Minio + +```yaml +apiVersion: resources.cattle.io/v1 +kind: Backup +metadata: + name: minio-backup +spec: + storageLocation: + s3: + credentialSecretName: minio-creds + credentialSecretNamespace: default + bucketName: rancherbackups + endpoint: minio.xip.io + endpointCA: LS0tLS1CRUdJTi3VUFNQkl5UUT.....pbEpWaVzNkRS0tLS0t + resourceSetName: rancher-resource-set + encryptionConfigSecretName: encryptionconfig +``` + +### Backup in S3 Using AWS Credential Secret + +```yaml +apiVersion: resources.cattle.io/v1 +kind: Backup +metadata: + name: s3-backup +spec: + storageLocation: + s3: + credentialSecretName: s3-creds + credentialSecretNamespace: default + bucketName: rancher-backups + folder: ecm1 + region: us-west-2 + endpoint: s3.us-west-2.amazonaws.com + resourceSetName: rancher-resource-set + encryptionConfigSecretName: encryptionconfig +``` + +### Recurring Backup in S3 Using AWS Credential Secret + +```yaml +apiVersion: resources.cattle.io/v1 +kind: Backup +metadata: + name: s3-recurring-backup +spec: + storageLocation: + s3: + credentialSecretName: s3-creds + credentialSecretNamespace: default + bucketName: 
rancher-backups + folder: ecm1 + region: us-west-2 + endpoint: s3.us-west-2.amazonaws.com + resourceSetName: rancher-resource-set + encryptionConfigSecretName: encryptionconfig + schedule: "@every 1h" + retentionCount: 10 +``` + +### Backup from EC2 Nodes with IAM Permission to Access S3 + +This example shows that the AWS credential secret does not have to be provided to create a backup if the nodes running `rancher-backup` have [these permissions for access to S3.](../configuration/backup-config/#iam-permissions-for-ec2-nodes-to-access-s3) + +```yaml +apiVersion: resources.cattle.io/v1 +kind: Backup +metadata: + name: s3-iam-backup +spec: + storageLocation: + s3: + bucketName: rancher-backups + folder: ecm1 + region: us-west-2 + endpoint: s3.us-west-2.amazonaws.com + resourceSetName: rancher-resource-set + encryptionConfigSecretName: encryptionconfig +``` + +# Restore + +This section contains example Restore custom resources. + +### Restore Using the Default Backup File Location + +```yaml +apiVersion: resources.cattle.io/v1 +kind: Restore +metadata: + name: restore-default +spec: + backupFilename: default-location-recurring-backup-752ecd87-d958-4d20-8350-072f8d090045-2020-09-26T12-29-54-07-00.tar.gz +# encryptionConfigSecretName: test-encryptionconfig +``` + +### Restore for Rancher Migration +```yaml +apiVersion: resources.cattle.io/v1 +kind: Restore +metadata: + name: restore-migration +spec: + backupFilename: backup-b0450532-cee1-4aa1-a881-f5f48a007b1c-2020-09-15T07-27-09Z.tar.gz + prune: false + storageLocation: + s3: + credentialSecretName: s3-creds + credentialSecretNamespace: default + bucketName: rancher-backups + folder: ecm1 + region: us-west-2 + endpoint: s3.us-west-2.amazonaws.com +``` + +### Restore from Encrypted Backup + +```yaml +apiVersion: resources.cattle.io/v1 +kind: Restore +metadata: + name: restore-encrypted +spec: + backupFilename: default-test-s3-def-backup-c583d8f2-6daf-4648-8ead-ed826c591471-2020-08-24T20-47-05Z.tar.gz + encryptionConfigSecretName: encryptionconfig +``` + +### Restore an Encrypted Backup from Minio + +```yaml +apiVersion: resources.cattle.io/v1 +kind: Restore +metadata: + name: restore-minio +spec: + backupFilename: default-minio-backup-demo-aa5c04b7-4dba-4c48-9ac4-ab7916812eaa-2020-08-30T13-18-17-07-00.tar.gz + storageLocation: + s3: + credentialSecretName: minio-creds + credentialSecretNamespace: default + bucketName: rancherbackups + endpoint: minio.xip.io + endpointCA: LS0tLS1CRUdJTi3VUFNQkl5UUT.....pbEpWaVzNkRS0tLS0t + encryptionConfigSecretName: test-encryptionconfig +``` + +### Restore from Backup Using an AWS Credential Secret to Access S3 + +```yaml +apiVersion: resources.cattle.io/v1 +kind: Restore +metadata: + name: restore-s3-demo +spec: + backupFilename: test-s3-recurring-backup-752ecd87-d958-4d20-8350-072f8d090045-2020-09-26T12-49-34-07-00.tar.gz.enc + storageLocation: + s3: + credentialSecretName: s3-creds + credentialSecretNamespace: default + bucketName: rancher-backups + folder: ecm1 + region: us-west-2 + endpoint: s3.us-west-2.amazonaws.com + encryptionConfigSecretName: test-encryptionconfig +``` + +### Restore from EC2 Nodes with IAM Permissions to Access S3 + +This example shows that the AWS credential secret does not have to be provided to restore from backup if the nodes running `rancher-backup` have [these permissions for access to S3.](../configuration/backup-config/#iam-permissions-for-ec2-nodes-to-access-s3) + +```yaml +apiVersion: resources.cattle.io/v1 +kind: Restore +metadata: + name: restore-s3-demo +spec: + 
backupFilename: default-test-s3-recurring-backup-84bf8dd8-0ef3-4240-8ad1-fc7ec308e216-2020-08-24T10#52#44-07#00.tar.gz
  storageLocation:
    s3:
      bucketName: rajashree-backup-test
      folder: ecm1
      region: us-west-2
      endpoint: s3.us-west-2.amazonaws.com
  encryptionConfigSecretName: test-encryptionconfig
```

# Example Credential Secret for Storing Backups in S3

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: creds
type: Opaque
data:
  accessKey: <base64-encoded-access-key>
  secretKey: <base64-encoded-secret-key>
```

# Example EncryptionConfiguration

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aesgcm:
          keys:
            - name: key1
              secret: c2VjcmV0IGlzIHNlY3VyZQ==
            - name: key2
              secret: dGhpcyBpcyBwYXNzd29yZA==
      - aescbc:
          keys:
            - name: key1
              secret: c2VjcmV0IGlzIHNlY3VyZQ==
            - name: key2
              secret: dGhpcyBpcyBwYXNzd29yZA==
      - secretbox:
          keys:
            - name: key1
              secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
```

diff --git a/content/rancher/v2.x/en/backups/legacy/_index.md b/content/rancher/v2.x/en/backups/legacy/_index.md
new file mode 100644
index 00000000000..5033e89d4db
--- /dev/null
+++ b/content/rancher/v2.x/en/backups/legacy/_index.md
@@ -0,0 +1,20 @@
---
title: Legacy Backup and Restore Documentation
weight: 6
---

This section is devoted to protecting your data in a disaster scenario.

To protect yourself from a disaster scenario, you should create backups on a regular basis.

 - Rancher server backups:
   - [Rancher installed on a K3s Kubernetes cluster](./backup/k3s-backups)
   - [Rancher installed on an RKE Kubernetes cluster](./backup/ha-backups)
   - [Backing up Rancher Launched Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/)

In a disaster scenario, you can restore your `etcd` database by restoring a backup.

 - [Rancher Server Restorations]({{}}/rancher/v2.x/en/backups/restorations)
 - [Restoring Rancher Launched Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-admin/restoring-etcd/)

For Rancher installed with Docker, the backup and restore procedure is the same in Rancher v2.5. The backup and restore instructions for Docker installs are [here.]({{}}/rancher/v2.x/en/backups/docker-installs)
\ No newline at end of file
diff --git a/content/rancher/v2.x/en/backups/backups/_index.md b/content/rancher/v2.x/en/backups/legacy/backup/_index.md
similarity index 78%
rename from content/rancher/v2.x/en/backups/backups/_index.md
rename to content/rancher/v2.x/en/backups/legacy/backup/_index.md
index 072c1913cac..2bad5e462b6 100644
--- a/content/rancher/v2.x/en/backups/backups/_index.md
+++ b/content/rancher/v2.x/en/backups/legacy/backup/_index.md
@@ -1,14 +1,15 @@
 ---
-title: Backups
+title: Backup
 weight: 50
 aliases:
 - /rancher/v2.x/en/installation/after-installation/
 - /rancher/v2.x/en/backups/
+ - /rancher/v2.x/en/backups/backups
 ---

 This section contains information about how to create backups of your Rancher data and how to restore them in a disaster scenario.
- [Backing up Rancher installed on a K3s Kubernetes cluster](./k3s-backups) - [Backing up Rancher installed on an RKE Kubernetes cluster](./ha-backups/) -- [Backing up Rancher installed with Docker](./single-node-backups/) +- [Backing up Rancher installed with Docker]({{}}/rancher/v2.x/en/backups/docker-installs/docker-backups) If you are looking to back up your [Rancher launched Kubernetes cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), please refer [here]({{}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/). diff --git a/content/rancher/v2.x/en/backups/backups/ha-backups/_index.md b/content/rancher/v2.x/en/backups/legacy/backup/ha-backups/_index.md similarity index 98% rename from content/rancher/v2.x/en/backups/backups/ha-backups/_index.md rename to content/rancher/v2.x/en/backups/legacy/backup/ha-backups/_index.md index 808e2377dd6..b659ba7a1e9 100644 --- a/content/rancher/v2.x/en/backups/backups/ha-backups/_index.md +++ b/content/rancher/v2.x/en/backups/legacy/backup/ha-backups/_index.md @@ -1,9 +1,12 @@ --- title: Backing up Rancher Installed on an RKE Kubernetes Cluster +shortTitle: RKE Installs weight: 2 aliases: - /rancher/v2.x/en/installation/after-installation/k8s-install-backup-and-restoration/ - /rancher/v2.x/en/installation/backups-and-restoration/ha-backup-and-restoration/ + - /rancher/v2.x/en/backups/backups/ha-backups + - /rancher/v2.x/en/backups/backups/k8s-backups/ha-backups --- This section describes how to create backups of your high-availability Rancher install. diff --git a/content/rancher/v2.x/en/backups/backups/k3s-backups/_index.md b/content/rancher/v2.x/en/backups/legacy/backup/k3s-backups/_index.md similarity index 90% rename from content/rancher/v2.x/en/backups/backups/k3s-backups/_index.md rename to content/rancher/v2.x/en/backups/legacy/backup/k3s-backups/_index.md index 01408849bb0..f2aea16e349 100644 --- a/content/rancher/v2.x/en/backups/backups/k3s-backups/_index.md +++ b/content/rancher/v2.x/en/backups/legacy/backup/k3s-backups/_index.md @@ -1,6 +1,10 @@ --- title: Backing up Rancher Installed on a K3s Kubernetes Cluster +shortTitle: K3s Installs weight: 1 +aliases: + - /rancher/v2.x/en/backups/backups/k3s-backups + - /rancher/v2.x/en/backups/backups/k8s-backups/k3s-backups --- When Rancher is installed on a high-availability Kubernetes cluster, we recommend using an external database to store the cluster data. diff --git a/content/rancher/v2.x/en/backups/restorations/_index.md b/content/rancher/v2.x/en/backups/legacy/restore/_index.md similarity index 82% rename from content/rancher/v2.x/en/backups/restorations/_index.md rename to content/rancher/v2.x/en/backups/legacy/restore/_index.md index 2f32ad1d9e2..9acfbf7ad73 100644 --- a/content/rancher/v2.x/en/backups/restorations/_index.md +++ b/content/rancher/v2.x/en/backups/legacy/restore/_index.md @@ -1,10 +1,12 @@ --- -title: Restorations +title: Restore weight: 1010 +aliases: + - /rancher/v2.x/en/backups/restorations --- If you lose the data on your Rancher Server, you can restore it if you have backups stored in a safe location. 
-- [Restoring Backups—Docker Installs]({{}}/rancher/v2.x/en/backups/restorations/single-node-restoration/)
+- [Restoring Backups—Docker Installs]({{}}/rancher/v2.x/en/backups/docker-installs/docker-restores)
 - [Restoring Backups—Kubernetes installs]({{}}/rancher/v2.x/en/backups/restorations/ha-restoration/)

 If you are looking to restore your [Rancher launched Kubernetes cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), please refer [here]({{}}/rancher/v2.x/en/cluster-admin/restoring-etcd/).
diff --git a/content/rancher/v2.x/en/backups/restorations/k3s-restoration/_index.md b/content/rancher/v2.x/en/backups/legacy/restore/k3s-restore/_index.md
similarity index 85%
rename from content/rancher/v2.x/en/backups/restorations/k3s-restoration/_index.md
rename to content/rancher/v2.x/en/backups/legacy/restore/k3s-restore/_index.md
index 16b242a6024..472c8a30239 100644
--- a/content/rancher/v2.x/en/backups/restorations/k3s-restoration/_index.md
+++ b/content/rancher/v2.x/en/backups/legacy/restore/k3s-restore/_index.md
@@ -1,6 +1,10 @@
 ---
 title: Restoring Rancher Installed on a K3s Kubernetes Cluster
+shortTitle: K3s Installs
 weight: 1
+aliases:
+ - /rancher/v2.x/en/backups/restorations/k3s-restoration
+ - /rancher/v2.x/en/backups/restorations/k8s-restore/k3s-restore
 ---

 When Rancher is installed on a high-availability Kubernetes cluster, we recommend using an external database to store the cluster data.
diff --git a/content/rancher/v2.x/en/backups/legacy/restore/rke-restore/_index.md b/content/rancher/v2.x/en/backups/legacy/restore/rke-restore/_index.md
new file mode 100644
index 00000000000..4826a35171c
--- /dev/null
+++ b/content/rancher/v2.x/en/backups/legacy/restore/rke-restore/_index.md
@@ -0,0 +1,136 @@
---
title: Restoring Backups—Kubernetes installs
shortTitle: RKE Installs
weight: 2
aliases:
 - /rancher/v2.x/en/installation/after-installation/ha-backup-and-restoration/
 - /rancher/v2.x/en/backups/restorations/ha-restoration
 - /rancher/v2.x/en/backups/restorations/k8s-restore/rke-restore
---

This procedure describes how to use RKE to restore a snapshot of the Rancher Kubernetes cluster.
This will restore the Kubernetes configuration and the Rancher database and state.

> **Note:** This document covers clusters set up with RKE v0.2.x or later; for older RKE versions, refer to the [RKE Documentation]({{}}/rke/latest/en/etcd-snapshots/restoring-from-backup).

## Restore Outline

- [1. Preparation](#1-preparation)
- [2. Place Snapshot](#2-place-snapshot)
- [3. Configure RKE](#3-configure-rke)
- [4. Restore the Database and bring up the Cluster](#4-restore-the-database-and-bring-up-the-cluster)

### 1. Preparation

It is advised that you run the restore from your local host or a jump box/bastion where your cluster YAML, RKE state file, and kubeconfig are stored. You will need the [RKE]({{}}/rke/latest/en/installation/) and [kubectl]({{}}/rancher/v2.x/en/faq/kubectl/) CLI utilities installed locally.

Prepare by creating three new nodes to be the target for the restored Rancher instance. We recommend that you start with fresh nodes and a clean state. For clarification on the requirements, review the [Installation Requirements](https://rancher.com/docs/rancher/v2.x/en/installation/requirements/).

Alternatively, you can reuse the existing nodes after clearing Kubernetes and Rancher configurations. This will destroy the data on these nodes. See [Node Cleanup]({{}}/rancher/v2.x/en/faq/cleaning-cluster-nodes/) for the procedure.
> **IMPORTANT:** Before starting the restore, make sure all the Kubernetes services on the old cluster nodes are stopped. We recommend powering off the nodes to be sure.

### 2. Place Snapshot

As of RKE v0.2.0, snapshots can be saved in an S3-compatible backend. To restore your cluster from a snapshot stored in an S3-compatible backend, you can skip this step and retrieve the snapshot in [4. Restore the Database and bring up the Cluster](#4-restore-the-database-and-bring-up-the-cluster). Otherwise, you will need to place the snapshot directly on one of the etcd nodes.

Pick one of the clean nodes that will have the etcd role assigned and place the zip-compressed snapshot file in `/opt/rke/etcd-snapshots` on that node.

> **Note:** Because of a current limitation in RKE, the restore process does not work correctly if `/opt/rke/etcd-snapshots` is an NFS share that is mounted on all nodes with the etcd role. The easiest options are to either keep `/opt/rke/etcd-snapshots` as a local folder during the restore process and only mount the NFS share there after it has been completed, or to only mount the NFS share to one node with an etcd role in the beginning.

### 3. Configure RKE

Use your original `rancher-cluster.yml` and `rancher-cluster.rkestate` files. If they are not stored in a version control system, it is a good idea to back them up before making any changes.

```
cp rancher-cluster.yml rancher-cluster.yml.bak
cp rancher-cluster.rkestate rancher-cluster.rkestate.bak
```

If the replaced or cleaned nodes have been configured with new IP addresses, modify the `rancher-cluster.yml` file to ensure the `address` and optional `internal_address` fields reflect the new addresses.

> **IMPORTANT:** You should not rename the `rancher-cluster.yml` or `rancher-cluster.rkestate` files. It is important that the filenames match each other.

### 4. Restore the Database and bring up the Cluster

You will now use the RKE command-line tool with the `rancher-cluster.yml` and the `rancher-cluster.rkestate` configuration files to restore the etcd database and bring up the cluster on the new nodes.

> **Note:** Ensure your `rancher-cluster.rkestate` is present in the same directory as the `rancher-cluster.yml` file before starting the restore, as this file contains the certificate data for the cluster.

#### Restoring from a Local Snapshot

When restoring etcd from a local snapshot, the snapshot is assumed to be located on the target node in the directory `/opt/rke/etcd-snapshots`.

```
rke etcd snapshot-restore --name snapshot-name --config ./rancher-cluster.yml
```

> **Note:** The `--name` parameter expects the filename of the snapshot without the extension.

#### Restoring from a Snapshot in S3

_Available as of RKE v0.2.0_

When restoring etcd from a snapshot located in an S3-compatible backend, the command needs the S3 information in order to connect to the S3 backend and retrieve the snapshot.

```
$ rke etcd snapshot-restore --config ./rancher-cluster.yml --name snapshot-name \
--s3 --access-key S3_ACCESS_KEY --secret-key S3_SECRET_KEY \
--bucket-name s3-bucket-name --s3-endpoint s3.amazonaws.com \
--folder folder-name # Available as of v2.3.0
```

#### Options for `rke etcd snapshot-restore`

S3-specific options are only available for RKE v0.2.0+.
| Option | Description | S3 Specific |
| --- | --- | --- |
| `--name` value | Specify snapshot name | |
| `--config` value | Specify an alternate cluster YAML file (default: "cluster.yml") [$RKE_CONFIG] | |
| `--s3` | Enable S3 as the snapshot storage backend | * |
| `--s3-endpoint` value | Specify s3 endpoint url (default: "s3.amazonaws.com") | * |
| `--access-key` value | Specify s3 accessKey | * |
| `--secret-key` value | Specify s3 secretKey | * |
| `--bucket-name` value | Specify s3 bucket name | * |
| `--folder` value | Specify s3 folder in the bucket name _Available as of v2.3.0_ | * |
| `--region` value | Specify the s3 bucket location (optional) | * |
| `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK]({{}}/rke/latest/en/config-options/#ssh-agent) | |
| `--ignore-docker-version` | [Disable Docker version check]({{}}/rke/latest/en/config-options/#supported-docker-versions) | |

#### Testing the Cluster

Once RKE completes, it will have created a credentials file in the local directory. Configure `kubectl` to use the `kube_config_rancher-cluster.yml` credentials file and check on the state of the cluster. See [Installing and Configuring kubectl]({{}}/rancher/v2.x/en/faq/kubectl/#configuration) for details.

#### Check Kubernetes Pods

Wait for the pods running in `kube-system` and `ingress-nginx`, and the `rancher` pod in `cattle-system`, to return to the `Running` state.

> **Note:** `cattle-cluster-agent` and `cattle-node-agent` pods will be in an `Error` or `CrashLoopBackOff` state until the Rancher server is up and the DNS/load balancer has been pointed at the new cluster.

```
kubectl get pods --all-namespaces

NAMESPACE       NAME                                    READY  STATUS    RESTARTS  AGE
cattle-system   cattle-cluster-agent-766585f6b-kj88m    0/1    Error     6         4m
cattle-system   cattle-node-agent-wvhqm                 0/1    Error     8         8m
cattle-system   rancher-78947c8548-jzlsr                0/1    Running   1         4m
ingress-nginx   default-http-backend-797c5bc547-f5ztd   1/1    Running   1         4m
ingress-nginx   nginx-ingress-controller-ljvkf          1/1    Running   1         8m
kube-system     canal-4pf9v                             3/3    Running   3         8m
kube-system     cert-manager-6b47fc5fc-jnrl5            1/1    Running   1         4m
kube-system     kube-dns-7588d5b5f5-kgskt               3/3    Running   3         4m
kube-system     kube-dns-autoscaler-5db9bbb766-s698d    1/1    Running   1         4m
kube-system     metrics-server-97bc649d5-6w7zc          1/1    Running   1         4m
kube-system     tiller-deploy-56c4cf647b-j4whh          1/1    Running   1         4m
```

#### Finishing Up

Rancher should now be running and available to manage your Kubernetes clusters. Review the [recommended architecture]({{}}/rancher/v2.x/en/installation/k8s-install/#recommended-architecture) for Kubernetes installations and update the endpoints for the Rancher DNS or the load balancer that you built during Step 1 of the Kubernetes install ([1. Create Nodes and Load Balancer]({{}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/#load-balancer)) to target the new cluster. Once the endpoints are updated, the agents on your managed clusters should automatically reconnect. This may take 10-15 minutes due to reconnect backoff timeouts.

> **IMPORTANT:** Remember to save your updated RKE config (`rancher-cluster.yml`), state file (`rancher-cluster.rkestate`), and `kubectl` credentials (`kube_config_rancher-cluster.yml`) in a safe place for future maintenance, for example in a version control system.
diff --git a/content/rancher/v2.x/en/backups/migrating-rancher/_index.md b/content/rancher/v2.x/en/backups/migrating-rancher/_index.md
new file mode 100644
index 00000000000..65f3437585b
--- /dev/null
+++ b/content/rancher/v2.x/en/backups/migrating-rancher/_index.md
@@ -0,0 +1,97 @@
---
title: Migrating Rancher to a New Cluster
weight: 3
---

If you are migrating Rancher to a new Kubernetes cluster, you don't need to install Rancher on the new cluster first. If Rancher is restored to a new cluster with Rancher already installed, it can cause problems.

### Prerequisites

These instructions assume you have [created a backup](../back-up-rancher) and you have already installed a new Kubernetes cluster where Rancher will be deployed.

It is required to use the same hostname that was set as the server URL in the first cluster.

The Rancher version must be v2.5.0 or later.

Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes clusters such as Amazon EKS clusters. For help installing Kubernetes, refer to the documentation of the Kubernetes distribution. One of Rancher's Kubernetes distributions may also be used:

- [RKE Kubernetes installation docs]({{}}/rke/latest/en/installation/)
- [K3s Kubernetes installation docs]({{}}/k3s/latest/en/installation/)

### 1. Install the rancher-backup Helm chart
```
helm repo add rancher-charts https://charts.rancher.io
helm repo update
helm install rancher-backup-crd rancher-charts/rancher-backup-crd -n cattle-resources-system --create-namespace
helm install rancher-backup rancher-charts/rancher-backup -n cattle-resources-system
```

### 2. Restore from backup using a Restore custom resource

If you are using an S3 store as the backup source and need to use your S3 credentials for the restore, create a secret in this cluster using your S3 credentials. The Secret data must have two keys, `accessKey` and `secretKey`, containing the S3 credentials, like this:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-creds
type: Opaque
data:
  accessKey: <base64-encoded-access-key>
  secretKey: <base64-encoded-secret-key>
```

This secret can be created in any namespace; with the above example, it will be created in the `default` namespace.

In the Restore custom resource, `prune` must be set to false.

Create a Restore custom resource like the example below:

```yaml
# migrationResource.yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-migration
spec:
  backupFilename: backup-b0450532-cee1-4aa1-a881-f5f48a007b1c-2020-09-15T07-27-09Z.tar.gz
  prune: false
  encryptionConfigSecretName: encryptionconfig
  storageLocation:
    s3:
      credentialSecretName: s3-creds
      credentialSecretNamespace: default
      bucketName: backup-test
      folder: ecm1
      region: us-west-2
      endpoint: s3.us-west-2.amazonaws.com
```

> **Important:** The field `encryptionConfigSecretName` must be set only if your backup was created with encryption enabled. Provide the name of the Secret containing the encryption config file. If you only have the encryption config file, but don't have a secret created with it in this cluster, use the following steps to create the secret:

1. The encryption configuration file must be named `encryption-provider-config.yaml`, and the `--from-file` flag must be used to create this secret.
So save your `EncryptionConfiguration` in a file called `encryption-provider-config.yaml` and run this command:

```
kubectl create secret generic encryptionconfig \
  --from-file=./encryption-provider-config.yaml \
  -n cattle-resources-system
```

Then apply the resource:

```
kubectl apply -f migrationResource.yaml
```

### 3. Install cert-manager

Follow the steps to [install cert-manager]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/install/#5-install-cert-manager) in the documentation about installing cert-manager on Kubernetes.

### 4. Bring up Rancher with Helm

Use the same version of Helm to install Rancher that was used on the first cluster.

```
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=<same-hostname-as-the-server-url-of-the-first-cluster>
```
\ No newline at end of file
diff --git a/content/rancher/v2.x/en/backups/restoring-rancher/_index.md b/content/rancher/v2.x/en/backups/restoring-rancher/_index.md
new file mode 100644
index 00000000000..c7c07b8a9dd
--- /dev/null
+++ b/content/rancher/v2.x/en/backups/restoring-rancher/_index.md
@@ -0,0 +1,52 @@
---
title: Restoring Rancher
weight: 2
---

A restore is performed by creating a Restore custom resource.

> **Important**
* Follow the instructions from this page for restoring Rancher on the same cluster where it was backed up from. In order to migrate Rancher to a new cluster, follow the steps to [migrate Rancher.](../migrating-rancher)
* When restoring Rancher on the same setup, the operator will scale down the Rancher deployment when the restore starts, and it will scale the deployment back up once the restore completes. So Rancher will be unavailable during the restore.

### Create the Restore Custom Resource

1. In the **Cluster Explorer,** go to the dropdown menu in the upper left corner and click **Rancher Backups.**
1. Click **Restore.**
1. Create the Restore with the form, or with YAML. For creating the Restore resource using the form, refer to the [configuration reference](../configuration/restore-config) and to the [examples.](../examples/#restore)
1. To use the YAML editor, click **Create > Create from YAML** and enter the Restore YAML.

    ```yaml
    apiVersion: resources.cattle.io/v1
    kind: Restore
    metadata:
      name: restore-migration
    spec:
      backupFilename: backup-b0450532-cee1-4aa1-a881-f5f48a007b1c-2020-09-15T07-27-09Z.tar.gz
      encryptionConfigSecretName: encryptionconfig
      storageLocation:
        s3:
          credentialSecretName: s3-creds
          credentialSecretNamespace: default
          bucketName: rancher-backups
          folder: rancher
          region: us-west-2
          endpoint: s3.us-west-2.amazonaws.com
    ```

    For help configuring the Restore, refer to the [configuration reference](../configuration/restore-config) and to the [examples.](../examples/#restore)

1. Click **Create.**

**Result:** The `rancher-backup` operator scales down the Rancher deployment during the restore, and scales it back up once the restore completes. The resources are restored in this order:

1. Custom Resource Definitions (CRDs)
2. Cluster-scoped resources
3. Namespaced resources

To check how the restore is progressing, you can check the logs of the operator.
Follow these steps to get the logs:

```
kubectl get pods -n cattle-resources-system
kubectl logs <rancher-backup-pod-name> -n cattle-resources-system -f
```
\ No newline at end of file
diff --git a/content/rancher/v2.x/en/best-practices/_index.md b/content/rancher/v2.x/en/best-practices/_index.md
index 41bbb4cc9c4..0894996d4c8 100644
--- a/content/rancher/v2.x/en/best-practices/_index.md
+++ b/content/rancher/v2.x/en/best-practices/_index.md
@@ -1,6 +1,6 @@
 ---
 title: Best Practices Guide
-weight: 1000
+weight: 4
 ---

 The purpose of this section is to consolidate best practices for Rancher implementations. This also includes recommendations for related technologies, such as Kubernetes, Docker, containers, and more. The objective is to improve the outcome of a Rancher implementation using the operational experience of Rancher and its customers.
diff --git a/content/rancher/v2.x/en/cis-scans/_index.md b/content/rancher/v2.x/en/cis-scans/_index.md
new file mode 100644
index 00000000000..dbf6d64b863
--- /dev/null
+++ b/content/rancher/v2.x/en/cis-scans/_index.md
@@ -0,0 +1,236 @@
---
title: CIS Scans
weight: 18
---

_Available as of v2.4.0_

Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark.

The `rancher-cis-benchmark` app leverages kube-bench, an open-source tool from Aqua Security, to check clusters for CIS Kubernetes Benchmark compliance. To generate a cluster-wide report, the application uses Sonobuoy for report aggregation.

> The CIS scan feature was improved in Rancher v2.5. If you are using Rancher v2.4, refer to the older version of the CIS scan documentation [here.](./legacy)

- [Changes in Rancher v2.5](#changes-in-rancher-v2-5)
- [About the CIS Benchmark](#about-the-cis-benchmark)
- [Installing rancher-cis-benchmark](#installing-rancher-cis-benchmark)
- [Uninstalling rancher-cis-benchmark](#uninstalling-rancher-cis-benchmark)
- [Running a Scan](#running-a-scan)
- [Skipping Tests](#skipping-tests)
- [Viewing Reports](#viewing-reports)
- [About the generated report](#about-the-generated-report)
- [Test Profiles](#test-profiles)
- [About Skipped and Not Applicable Tests](#about-skipped-and-not-applicable-tests)
- [Roles-based access control](./rbac)
- [Configuration](./configuration)

### Changes in Rancher v2.5

We now support running CIS scans on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE. Previously, CIS scans were supported only on RKE Kubernetes clusters.

In Rancher v2.4, the CIS scan tool was available from the **cluster manager** in the Rancher UI. Now it is available in the **Cluster Explorer** and it can be enabled and deployed using a Helm chart. It can be installed from the Rancher UI, but it can also be installed independently of Rancher. It deploys a CIS scan operator for the cluster, and deploys Kubernetes custom resources for cluster scans. The custom resources can be managed directly from the **Cluster Explorer.**

In v1 of the CIS scan tool, which was available in Rancher v2.4 through the cluster manager, recurring scans could be scheduled. The ability to schedule recurring scans is not yet available in Rancher v2.5.

Support for alerting on the cluster scan results is not yet available in Rancher v2.5.

More test profiles were added. In Rancher v2.4, permissive and hardened profiles were included.
In Rancher v2.5, the following profiles are available:

- Generic CIS 1.5
- RKE permissive
- RKE hardened
- EKS
- GKE

The default profile depends on the type of cluster that will be scanned:

- For RKE Kubernetes clusters, the RKE permissive profile is the default.
- EKS and GKE have their own CIS Benchmarks published by `kube-bench`. The corresponding test profiles are used by default for those clusters.
- For cluster types other than RKE, EKS and GKE, the Generic CIS 1.5 profile will be used by default.

The `rancher-cis-benchmark` application currently supports the CIS 1.5 Benchmark version.

> **Note:** CIS v1 cannot run on a cluster when CIS v2 is deployed. In other words, after `rancher-cis-benchmark` is installed, you can't run scans by going to the Cluster Manager view in the Rancher UI and clicking **Tools > CIS Scans.**

# About the CIS Benchmark

The Center for Internet Security is a 501(c)(3) nonprofit organization, formed in October 2000, with a mission to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace". The organization is headquartered in East Greenbush, New York, with members including large corporations, government agencies, and academic institutions.

CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team.

The official Benchmark documents are available through the CIS website. The sign-up form to access the documents is here.

# Installing rancher-cis-benchmark

The application can be installed with the Rancher UI or with Helm.

### Installing with the Rancher UI

1. In the Rancher UI, go to the **Cluster Explorer.**
1. Click **Apps.**
1. Click `rancher-cis-benchmark`.
1. Click **Install.**

**Result:** The CIS scan application is deployed on the Kubernetes cluster.

### Installing with Helm

There are two Helm charts for the application:

- `rancher-cis-benchmark-crds`, the custom resource definition chart
- `rancher-cis-benchmark`, the chart deploying rancher/cis-operator

To install the charts, run the following commands:
```
helm repo add rancherchart https://charts.rancher.io
helm repo update
helm install rancher-cis-benchmark-crd --kubeconfig <path-to-kubeconfig> rancherchart/rancher-cis-benchmark-crd --create-namespace -n cis-operator-system
helm install rancher-cis-benchmark --kubeconfig <path-to-kubeconfig> rancherchart/rancher-cis-benchmark -n cis-operator-system
```

# Uninstalling rancher-cis-benchmark

The application can be uninstalled with the Rancher UI or with Helm.

### Uninstalling with the Rancher UI

1. From the **Cluster Explorer,** go to the top left dropdown menu and click **Apps & Marketplace.**
1. Click **Installed Apps.**
1. Go to the `cis-operator-system` namespace and check the boxes next to `rancher-cis-benchmark-crd` and `rancher-cis-benchmark`.
1. Click **Delete** and confirm **Delete.**

**Result:** The `rancher-cis-benchmark` application is uninstalled.
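Whichever route you take, you can confirm what is still deployed by listing the Helm releases in the application's namespace; this is generic Helm usage rather than a step from the procedure above, and an empty list means both charts have been removed:

```
helm ls -n cis-operator-system
```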
### Uninstalling with Helm

Run the following commands:

```
helm uninstall rancher-cis-benchmark -n cis-operator-system
helm uninstall rancher-cis-benchmark-crd -n cis-operator-system
```

# Running a Scan

When a ClusterScan custom resource is created, it launches a new CIS scan on the cluster for the chosen ClusterScanProfile.

Note: Currently, only one CIS scan can run at a time on a cluster. If you create multiple ClusterScan custom resources, they will be run one after the other by the operator, and until one scan finishes, the rest of the ClusterScan custom resources will be in the "Pending" state.

To run a scan,

1. Go to the **Cluster Explorer** in the Rancher UI. In the top left dropdown menu, click **Cluster Explorer > CIS Benchmark.**
1. In the **Scans** section, click **Create.**
1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
1. Click **Create.**

**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears.

# Skipping Tests

CIS scans can be run using test profiles with user-defined skips.

To skip tests, you will create a custom CIS scan profile. A profile contains the configuration for the CIS scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark.

1. In the **Cluster Explorer,** go to the top-left dropdown menu and click **CIS Benchmark.**
1. Click **Profiles.**
1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To make a new profile based on an existing profile, go to the existing profile, click the three vertical dots, and click **Clone as YAML.** If you are filling out the form, add the tests to skip using the test IDs, using the relevant CIS Benchmark as a reference. If you are creating the new test profile as YAML, you will add the IDs of the tests to skip in the `skipTests` directive. You will also give the profile a name:

    ```yaml
    apiVersion: cis.cattle.io/v1
    kind: ClusterScanProfile
    metadata:
      annotations:
        meta.helm.sh/release-name: clusterscan-operator
        meta.helm.sh/release-namespace: cis-operator-system
      labels:
        app.kubernetes.io/managed-by: Helm
      name: "<example-profile-name>"
    spec:
      benchmarkVersion: cis-1.5
      skipTests:
        - "1.1.20"
        - "1.1.21"
    ```
1. Click **Create.**

**Result:** A new CIS scan profile is created.

When you [run a scan](#running-a-scan) that uses this profile, the defined tests will be skipped during the scan. The skipped tests will be marked in the generated report as `Skip`.
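If you prefer the command line, a ClusterScanProfile written as YAML like the one above can also be saved to a file and applied directly; this is generic kubectl usage rather than a step from the original procedure, and the filename is illustrative:

```
kubectl apply -f my-clusterscanprofile.yaml
```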
# Viewing Reports

To view the generated CIS scan reports,

1. In the **Cluster Explorer,** go to the top left dropdown menu and click **Cluster Explorer > CIS Benchmark.**
1. The **Scans** page will show the generated reports. To see a detailed report, go to a scan report and click the name.

You can download the report from the Scans list or from the scan detail page.

# About the Generated Report

Each scan generates a report that can be viewed in the Rancher UI and can be downloaded in CSV format.

In Rancher v2.5, the scan will use the CIS Benchmark v1.5. The Benchmark version is included in the generated report.

The Benchmark provides recommendations of two types: Scored and Not Scored. Recommendations marked as Not Scored in the Benchmark are not included in the generated report.

Some tests are designated as "Not Applicable." These tests will not be run on any CIS scan because of the way that Rancher provisions RKE clusters. For information on how test results can be audited, and why some tests are designated to be not applicable, refer to Rancher's self-assessment guide for the corresponding Kubernetes version.

The report contains the following information:

| Column in Report | Description |
|------------------|-------------|
| `id` | The ID number of the CIS Benchmark. |
| `description` | The description of the CIS Benchmark test. |
| `remediation` | What needs to be fixed in order to pass the test. |
| `state` | Indicates if the test passed, failed, was skipped, or was not applicable. |
| `node_type` | The node role, which affects which tests are run on the node. Master tests are run on controlplane nodes, etcd tests are run on etcd nodes, and node tests are run on the worker nodes. |
| `audit` | This is the audit check that `kube-bench` runs for this test. |
| `audit_config` | Any configuration applicable to the audit script. |
| `test_info` | Test-related info as reported by `kube-bench`, if any. |
| `commands` | Test-related commands as reported by `kube-bench`, if any. |
| `config_commands` | Test-related configuration data as reported by `kube-bench`, if any. |
| `actual_value` | The test's actual value, present if reported by `kube-bench`. |
| `expected_result` | The test's expected result, present if reported by `kube-bench`. |

Refer to the table in the cluster hardening guide for information on which versions of Kubernetes, the Benchmark, Rancher, and our cluster hardening guide correspond to each other. Also refer to the hardening guide for configuration files of CIS-compliant clusters and information on remediating failed tests.

# Test Profiles

The following profiles are available:

- Generic CIS 1.5 (default)
- RKE permissive
- RKE hardened
- EKS
- GKE

You also have the ability to customize a profile by saving a set of tests to skip.

All profiles will have a set of not applicable tests that will be skipped during the CIS scan. These tests are not applicable based on how an RKE cluster manages Kubernetes.

There are two types of RKE cluster scan profiles:

- **Permissive:** This profile has a set of tests that will be skipped, as these tests fail on a default RKE Kubernetes cluster. Besides the list of skipped tests, the profile will also not run the not applicable tests.
- **Hardened:** This profile will not skip any tests, except for the not applicable tests.

The EKS and GKE cluster scan profiles are based on CIS Benchmark versions that are specific to those types of clusters.

In order to pass the "Hardened" profile, you will need to follow the steps on the hardening guide and use the `cluster.yml` defined in the hardening guide to provision a hardened cluster.

# About Skipped and Not Applicable Tests

For a list of skipped and not applicable tests, refer to this page.

For now, only user-defined skipped tests are marked as skipped in the generated report.

Any skipped tests that are defined as being skipped by one of the default profiles are marked as not applicable.

# Roles-based Access Control

For information about permissions, refer to this page.
# Configuration

For more information about configuring the custom resources for the scans, profiles, and benchmark versions, refer to this page.
diff --git a/content/rancher/v2.x/en/cis-scans/configuration/_index.md b/content/rancher/v2.x/en/cis-scans/configuration/_index.md
new file mode 100644
index 00000000000..c2b3629f838
--- /dev/null
+++ b/content/rancher/v2.x/en/cis-scans/configuration/_index.md
@@ -0,0 +1,94 @@
---
title: Configuration
weight: 3
---

This configuration reference is intended to help you manage the custom resources created by the `rancher-cis-benchmark` application. These resources are used for performing CIS scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization.

To configure the custom resources, go to the **Cluster Explorer** in the Rancher UI. In the dropdown menu in the top left corner, click **Cluster Explorer > CIS Benchmark.**

### Scans

A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed.

When configuring a scan, you need to define the name of the scan profile that will be used with the `scanProfileName` directive.

An example ClusterScan custom resource is below:

```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
  name: rke-cis
spec:
  scanProfileName: rke-profile-hardened
```

### Profiles

A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark.

> By default, a few ClusterScanProfiles are installed as part of the `rancher-cis-benchmark` chart. If a user edits these default benchmarks or profiles, the next chart update will reset them. So it is advisable for users not to edit the default ClusterScanProfiles.

Users can clone the ClusterScanProfiles to create custom profiles.

Skipped tests are listed under the `skipTests` directive.

When you create a new profile, you will also need to give it a name.

An example `ClusterScanProfile` is below:

```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScanProfile
metadata:
  annotations:
    meta.helm.sh/release-name: clusterscan-operator
    meta.helm.sh/release-namespace: cis-operator-system
  labels:
    app.kubernetes.io/managed-by: Helm
  name: "<example-profile-name>"
spec:
  benchmarkVersion: cis-1.5
  skipTests:
    - "1.1.20"
    - "1.1.21"
```

### Benchmark Versions

A benchmark version is the name of the benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark.

A `ClusterScanBenchmark` defines the CIS `BenchmarkVersion` name and test configurations. The `BenchmarkVersion` name is a parameter provided to the `kube-bench` tool.

By default, a few `BenchmarkVersion` names and test configurations are packaged as part of the CIS scan application. When this feature is enabled, these default BenchmarkVersions will be automatically installed and available for users to create a ClusterScanProfile.

> If the default BenchmarkVersions are edited, the next chart update will reset them. Therefore we don't recommend editing the default ClusterScanBenchmarks.

A ClusterScanBenchmark consists of the following fields:

- `ClusterProvider`: This is the cluster provider name for which this benchmark is applicable. For example: RKE, EKS, GKE, etc. Leave it empty if this benchmark can be run on any cluster type.
- `MinKubernetesVersion`: Specifies the cluster's minimum Kubernetes version necessary to run this benchmark. Leave it empty if there is no dependency on a particular Kubernetes version.
- `MaxKubernetesVersion`: Specifies the cluster's maximum Kubernetes version necessary to run this benchmark. Leave it empty if there is no dependency on a particular Kubernetes version.

An example `ClusterScanBenchmark` is below:

```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScanBenchmark
metadata:
  annotations:
    meta.helm.sh/release-name: clusterscan-operator
    meta.helm.sh/release-namespace: cis-operator-system
  creationTimestamp: "2020-08-28T18:18:07Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: cis-1.5
  resourceVersion: "203878"
  selfLink: /apis/cis.cattle.io/v1/clusterscanbenchmarks/cis-1.5
  uid: 309e543e-9102-4091-be91-08d7af7fb7a7
spec:
  clusterProvider: ""
  minKubernetesVersion: 1.15.0
```
\ No newline at end of file
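Since benchmark versions are ordinary custom resources, they can also be listed from the command line; this is generic kubectl usage (the plural resource name follows from the `selfLink` shown in the example above):

```
kubectl get clusterscanbenchmarks
```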
diff --git a/content/rancher/v2.x/en/cis-scans/legacy/_index.md b/content/rancher/v2.x/en/cis-scans/legacy/_index.md
new file mode 100644
index 00000000000..155073640f1
--- /dev/null
+++ b/content/rancher/v2.x/en/cis-scans/legacy/_index.md
@@ -0,0 +1,157 @@
---
title: Cluster Manager CIS Scan (Deprecated)
shortTitle: Cluster Manager
weight: 1
---
_Available as of v2.4.0_

This section contains the legacy documentation for the CIS Scan tool that was released in Rancher v2.4, and was available under the **Tools** menu in the top navigation bar of the cluster manager.

As of Rancher v2.5, it is deprecated and replaced with the `rancher-cis-benchmark` application.

- [Prerequisites](#prerequisites)
- [Running a scan](#running-a-scan)
- [Scheduling recurring scans](#scheduling-recurring-scans)
- [Skipping tests](#skipping-tests)
- [Setting alerts](#setting-alerts)
- [Deleting a report](#deleting-a-report)
- [Downloading a report](#downloading-a-report)
- [List of skipped and not applicable tests](#list-of-skipped-and-not-applicable-tests)


# Prerequisites

To run security scans on a cluster and access the generated reports, you must be an [Administrator]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [Cluster Owner.]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/)

Rancher can only run security scans on clusters that were created with RKE, which includes custom clusters and clusters that Rancher created in an infrastructure provider such as Amazon EC2 or GCE. Imported clusters and clusters in hosted Kubernetes providers can't be scanned by Rancher.

The security scan cannot run in a cluster that has Windows nodes.

You will only be able to see the CIS scan reports for clusters that you have access to.

# Running a Scan

1. From the cluster view in Rancher, click **Tools > CIS Scans.**
1. Click **Run Scan.**
1. Choose a CIS scan profile.

**Result:** A report is generated and displayed in the **CIS Scans** page. To see details of the report, click the report's name.

# Scheduling Recurring Scans

Recurring scans can be scheduled to run on any RKE Kubernetes cluster.

To enable recurring scans, edit the advanced options in the cluster configuration during cluster creation or after the cluster has been created.

To schedule scans for an existing cluster:

1. Go to the cluster view in Rancher.
1. Click **Tools > CIS Scans.**
1. Click **Add Schedule.** This takes you to the section of the cluster editing page that is applicable to configuring a schedule for CIS scans. (This section can also be reached by going to the cluster view, clicking **⋮ > Edit,** and going to the **Advanced Options.**)
1. In the **CIS Scan Enabled** field, click **Yes.**
1. In the **CIS Scan Profile** field, choose a **Permissive** or **Hardened** profile. The corresponding CIS Benchmark version is included in the profile name. Note: Any skipped tests [defined in a separate ConfigMap](#skipping-tests) will be skipped regardless of whether a **Permissive** or **Hardened** profile is selected. When selecting the permissive profile, you should see which tests were skipped by Rancher (tests that are skipped by default for RKE clusters) and which tests were skipped by a Rancher user. In the hardened test profile, the only skipped tests are those skipped by users.
1. In the **CIS Scan Interval (cron)** job, enter a [cron expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) to define how often the cluster will be scanned.
1. In the **CIS Scan Report Retention** field, enter the number of past reports that should be kept.

**Result:** The security scan will run and generate reports at the scheduled intervals.

The test schedule can be configured in the `cluster.yml`:

```yaml
scheduled_cluster_scan:
  enabled: true
  scan_config:
    cis_scan_config:
      override_benchmark_version: rke-cis-1.4
      profile: permissive
  schedule_config:
    cron_schedule: 0 0 * * *
    retention: 24
```


# Skipping Tests

You can define a set of tests that will be skipped by the CIS scan when the next report is generated.

These tests will be skipped for subsequent CIS scans, including both manually triggered and scheduled scans, and the tests will be skipped with any profile.

The skipped tests will be listed alongside the test profile name in the cluster configuration options when a test profile is selected for a recurring cluster scan. The skipped tests will also be shown every time a scan is triggered manually from the Rancher UI by clicking **Run Scan.** The display of skipped tests allows you to know ahead of time which tests will be run in each scan.

To skip tests, you will need to define them in a Kubernetes ConfigMap resource. Each skipped CIS scan test is listed in the ConfigMap alongside the version of the CIS benchmark that the test belongs to.

To skip tests by editing a ConfigMap resource,

1. Create a `security-scan` namespace.
1. Create a ConfigMap named `security-scan-cfg`.
1. Enter the skip information under the key `config.json` in the following format:

    ```json
    {
      "skip": {
        "rke-cis-1.4": [
          "1.1.1",
          "1.2.2"
        ]
      }
    }
    ```

    In the example above, the CIS benchmark version is specified alongside the tests to be skipped for that version.

**Result:** These tests will be skipped on subsequent scans that use the defined CIS Benchmark version.
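The same namespace and ConfigMap can be created from the command line; this is generic kubectl usage rather than a step from the original procedure, and it assumes the JSON above has been saved locally as `config.json`:

```
kubectl create namespace security-scan
kubectl create configmap security-scan-cfg \
  --namespace security-scan \
  --from-file=config.json=./config.json
```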
# Setting Alerts

Rancher provides a set of alerts for cluster scans, which are not configured to have notifiers by default:

- A manual cluster scan was completed
- A manual cluster scan has failures
- A scheduled cluster scan was completed
- A scheduled cluster scan has failures

> **Prerequisite:** You need to configure a [notifier]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) before configuring, sending, or receiving alerts.

To activate an existing alert for a CIS scan result,

1. From the cluster view in Rancher, click **Tools > Alerts.**
1. Go to the section called **A set of alerts for cluster scans.**
1. Go to the alert you want to activate and click **⋮ > Activate.**
1. Go to the alert rule group **A set of alerts for cluster scans** and click **⋮ > Edit.**
1. Scroll down to the **Alert** section. In the **To** field, select the notifier that you would like to use for sending alert notifications.
1. Optional: To limit the frequency of the notifications, click on **Show advanced options** and configure the time interval of the alerts.
1. Click **Save.**

**Result:** The notifications will be triggered when a scan is run on a cluster and the conditions of the active alerts are satisfied.

To create a new alert,

1. Go to the cluster view and click **Tools > CIS Scans.**
1. Click **Add Alert.**
1. Fill out the form.
1. Enter a name for the alert.
1. In the **Is** field, set the alert to be triggered when a scan is completed or when a scan has a failure.
1. In the **Send a** field, set the alert as a **Critical,** **Warning,** or **Info** alert level.
1. Choose a [notifier]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) for the alert.

**Result:** The alert is created and activated. The notifications will be triggered when a scan is run on a cluster and the conditions of the active alerts are satisfied.

For more information about alerts, refer to [this page.]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/)

# Deleting a Report

1. From the cluster view in Rancher, click **Tools > CIS Scans.**
1. Go to the report that should be deleted.
1. Click the **⋮ > Delete.**
1. Click **Delete.**

# Downloading a Report

1. From the cluster view in Rancher, click **Tools > CIS Scans.**
1. Go to the report that you want to download. Click **⋮ > Download.**

**Result:** The report is downloaded in CSV format. For more information on each column, refer to the [section about the generated report.](#about-the-generated-report)

# List of Skipped and Not Applicable Tests

For a list of skipped and not applicable tests, refer to this page.
\ No newline at end of file
diff --git a/content/rancher/v2.x/en/cis-scans/legacy/skipped-tests/_index.md b/content/rancher/v2.x/en/cis-scans/legacy/skipped-tests/_index.md
new file mode 100644
index 00000000000..849e69019f6
--- /dev/null
+++ b/content/rancher/v2.x/en/cis-scans/legacy/skipped-tests/_index.md
@@ -0,0 +1,105 @@
---
title: Skipped and Not Applicable Tests
weight: 1
---

This section lists the tests that are skipped in the permissive test profile for RKE.

All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile.

- [CIS Benchmark v1.5](#cis-benchmark-v1-5)
- [CIS Benchmark v1.4](#cis-benchmark-v1-4)

# CIS Benchmark v1.5

### CIS Benchmark v1.5 Skipped Tests

| Number | Description | Reason for Skipping |
| ---------- | ------------- | --------- |
| 1.1.12 | Ensure that the etcd data directory ownership is set to etcd:etcd (Scored) | A system service account is required for etcd data directory ownership. Refer to Rancher's hardening guide for more details on how to configure this ownership. |
+| 1.2.6 | Ensure that the --kubelet-certificate-authority argument is set as appropriate (Scored) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
+| 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
+| 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
+| 1.2.34 | Ensure that encryption providers are appropriately configured (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
+| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true. |
+| 4.2.10 | Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
+| 5.1.5 | Ensure that default service accounts are not actively used. (Scored) | Kubernetes provides default service accounts to be used. |
+| 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
+| 5.2.3 | Minimize the admission of containers wishing to share the host IPC namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
+| 5.2.4 | Minimize the admission of containers wishing to share the host network namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
+| 5.2.5 | Minimize the admission of containers with allowPrivilegeEscalation (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
+| 5.3.2 | Ensure that all Namespaces have Network Policies defined (Scored) | Enabling Network Policies can prevent certain applications from communicating with each other. |
+| 5.6.4 | The default namespace should not be used (Scored) | Kubernetes provides a default namespace. |
+
+### CIS Benchmark v1.5 Not Applicable Tests
+
+| Number | Description | Reason for being not applicable |
+| ---------- | ------------- | --------- |
+| 1.1.1 | Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. |
+| 1.1.2 | Ensure that the API server pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. |
+| 1.1.3 | Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
+| 1.1.4 | Ensure that the controller manager pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
+| 1.1.5 | Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
+| 1.1.6 | Ensure that the scheduler pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
+| 1.1.7 | Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
+| 1.1.8 | Ensure that the etcd pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
+| 1.1.13 | Ensure that the admin.conf file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. |
+| 1.1.14 | Ensure that the admin.conf file ownership is set to root:root (Scored) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. |
+| 1.1.15 | Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
+| 1.1.16 | Ensure that the scheduler.conf file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
+| 1.1.17 | Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
+| 1.1.18 | Ensure that the controller-manager.conf file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
+| 1.3.6 | Ensure that the RotateKubeletServerCertificate argument is set to true (Scored) | Clusters provisioned by RKE handle certificate rotation directly through RKE. |
+| 4.1.1 | Ensure that the kubelet service file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
+| 4.1.2 | Ensure that the kubelet service file ownership is set to root:root (Scored) | Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
+| 4.1.9 | Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
+| 4.1.10 | Ensure that the kubelet configuration file ownership is set to root:root (Scored) | Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
+| 4.2.12 | Ensure that the RotateKubeletServerCertificate argument is set to true (Scored) | Clusters provisioned by RKE handle certificate rotation directly through RKE. |
+
+# CIS Benchmark v1.4
+
+The skipped and not applicable tests for CIS Benchmark v1.4 are as follows:
+
+### CIS Benchmark v1.4 Skipped Tests
+
+Number | Description | Reason for Skipping
+---|---|---
+1.1.11 | "Ensure that the admission control plugin AlwaysPullImages is set (Scored)" | Enabling AlwaysPullImages can use significant bandwidth.
+1.1.21 | "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Scored)" | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.
+1.1.24 | "Ensure that the admission control plugin PodSecurityPolicy is set (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
+1.1.34 | "Ensure that the --encryption-provider-config argument is set as appropriate (Scored)" | Enabling encryption changes how data can be recovered as data is encrypted.
+1.1.35 | "Ensure that the encryption provider is set to aescbc (Scored)" | Enabling encryption changes how data can be recovered as data is encrypted.
+1.1.36 | "Ensure that the admission control plugin EventRateLimit is set (Scored)" | EventRateLimit needs to be tuned depending on the cluster.
+1.2.2 | "Ensure that the --address argument is set to 127.0.0.1 (Scored)" | Adding this argument prevents Rancher's monitoring tool from collecting metrics on the scheduler.
+1.3.7 | "Ensure that the --address argument is set to 127.0.0.1 (Scored)" | Adding this argument prevents Rancher's monitoring tool from collecting metrics on the controller manager.
+1.4.12 | "Ensure that the etcd data directory ownership is set to etcd:etcd (Scored)" | A system service account is required for etcd data directory ownership. Refer to Rancher's hardening guide for more details on how to configure this ownership.
+1.7.2 | "Do not admit containers wishing to share the host process ID namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
+1.7.3 | "Do not admit containers wishing to share the host IPC namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
+1.7.4 | "Do not admit containers wishing to share the host network namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
+1.7.5 | "Do not admit containers with allowPrivilegeEscalation (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
+2.1.6 | "Ensure that the --protect-kernel-defaults argument is set to true (Scored)" | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true.
+2.1.10 | "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored)" | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.
+
+### CIS Benchmark v1.4 Not Applicable Tests
+
+Number | Description | Reason for being not applicable
+---|---|---
+1.1.9 | "Ensure that the --repair-malformed-updates argument is set to false (Scored)" | The argument --repair-malformed-updates has been removed as of Kubernetes version 1.14.
+1.3.6 | "Ensure that the RotateKubeletServerCertificate argument is set to true" | Clusters provisioned by RKE handle certificate rotation directly through RKE.
+1.4.1 | "Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Scored)" | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver.
+1.4.2 | "Ensure that the API server pod specification file ownership is set to root:root (Scored)" | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver.
+1.4.3 | "Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Scored)" | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager.
+1.4.4 | "Ensure that the controller manager pod specification file ownership is set to root:root (Scored)" | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager.
+1.4.5 | "Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Scored)" | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler.
+1.4.6 | "Ensure that the scheduler pod specification file ownership is set to root:root (Scored)" | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler.
+1.4.7 | "Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Scored)" | Clusters provisioned by RKE don't require or maintain a configuration file for etcd.
+1.4.8 | "Ensure that the etcd pod specification file ownership is set to root:root (Scored)" | Clusters provisioned by RKE don't require or maintain a configuration file for etcd.
+1.4.13 | "Ensure that the admin.conf file permissions are set to 644 or more restrictive (Scored)" | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes.
+1.4.14 | "Ensure that the admin.conf file ownership is set to root:root (Scored)" | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes.
+2.1.8 | "Ensure that the --hostname-override argument is not set (Scored)" | Clusters provisioned by RKE and most cloud providers require hostnames.
+2.1.12 | "Ensure that the --rotate-certificates argument is not set to false (Scored)" | Clusters provisioned by RKE handle certificate rotation directly through RKE.
+2.1.13 | "Ensure that the RotateKubeletServerCertificate argument is set to true (Scored)" | Clusters provisioned by RKE handle certificate rotation directly through RKE.
+2.2.3 | "Ensure that the kubelet service file permissions are set to 644 or more restrictive (Scored)" | Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service.
+2.2.4 | "Ensure that the kubelet service file ownership is set to root:root (Scored)" | Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service.
+2.2.9 | "Ensure that the kubelet configuration file ownership is set to root:root (Scored)" | RKE doesn’t require or maintain a configuration file for the kubelet.
+2.2.10 | "Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Scored)" | RKE doesn’t require or maintain a configuration file for the kubelet.
diff --git a/content/rancher/v2.x/en/cis-scans/rbac/_index.md b/content/rancher/v2.x/en/cis-scans/rbac/_index.md
new file mode 100644
index 00000000000..d3800df3559
--- /dev/null
+++ b/content/rancher/v2.x/en/cis-scans/rbac/_index.md
@@ -0,0 +1,45 @@
+---
+title: Role-based Access Control
+shortTitle: RBAC
+weight: 3
+---
+
+This section describes the permissions required to use the rancher-cis-benchmark App.
+
+The rancher-cis-benchmark App is a cluster-admin only feature by default.
+
+However, the `rancher-cis-benchmark` chart installs three default `ClusterRoles`:
+- cis-admin
+- cis-edit
+- cis-view
+
+In Rancher, only cluster owners and global administrators have `cis-admin` access by default.
+
+# Cluster-Admin Access
+
+Rancher CIS Scans is a cluster-admin only feature by default.
+This means that only Rancher global admins and the cluster’s cluster-owner can:
+
+- Install/Uninstall the rancher-cis-benchmark App
+- See the navigation links for CIS Benchmark CRDs - ClusterScanBenchmarks, ClusterScanProfiles, ClusterScans
+- List the default ClusterScanBenchmarks and ClusterScanProfiles
+- Create/Edit/Delete new ClusterScanProfiles
+- Create/Edit/Delete a new ClusterScan to run the CIS scan on the cluster
+- View and Download the ClusterScanReport created after the ClusterScan is complete
+
+
+# Summary of Default Permissions for Kubernetes Default Roles
+
+The rancher-cis-benchmark chart creates three `ClusterRoles` and adds the CIS Benchmark CRD access to the following default K8s `ClusterRoles`:
+
+| ClusterRole created by chart | Default K8s ClusterRole | Permissions given with Role |
+| ------------------------------| ---------------------------| ---------------------------|
+| `cis-admin` | `admin`| Ability to CRUD clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CRs
+| `cis-edit`| `edit` | Ability to CRUD clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CRs
+| `cis-view` | `view` | Ability to List(R) clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CRs
+
+By default, only the cluster-owner role has the ability to manage and use the `rancher-cis-benchmark` feature.
+
+The other Rancher roles (cluster-member, project-owner, project-member) do not have default permissions to manage and use rancher-cis-benchmark resources.
+
+However, if a cluster-owner wants to delegate access to other users, they can do so by manually creating ClusterRoleBindings between these users and the CIS ClusterRoles, as in the sketch below.
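+
+For example, a minimal sketch of such a ClusterRoleBinding, which would grant a hypothetical user named `jsmith` read-only access through `cis-view`:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: cis-view-jsmith # hypothetical binding name
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: cis-view # one of the ClusterRoles installed by the chart
+subjects:
+- apiGroup: rbac.authorization.k8s.io
+  kind: User
+  name: jsmith # hypothetical user
+```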
diff --git a/content/rancher/v2.x/en/cis-scans/skipped-tests/_index.md b/content/rancher/v2.x/en/cis-scans/skipped-tests/_index.md
new file mode 100644
index 00000000000..feaf42e27b9
--- /dev/null
+++ b/content/rancher/v2.x/en/cis-scans/skipped-tests/_index.md
@@ -0,0 +1,54 @@
+---
+title: Skipped and Not Applicable Tests
+weight: 3
+---
+
+This section lists the tests that are skipped in the permissive test profile for RKE.
+
+> All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count includes only the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile.
+
+# CIS Benchmark v1.5
+
+### CIS Benchmark v1.5 Skipped Tests
+
+| Number | Description | Reason for Skipping |
+| ---------- | ------------- | --------- |
+| 1.1.12 | Ensure that the etcd data directory ownership is set to etcd:etcd (Scored) | A system service account is required for etcd data directory ownership. Refer to Rancher's hardening guide for more details on how to configure this ownership. |
+| 1.2.6 | Ensure that the --kubelet-certificate-authority argument is set as appropriate (Scored) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
+| 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
+| 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
+| 1.2.34 | Ensure that encryption providers are appropriately configured (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
+| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true. |
+| 4.2.10 | Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
+| 5.1.5 | Ensure that default service accounts are not actively used. (Scored) | Kubernetes provides default service accounts to be used. |
+| 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
+| 5.2.3 | Minimize the admission of containers wishing to share the host IPC namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
+| 5.2.4 | Minimize the admission of containers wishing to share the host network namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
+| 5.2.5 | Minimize the admission of containers with allowPrivilegeEscalation (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
+| 5.3.2 | Ensure that all Namespaces have Network Policies defined (Scored) | Enabling Network Policies can prevent certain applications from communicating with each other. |
+| 5.6.4 | The default namespace should not be used (Scored) | Kubernetes provides a default namespace. |
+
+### CIS Benchmark v1.5 Not Applicable Tests
+
+| Number | Description | Reason for being not applicable |
+| ---------- | ------------- | --------- |
+| 1.1.1 | Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. |
+| 1.1.2 | Ensure that the API server pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. |
+| 1.1.3 | Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
+| 1.1.4 | Ensure that the controller manager pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
+| 1.1.5 | Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
+| 1.1.6 | Ensure that the scheduler pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
+| 1.1.7 | Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
+| 1.1.8 | Ensure that the etcd pod specification file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
+| 1.1.13 | Ensure that the admin.conf file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. |
+| 1.1.14 | Ensure that the admin.conf file ownership is set to root:root (Scored) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. |
+| 1.1.15 | Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
+| 1.1.16 | Ensure that the scheduler.conf file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
+| 1.1.17 | Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
+| 1.1.18 | Ensure that the controller-manager.conf file ownership is set to root:root (Scored) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
+| 1.3.6 | Ensure that the RotateKubeletServerCertificate argument is set to true (Scored) | Clusters provisioned by RKE handle certificate rotation directly through RKE. |
+| 4.1.1 | Ensure that the kubelet service file permissions are set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
+| 4.1.2 | Ensure that the kubelet service file ownership is set to root:root (Scored) | Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
+| 4.1.9 | Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Scored) | Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
+| 4.1.10 | Ensure that the kubelet configuration file ownership is set to root:root (Scored) | Clusters provisioned by RKE don’t require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
+| 4.2.12 | Ensure that the RotateKubeletServerCertificate argument is set to true (Scored) | Clusters provisioned by RKE handle certificate rotation directly through RKE. |
\ No newline at end of file
diff --git a/content/rancher/v2.x/en/cli/_index.md b/content/rancher/v2.x/en/cli/_index.md
index dd4d656fd19..69d6f7a1805 100644
--- a/content/rancher/v2.x/en/cli/_index.md
+++ b/content/rancher/v2.x/en/cli/_index.md
@@ -3,7 +3,7 @@
title: Using the Rancher Command Line Interface
description: The Rancher CLI is a unified tool that you can use to interact with Rancher. With it, you can operate Rancher using a command line interface rather than the GUI
metaTitle: "Using the Rancher Command Line Interface "
metaDescription: "The Rancher CLI is a unified tool that you can use to interact with Rancher. With it, you can operate Rancher using a command line interface rather than the GUI"
-weight: 6000
+weight: 21
---

The Rancher CLI (Command Line Interface) is a unified tool that you can use to interact with Rancher. With this tool, you can operate Rancher using a command line rather than the GUI.
diff --git a/content/rancher/v2.x/en/cluster-admin/_index.md b/content/rancher/v2.x/en/cluster-admin/_index.md
index 59615cb48a2..022d90a8da5 100644
--- a/content/rancher/v2.x/en/cluster-admin/_index.md
+++ b/content/rancher/v2.x/en/cluster-admin/_index.md
@@ -1,6 +1,6 @@
---
title: Cluster Administration
-weight: 2005
+weight: 8
---

After you provision a cluster in Rancher, you can begin using powerful Kubernetes features to deploy and scale your containerized applications in development, testing, or production environments.
diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/_index.md
index 145e647c331..cdc9bfaef3b 100644
--- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/_index.md
+++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/_index.md
@@ -24,7 +24,7 @@ For attaching existing persistent storage to a cluster, the cloud provider does

The overall workflow for setting up existing storage is as follows:

-1. Set up persistent storage in an infrastructure provider.
+1. Set up your persistent storage.
This may be storage in an infrastructure provider, or it could be your own storage. 2. Add a persistent volume (PV) that refers to the persistent storage. 3. Add a persistent volume claim (PVC) that refers to the PV. 4. Mount the PVC as a volume in your workload. @@ -35,12 +35,22 @@ For details and prerequisites, refer to [this page.](./attaching-existing-storag The overall workflow for provisioning new storage is as follows: -1. Add a storage class and configure it to use your storage provider. +1. Add a StorageClass and configure it to use your storage provider. The StorageClass could refer to storage in an infrastructure provider, or it could refer to your own storage. 2. Add a persistent volume claim (PVC) that refers to the storage class. 3. Mount the PVC as a volume for your workload. For details and prerequisites, refer to [this page.](./provisioning-new-storage) +### Longhorn Storage + +[Longhorn](https://longhorn.io/) is a lightweight, reliable and easy-to-use distributed block storage system for Kubernetes. + +Longhorn is free, open source software. Originally developed by Rancher Labs, it is now being developed as a sandbox project of the Cloud Native Computing Foundation. It can be installed on any Kubernetes cluster with Helm, with kubectl, or with the Rancher UI. + +If you have a pool of block storage, Longhorn can help you provide persistent storage to your Kubernetes cluster without relying on cloud providers. For more information about Longhorn features, refer to the [documentation.](https://longhorn.io/docs/1.0.2/what-is-longhorn/) + +Rancher v2.5 simplified the process of installing Longhorn on a Rancher-managed cluster. For more information, see [this page.]({{}}/rancher/v2.x/en/longhorn) + ### Provisioning Storage Examples We provide examples of how to provision storage with [NFS,](./examples/nfs) [vSphere,](./examples/vsphere) and [Amazon's EBS.](./examples/ebs) diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/_index.md index d85a4e9ad3d..0044a07d13b 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/_index.md @@ -9,7 +9,7 @@ This section describes how to set up existing persistent storage for workloads i To set up storage, follow these steps: -1. [Set up persistent storage in an infrastructure provider.](#1-set-up-persistent-storage-in-an-infrastructure-provider) +1. [Set up persistent storage.](#1-set-up-persistent-storage) 2. [Add a persistent volume that refers to the persistent storage.](#2-add-a-persistent-volume-that-refers-to-the-persistent-storage) 3. [Add a persistent volume claim that refers to the persistent volume.](#3-add-a-persistent-volume-claim-that-refers-to-the-persistent-volume) 4. [Mount the persistent volume claim as a volume in your workload.](#4-mount-the-persistent-storage-claim-as-a-volume-in-your-workload) @@ -19,11 +19,13 @@ To set up storage, follow these steps: - To create a persistent volume as a Kubernetes resource, you must have the `Manage Volumes` [role.]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-role-reference) - If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider. -### 1. Set up persistent storage in an infrastructure provider +### 1. 
Set up persistent storage Creating a persistent volume in Rancher will not create a storage volume. It only creates a Kubernetes resource that maps to an existing volume. Therefore, before you can create a persistent volume as a Kubernetes resource, you must have storage provisioned. -The steps to set up a persistent storage device will differ based on your infrastructure. We provide examples of how to set up storage using [vSphere,](../examples/vsphere) [NFS,](../examples/nfs) or Amazon's [EBS.](../examples/ebs) +The steps to set up a persistent storage device will differ based on your infrastructure. We provide examples of how to set up storage using [vSphere,](../examples/vsphere) [NFS,](../examples/nfs) or Amazon's [EBS.](../examples/ebs) + +If you have a pool of block storage, and you don't want to use a cloud provider, Longhorn could help you provide persistent storage to your Kubernetes cluster. For more information, see [this page.]({{}}/rancher/v2.x/en/longhorn) ### 2. Add a persistent volume that refers to the persistent storage diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md index 50f33cce160..fef9bfcda05 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md @@ -5,11 +5,15 @@ weight: 2 This section describes how to provision new persistent storage for workloads in Rancher. -> This section assumes that you understand the Kubernetes concepts of storage classes and persistent volume claims. For more information, refer to the section on [how storage works.](../how-storage-works) +This section assumes that you understand the Kubernetes concepts of storage classes and persistent volume claims. For more information, refer to the section on [how storage works.](../how-storage-works) + +New storage is often provisioned by a cloud provider such as Amazon EBS. However, new storage doesn't have to be in the cloud. + +If you have a pool of block storage, and you don't want to use a cloud provider, Longhorn could help you provide persistent storage to your Kubernetes cluster. For more information, see [this page.]({{}}/rancher/v2.x/en/longhorn) To provision new storage for your workloads, follow these steps: -1. [Add a storage class and configure it to use your storage provider.](#1-add-a-storage-class-and-configure-it-to-use-your-storage-provider) +1. [Add a storage class and configure it to use your storage.](#1-add-a-storage-class-and-configure-it-to-use-your-storage) 2. [Add a persistent volume claim that refers to the storage class.](#2-add-a-persistent-volume-claim-that-refers-to-the-storage-class) 3. [Mount the persistent volume claim as a volume for your workload.](#3-mount-the-persistent-volume-claim-as-a-volume-for-your-workload) @@ -36,7 +40,7 @@ hostPath | `host-path` To use a storage provisioner that is not on the above list, you will need to use a [feature flag to enable unsupported storage drivers.]({{}}/rancher/v2.x/en/installation/options/feature-flags/enable-not-default-storage-drivers/) -### 1. Add a storage class and configure it to use your storage provider +### 1. Add a storage class and configure it to use your storage These steps describe how to set up a storage class at the cluster level. 
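+
+For orientation, a minimal sketch of what such a StorageClass manifest can look like, assuming the in-tree Amazon EBS provisioner; the name `example-ebs` and the parameters are hypothetical:
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: example-ebs # hypothetical name
+provisioner: kubernetes.io/aws-ebs # in-tree Amazon EBS provisioner
+parameters:
+  type: gp2 # EBS volume type
+reclaimPolicy: Delete # delete the backing volume when the PVC is released
+```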
diff --git a/content/rancher/v2.x/en/cluster-provisioning/_index.md b/content/rancher/v2.x/en/cluster-provisioning/_index.md index eb65528492b..31129b3e6cb 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/_index.md @@ -1,7 +1,7 @@ --- title: Setting up Kubernetes Clusters in Rancher description: Provisioning Kubernetes Clusters -weight: 2000 +weight: 7 aliases: - /rancher/v2.x/en/concepts/clusters/ - /rancher/v2.x/en/concepts/clusters/cluster-providers/ diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md index e2da323ee7a..5a84152ebef 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md @@ -1,6 +1,6 @@ --- title: Setting up Clusters from Hosted Kubernetes Providers -weight: 2100 +weight: 3 --- In this scenario, Rancher does not provision Kubernetes because it is installed by providers such as Google Kubernetes Engine (GKE), Amazon Elastic Container Service for Kubernetes, or Azure Kubernetes Service. diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md index 895e4a890d6..74d9e9ef038 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md @@ -8,13 +8,24 @@ aliases: Amazon EKS provides a managed control plane for your Kubernetes cluster. Amazon EKS runs the Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Rancher provides an intuitive user interface for managing and deploying the Kubernetes clusters you run in Amazon EKS. With this guide, you will use Rancher to quickly and easily launch an Amazon EKS Kubernetes cluster in your AWS account. For more information on Amazon EKS, see this [documentation](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html). +- [Prerequisites in Amazon Web Services](#prerequisites-in-amazon-web-services) + - [Amazon VPC](#amazon-vpc) + - [IAM Policies](#iam-policies) +- [Architecture](#architecture) +- [Create the EKS Cluster](#create-the-eks-cluster) +- [EKS Cluster Configuration Reference](#eks-cluster-configuration-reference) +- [Troubleshooting](#troubleshooting) +- [AWS Service Events](#aws-service-events) +- [Security and Compliance](#security-and-compliance) +- [Tutorial](#tutorial) +- [Minimum EKS Permissions](#minimum-eks-permissions) -## Prerequisites in Amazon Web Services +# Prerequisites in Amazon Web Services >**Note** >Deploying to Amazon AWS will incur charges. For more information, refer to the [EKS pricing page](https://aws.amazon.com/eks/pricing/). -To set up a cluster on EKS, you will need to set up an Amazon VPC (Virtual Private Cloud). You will also need to make sure that the account you will be using to create the EKS cluster has the appropriate permissions. For details, refer to the official guide on [Amazon EKS Prerequisites](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-prereqs). +To set up a cluster on EKS, you will need to set up an Amazon VPC (Virtual Private Cloud). 
You will also need to make sure that the account you will be using to create the EKS cluster has the appropriate [permissions.](#minimum-eks-permissions) For details, refer to the official guide on [Amazon EKS Prerequisites](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-prereqs). ### Amazon VPC @@ -26,7 +37,7 @@ Rancher needs access to your AWS account in order to provision and administer yo 1. Create a user with programmatic access by following the steps [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html). -2. Next, create an IAM policy that defines what this user has access to in your AWS account. The required permissions are [here.]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/#appendix-minimum-eks-permissions) Follow the steps [here](https://docs.aws.amazon.com/eks/latest/userguide/EKS_IAM_user_policies.html) to create an IAM policy and attach it to your user. +2. Next, create an IAM policy that defines what this user has access to in your AWS account. It's important to only grant this user minimal access within your account. The minimum permissions required for an EKS cluster are listed [here.](#minimum-eks-permissions) Follow the steps [here](https://docs.aws.amazon.com/eks/latest/userguide/EKS_IAM_user_policies.html) to create an IAM policy and attach it to your user. 3. Finally, follow the steps [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) to create an access key and secret key for this user. @@ -34,13 +45,15 @@ Rancher needs access to your AWS account in order to provision and administer yo For more detailed information on IAM policies for EKS, refer to the official [documentation on Amazon EKS IAM Policies, Roles, and Permissions](https://docs.aws.amazon.com/eks/latest/userguide/IAM_policies.html). -## Architecture +# Architecture The figure below illustrates the high-level architecture of Rancher 2.x. The figure depicts a Rancher Server installation that manages two Kubernetes clusters: one created by RKE and another created by EKS. -![Rancher architecture with EKS hosted cluster]({{}}/img/rancher/rancher-architecture.svg) +
Managing Kubernetes Clusters through Rancher's Authentication Proxy
-## Create the EKS Cluster +![Architecture]({{}}/img/rancher/rancher-architecture-rancher-api-server.svg) + +# Create the EKS Cluster Use Rancher to set up and configure your Kubernetes cluster. @@ -48,120 +61,279 @@ Use Rancher to set up and configure your Kubernetes cluster. 1. Choose **Amazon EKS**. -1. Enter a **Cluster Name**. +1. Enter a **Cluster Name.** 1. {{< step_create-cluster_member-roles >}} -1. Configure **Account Access** for the EKS cluster. Complete each drop-down and field using the information obtained in [2. Create Access Key and Secret Key](#prerequisites-in-amazon-web-services). - - | Setting | Description | - | ---------- | -------------------------------------------------------------------------------------------------------------------- | - | Region | From the drop-down choose the geographical region in which to build your cluster. | - | Access Key | Enter the access key that you created in [2. Create Access Key and Secret Key](#2-create-access-key-and-secret-key). | - | Secret Key | Enter the secret key that you created in [2. Create Access Key and Secret Key](#2-create-access-key-and-secret-key). | - -1. Click **Next: Select Service Role**. Then choose a [service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html). - - Service Role | Description - -------------|--------------------------- - Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster. - Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you're already created within AWS. For more information on creating a custom service role in AWS, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#create-service-linked-role). - -1. Click **Next: Select VPC and Subnet**. - -1. Choose an option for **Public IP for Worker Nodes**. Your selection for this option determines what options are available for **VPC & Subnet**. - - Option | Description - -------|------------ - Yes | When your cluster nodes are provisioned, they're assigned a both a private and public IP address. - No: Private IPs only | When your cluster nodes are provisioned, they're assigned only a private IP address.

If you choose this option, you must also choose a **VPC & Subnet** that allow your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane. - -1. Now choose a **VPC & Subnet**. For more information, refer to the AWS documentation for [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html). Follow one of the sets of instructions below based on your selection from the previous step. - - - [What Is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) - - [VPCs and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html) - - {{% accordion id="yes" label="Public IP for Worker Nodes—Yes" %}} -If you choose to assign a public IP address to your cluster's worker nodes, you have the option of choosing between a VPC that's automatically generated by Rancher (i.e., **Standard: Rancher generated VPC and Subnet**), or a VPC that you're already created with AWS (i.e., **Custom: Choose from your existing VPC and Subnets**). Choose the option that best fits your use case. - -1. Choose a **VPC and Subnet** option. - - Option | Description - -------|------------ - Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC and Subnet. - Custom: Choose from your exiting VPC and Subnets | While provisioning your cluster, Rancher configures your nodes to use a VPC and Subnet that you've already [created in AWS](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html). If you choose this option, complete the remaining steps below. - -1. If you're using **Custom: Choose from your existing VPC and Subnets**: - - (If you're using **Standard**, skip to [step 11](#select-instance-options)) - - 1. Make sure **Custom: Choose from your existing VPC and Subnets** is selected. - - 1. From the drop-down that displays, choose a VPC. - - 1. Click **Next: Select Subnets**. Then choose one of the **Subnets** that displays. - - 1. Click **Next: Select Security Group**. - {{% /accordion %}} - {{% accordion id="no" label="Public IP for Worker Nodes—No: Private IPs only" %}} -If you chose this option, you must also choose a **VPC & Subnet** that allow your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane. Follow the steps below. - ->**Tip:** When using only private IP addresses, you can provide your nodes internet access by creating a VPC constructed with two subnets, a private set and a public set. The private set should have its route tables configured to point toward a NAT in the public set. For more information on routing traffic from private subnets, please see the [official AWS documentation](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html). - - 1. From the drop-down that displays, choose a VPC. - - 1. Click **Next: Select Subnets**. Then choose one of the **Subnets** that displays. - - 1. Click **Next: Select Security Group**. - {{% /accordion %}} - -1. Choose a **Security Group**. See the documentation below on how to create one. 
-
- Amazon Documentation:
-
- - [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)
- - [Security Groups for Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)
- - [Create a Security Group](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html#getting-started-create-security-group)
-
-1. Click **Select Instance Options**, and then edit the node options available. Instance type and size of your worker nodes affects how many IP addresses each worker node will have available. See this [documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI) for more information.
-
- Option | Description
- -------|------------
- Instance Type | Choose the [hardware specs](https://aws.amazon.com/ec2/instance-types/) for the instance you're provisioning.
- Custom AMI Override | If you want to use a custom [Amazon Machine Image](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html#creating-an-ami) (AMI), specify it here. By default, Rancher will use the [EKS-optimized AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) for the EKS version that you chose.
- Desired ASG Size | The number of instances that your cluster will provision.
- User Data | Custom commands can to be passed to perform automated configuration tasks **WARNING: Modifying this may cause your nodes to be unable to join the cluster.** _Note: Available as of v2.2.0_
+1. Fill out the rest of the form. For help, refer to the [configuration reference.](#eks-cluster-configuration-reference)

1. Click **Create**.

{{< result_create-cluster >}}

-## Troubleshooting
+# EKS Cluster Configuration Reference
+
+### Changes in Rancher v2.5
+
+More EKS options can be configured when you create an EKS cluster in Rancher, including the following:
+
+- Managed node groups
+- Desired size, minimum size, maximum size (requires the Cluster Autoscaler to be installed)
+- Control plane logging
+- Secrets encryption with KMS
+
+The following capabilities have been added for configuring EKS clusters in Rancher:
+
+- GPU support
+- Exclusively use managed node groups that come with the most up-to-date AMIs
+- Add new nodes
+- Upgrade nodes
+- Add and remove node groups
+- Disable and enable private access
+- Add restrictions to public access
+- Use your cloud credentials to create the EKS cluster instead of passing in your access key and secret key
+
+Due to the way that the cluster data is synced with EKS, changes may be overwritten if the cluster is modified from another source, such as the EKS console, and from Rancher within the same five-minute span. For information about how the sync works and how to configure it, refer to [this section](#syncing).
+
+{{% tabs %}}
+{{% tab "Rancher v2.5+" %}}
+
+### Account Access
+
+Complete each drop-down and field using the information obtained for your [IAM policy.](#iam-policy)
+
+| Setting | Description |
+| ---------- | -------------------------------------------------------------------------------------------------------------------- |
+| Region | From the drop-down choose the geographical region in which to build your cluster. |
+| Cloud Credentials | Select the cloud credentials that you created for your [IAM policy.](#iam-policy) For more information on creating cloud credentials in Rancher, refer to [this page.]({{}}/rancher/v2.x/en/user-settings/cloud-credentials/) |
+
+### Service Role
+
+Choose a [service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html).
+
+Service Role | Description
+-------------|---------------------------
+Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster.
+Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you've already created within AWS. For more information on creating a custom service role in AWS, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#create-service-linked-role).
+
+### Secrets Encryption
+
+Optional: To encrypt secrets, select or enter a key created in [AWS Key Management Service (KMS).](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html)
+
+### API Server Endpoint Access
+
+Configuring Public/Private API access is an advanced use case. For details, refer to the EKS cluster endpoint access control [documentation.](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html)
+
+### Public Access Endpoints
+
+Optionally limit access to the public endpoint via explicit CIDR blocks.
+
+If you limit access to specific CIDR blocks, then it is recommended that you also enable private access to avoid losing network communication to the cluster.
+
+For Rancher to continue to communicate with the cluster, one of the following must be true:
+- Rancher's IP must be part of an allowed CIDR block
+- Private access should be enabled, and Rancher must share a subnet with the cluster and have network access to the cluster, which can be configured with a security group
+
+For more information about public and private access to the cluster endpoint, refer to the [Amazon EKS documentation.](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html)
+
+### Subnet
+
+| Option | Description |
+| ------- | ------------ |
+| Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC with 3 public subnets. |
+| Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your Control Plane and nodes to use a VPC and Subnet that you've already [created in AWS](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html). |
+
+For more information, refer to the AWS documentation for [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html). Follow one of the sets of instructions below based on your selection from the previous step.
+
+- [What Is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
+- [VPCs and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)
+
+### Security Group
+
+Amazon Documentation:
+
+- [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)
+- [Security Groups for Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)
+- [Create a Security Group](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html#getting-started-create-security-group)
+
+### Logging
+
+Configure the control plane logs to be sent to Amazon CloudWatch. You are charged the standard CloudWatch Logs data ingestion and storage costs for any logs sent to CloudWatch Logs from your clusters.
+
+Each log type corresponds to a component of the Kubernetes control plane. To learn more about these components, see [Kubernetes Components](https://kubernetes.io/docs/concepts/overview/components/) in the Kubernetes documentation.
+
+For more information on EKS control plane logging, refer to the official [documentation.](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)
+
+### Managed Node Groups
+
+Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
+
+For more information about how node groups work and how they are configured, refer to the [EKS documentation.](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html)
+
+Amazon will use the [EKS-optimized AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) for the Kubernetes version. You can configure whether the AMI is GPU-enabled.
+
+| Option | Description |
+| ------- | ------------ |
+| Instance Type | Choose the [hardware specs](https://aws.amazon.com/ec2/instance-types/) for the instance you're provisioning. |
+| Maximum ASG Size | The maximum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. |
+| Minimum ASG Size | The minimum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. |
+
+{{% /tab %}}
+{{% tab "Rancher prior to v2.5" %}}
+
+### Account Access
+
+Complete each drop-down and field using the information obtained for your [IAM policy.](#iam-policy)
+
+| Setting | Description |
+| ---------- | -------------------------------------------------------------------------------------------------------------------- |
+| Region | From the drop-down choose the geographical region in which to build your cluster. |
+| Access Key | Enter the access key that you created for your [IAM policy.](#iam-policy) |
+| Secret Key | Enter the secret key that you created for your [IAM policy.](#iam-policy) |
+
+### Service Role
+
+Choose a [service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html).
+
+Service Role | Description
+-------------|---------------------------
+Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster.
+Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you've already created within AWS. For more information on creating a custom service role in AWS, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#create-service-linked-role).
+
+### Public IP for Worker Nodes
+
+Your selection for this option determines what options are available for **VPC & Subnet**.
+
+Option | Description
+-------|------------
+Yes | When your cluster nodes are provisioned, they're assigned both a private and a public IP address.
+No: Private IPs only | When your cluster nodes are provisioned, they're assigned only a private IP address.

If you choose this option, you must also choose a **VPC & Subnet** that allows your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane.
+
+### VPC & Subnet
+
+The available options depend on the [public IP for worker nodes.](#public-ip-for-worker-nodes)
+
+Option | Description
+-------|------------
+Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC and Subnet.
+Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your nodes to use a VPC and Subnet that you've already [created in AWS](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html). If you choose this option, complete the remaining steps below.
+
+For more information, refer to the AWS documentation for [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html). Follow one of the sets of instructions below based on your selection from the previous step.
+
+- [What Is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
+- [VPCs and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)
+
+
+If you choose to assign a public IP address to your cluster's worker nodes, you have the option of choosing between a VPC that's automatically generated by Rancher (i.e., **Standard: Rancher generated VPC and Subnet**), or a VPC that you've already created with AWS (i.e., **Custom: Choose from your existing VPC and Subnets**). Choose the option that best fits your use case.
+
+{{% accordion id="yes" label="Click to expand" %}}
+
+If you're using **Custom: Choose from your existing VPC and Subnets**:
+
+(If you're using **Standard**, skip to the [instance options](#select-instance-options-2-4).)
+
+1. Make sure **Custom: Choose from your existing VPC and Subnets** is selected.
+
+1. From the drop-down that displays, choose a VPC.
+
+1. Click **Next: Select Subnets**. Then choose one of the **Subnets** that displays.
+
+1. Click **Next: Select Security Group**.
+{{% /accordion %}}
+
+If your worker nodes have Private IPs only, you must also choose a **VPC & Subnet** that allows your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane.
+{{% accordion id="no" label="Click to expand" %}}
+Follow the steps below.
+
+>**Tip:** When using only private IP addresses, you can provide your nodes internet access by creating a VPC constructed with two subnets, a private set and a public set. The private set should have its route tables configured to point toward a NAT in the public set. For more information on routing traffic from private subnets, please see the [official AWS documentation](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html).
+
+1. From the drop-down that displays, choose a VPC.
+
+1. Click **Next: Select Subnets**. Then choose one of the **Subnets** that displays.
+ +{{% /accordion %}} + +### Security Group + + + +Amazon Documentation: + +- [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) +- [Security Groups for Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) +- [Create a Security Group](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html#getting-started-create-security-group) + +### Instance Options + + + +The instance type and size of your worker nodes affect how many IP addresses each worker node will have available. See this [documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI) for more information. + +Option | Description +-------|------------ +Instance Type | Choose the [hardware specs](https://aws.amazon.com/ec2/instance-types/) for the instance you're provisioning. +Custom AMI Override | If you want to use a custom [Amazon Machine Image](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html#creating-an-ami) (AMI), specify it here. By default, Rancher will use the [EKS-optimized AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) for the EKS version that you chose. +Desired ASG Size | The number of instances that your cluster will provision. +User Data | Custom commands can be passed to perform automated configuration tasks. **WARNING: Modifying this may cause your nodes to be unable to join the cluster.** _Note: Available as of v2.2.0_ + +{{% /tab %}} +{{% /tabs %}} + + +# Troubleshooting + +If your changes were overwritten, it could be due to the way the cluster data is synced with EKS. Changes shouldn't be made to the cluster from both Rancher and another source, such as the EKS console, within the same five-minute span. For information on how this works and how to configure the refresh interval, refer to [Syncing.](#syncing) + +If an unauthorized error is returned while attempting to modify or register the cluster and the cluster was not created with the role or user that your credentials belong to, refer to [Security and Compliance.](#security-and-compliance) For any issues or troubleshooting details for your Amazon EKS Kubernetes cluster, please see this [documentation](https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html). -## AWS Service Events +# AWS Service Events To find information on any AWS Service events, please see [this page](https://status.aws.amazon.com/). -## Security and Compliance +# Security and Compliance + +By default, only the IAM user or role that created a cluster has access to it. Attempting to access the cluster with any other user or role without additional configuration will lead to an error. In Rancher, this means using a credential that maps to a user or role that was not used to create the cluster will cause an unauthorized error. For example, a cluster created with `eksctl` will not register in Rancher unless the credentials used to register the cluster match the role or user that `eksctl` used. Additional users and roles can be authorized to access a cluster by being added to the `aws-auth` ConfigMap in the kube-system namespace. For a more in-depth explanation and detailed instructions, please see this [documentation](https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/). For more information on security and compliance with your Amazon EKS Kubernetes cluster, please see this [documentation](https://docs.aws.amazon.com/eks/latest/userguide/shared-responsibilty.html). 
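To make the `aws-auth` mechanism concrete, here is a minimal sketch of a ConfigMap entry that grants an additional IAM role access to the cluster. The account ID and role name are placeholders; see the AWS documentation linked above for the authoritative format and for mapping IAM users as well:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Placeholder ARN; replace with the IAM role that should be authorized.
    - rolearn: arn:aws:iam::123456789012:role/eks-admin-role
      username: eks-admin-role
      # system:masters grants full cluster-admin access; use a more
      # restrictive group where appropriate.
      groups:
        - system:masters
```

Note that editing this ConfigMap requires credentials that already have access to the cluster, such as those of the IAM user or role that created it.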
-## Tutorial +# Tutorial This [tutorial](https://aws.amazon.com/blogs/opensource/managing-eks-clusters-rancher/) on the AWS Open Source Blog will walk you through how to set up an EKS cluster with Rancher, deploy a publicly accessible app to test the cluster, and deploy a sample project to track real-time geospatial data using a combination of other open-source software such as Grafana and InfluxDB. -## Appendix - Minimum EKS Permissions +# Minimum EKS Permissions -Documented here is a minimum set of permissions necessary to use all functionality of the EKS driver in Rancher. Additional permissions are required for Rancher to provision the `Service Role` and `VPC` resources. Optionally these resources can be created **before** the cluster creation and will be selectable when defining the cluster configuration. +Documented here is a minimum set of permissions necessary to use all functionality of the EKS driver in Rancher. -Resource | Description ---------|------------ -Service Role | The service role provides Kubernetes the permissions it requires to manage resources on your behalf. Rancher can create the service role with the following [Service Role Permissions](http://localhost:9001/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/#service-role-permissions). -VPC | Provides isolated network resouces utilised by EKS and worker nodes. Rancher can create the VPC resouces with the follwoing [VPC Permissions](http://localhost:9001/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/#vpc-permissions). - - -Resource targeting uses `*` as the ARN of many of the resources created cannot be known prior to creating the EKS cluster in Rancher. +Resource targeting uses `*` because the ARNs of many of the resources created cannot be known prior to creating the EKS cluster in Rancher. Some permissions (for example, `ec2:CreateVpc`) are only used in situations where Rancher handles the creation of certain resources. 
```json { @@ -171,41 +343,70 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b "Sid": "EC2Permisssions", "Effect": "Allow", "Action": [ - "ec2:RevokeSecurityGroupIngress", - "ec2:RevokeSecurityGroupEgress", - "ec2:DescribeVpcs", - "ec2:DescribeTags", - "ec2:DescribeSubnets", - "ec2:DescribeSecurityGroups", - "ec2:DescribeRouteTables", - "ec2:DescribeKeyPairs", - "ec2:DescribeInternetGateways", - "ec2:DescribeImages", - "ec2:DescribeAvailabilityZones", - "ec2:DescribeAccountAttributes", - "ec2:DeleteTags", - "ec2:DeleteSecurityGroup", - "ec2:DeleteKeyPair", - "ec2:CreateTags", - "ec2:CreateSecurityGroup", - "ec2:CreateKeyPair", "ec2:AuthorizeSecurityGroupIngress", - "ec2:AuthorizeSecurityGroupEgress" + "ec2:DeleteSubnet", + "ec2:CreateKeyPair", + "ec2:AttachInternetGateway", + "ec2:ReplaceRoute", + "ec2:DeleteRouteTable", + "ec2:AssociateRouteTable", + "ec2:DescribeInternetGateways", + "ec2:CreateRoute", + "ec2:CreateInternetGateway", + "ec2:RevokeSecurityGroupEgress", + "ec2:DescribeAccountAttributes", + "ec2:DeleteInternetGateway", + "ec2:DescribeKeyPairs", + "ec2:CreateTags", + "ec2:CreateRouteTable", + "ec2:DescribeRouteTables", + "ec2:DetachInternetGateway", + "ec2:DisassociateRouteTable", + "ec2:RevokeSecurityGroupIngress", + "ec2:DeleteVpc", + "ec2:CreateSubnet", + "ec2:DescribeSubnets", + "ec2:DeleteKeyPair", + "ec2:DeleteTags", + "ec2:CreateVpc", + "ec2:DescribeAvailabilityZones", + "ec2:CreateSecurityGroup", + "ec2:ModifyVpcAttribute", + "ec2:AuthorizeSecurityGroupEgress", + "ec2:DescribeTags", + "ec2:DeleteRoute", + "ec2:DescribeSecurityGroups", + "ec2:DescribeImages", + "ec2:DescribeVpcs", + "ec2:DeleteSecurityGroup" ], "Resource": "*" }, { - "Sid": "CloudFormationPermisssions", + "Sid": "EKSPermissions", "Effect": "Allow", "Action": [ - "cloudformation:ListStacks", - "cloudformation:ListStackResources", - "cloudformation:DescribeStacks", - "cloudformation:DescribeStackResources", - "cloudformation:DescribeStackResource", - "cloudformation:DeleteStack", - "cloudformation:CreateStackSet", - "cloudformation:CreateStack" + "eks:DeleteFargateProfile", + "eks:DescribeFargateProfile", + "eks:ListTagsForResource", + "eks:UpdateClusterConfig", + "eks:DescribeNodegroup", + "eks:ListNodegroups", + "eks:DeleteCluster", + "eks:CreateFargateProfile", + "eks:DeleteNodegroup", + "eks:UpdateNodegroupConfig", + "eks:DescribeCluster", + "eks:ListClusters", + "eks:UpdateClusterVersion", + "eks:UpdateNodegroupVersion", + "eks:ListUpdates", + "eks:CreateCluster", + "eks:UntagResource", + "eks:CreateNodegroup", + "eks:ListFargateProfiles", + "eks:DescribeUpdate", + "eks:TagResource" ], "Resource": "*" }, @@ -213,52 +414,52 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b "Sid": "IAMPermissions", "Effect": "Allow", "Action": [ - "iam:PassRole", - "iam:ListRoles", "iam:ListRoleTags", - "iam:ListInstanceProfilesForRole", - "iam:ListInstanceProfiles", - "iam:ListAttachedRolePolicies", - "iam:GetRole", - "iam:GetInstanceProfile", - "iam:DetachRolePolicy", - "iam:DeleteRole", + "iam:RemoveRoleFromInstanceProfile", "iam:CreateRole", - "iam:AttachRolePolicy" + "iam:AttachRolePolicy", + "iam:AddRoleToInstanceProfile", + "iam:DetachRolePolicy", + "iam:GetRole", + "iam:DeleteRole", + "iam:CreateInstanceProfile", + "iam:ListInstanceProfilesForRole", + "iam:PassRole", + "iam:GetInstanceProfile", + "iam:ListRoles", + "iam:ListInstanceProfiles", + "iam:DeleteInstanceProfile" ], "Resource": "*" }, { - "Sid": "KMSPermisssions", + "Sid": 
"CloudFormationPermisssions", "Effect": "Allow", - "Action": "kms:ListKeys", + "Action": [ + "cloudformation:DescribeStackResource", + "cloudformation:ListStackResources", + "cloudformation:DescribeStackResources", + "cloudformation:DescribeStacks", + "cloudformation:ListStacks", + "cloudformation:CreateStack" + ], "Resource": "*" }, { - "Sid": "EKSPermisssions", + "Sid": "AutoScalingPermissions", "Effect": "Allow", "Action": [ - "eks:UpdateNodegroupVersion", - "eks:UpdateNodegroupConfig", - "eks:UpdateClusterVersion", - "eks:UpdateClusterConfig", - "eks:UntagResource", - "eks:TagResource", - "eks:ListUpdates", - "eks:ListTagsForResource", - "eks:ListNodegroups", - "eks:ListFargateProfiles", - "eks:ListClusters", - "eks:DescribeUpdate", - "eks:DescribeNodegroup", - "eks:DescribeFargateProfile", - "eks:DescribeCluster", - "eks:DeleteNodegroup", - "eks:DeleteFargateProfile", - "eks:DeleteCluster", - "eks:CreateNodegroup", - "eks:CreateFargateProfile", - "eks:CreateCluster" + "autoscaling:DescribeAutoScalingGroups", + "autoscaling:UpdateAutoScalingGroup", + "autoscaling:TerminateInstanceInAutoScalingGroup", + "autoscaling:CreateOrUpdateTags", + "autoscaling:DeleteAutoScalingGroup", + "autoscaling:CreateAutoScalingGroup", + "autoscaling:DescribeAutoScalingInstances", + "autoscaling:DescribeLaunchConfigurations", + "autoscaling:DescribeScalingActivities", + "autoscaling:CreateLaunchConfiguration", + "autoscaling:DeleteLaunchConfiguration" ], "Resource": "*" } @@ -266,97 +467,29 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b } ``` -### Service Role Permissions +# Syncing -Rancher will create a service role with the following trust policy: +Syncing is the feature that causes Rancher to update its EKS clusters' values so they are up to date with their corresponding cluster object in the EKS console. This enables Rancher to not be the sole owner of an EKS cluster’s state. Its largest limitation is that processing an update from Rancher and another source at the same time or within 5 minutes of one finishing may cause the state from one source to completely overwrite the other. -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Action": "sts:AssumeRole", - "Principal": { - "Service": "eks.amazonaws.com" - }, - "Effect": "Allow", - "Sid": "" - } - ] -} -``` +### How it works -This role will also have two role policy attachments with the following policies ARNs: +There are two fields on the Rancher Cluster object that must be understood to understand how syncing works: -``` -arn:aws:iam::aws:policy/AmazonEKSClusterPolicy -arn:aws:iam::aws:policy/AmazonEKSServicePolicy -``` +1. EKSConfig which is located on the Spec of the Cluster. +2. UpstreamSpec which is located on the EKSStatus field on the Status of the Cluster. -Permissions required for Rancher to create service role on users behalf during the EKS cluster creation process. 
+Both are defined by the `EKSClusterConfigSpec` struct found in the eks-operator project: https://github.com/rancher/eks-operator/blob/master/pkg/apis/eks.cattle.io/v1/types.go -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "IAMPermisssions", - "Effect": "Allow", - "Action": [ - "iam:AddRoleToInstanceProfile", - "iam:AttachRolePolicy", - "iam:CreateInstanceProfile", - "iam:CreateRole", - "iam:CreateServiceLinkedRole", - "iam:DeleteInstanceProfile", - "iam:DeleteRole", - "iam:DetachRolePolicy", - "iam:GetInstanceProfile", - "iam:GetRole", - "iam:ListAttachedRolePolicies", - "iam:ListInstanceProfiles", - "iam:ListInstanceProfilesForRole", - "iam:ListRoles", - "iam:ListRoleTags", - "iam:PassRole", - "iam:RemoveRoleFromInstanceProfile" - ], - "Resource": "*" - } - ] -} -``` +All fields with the exception of DisplayName, AmazonCredentialSecret, Region, and Imported are nillable on the EKSClusterConfigSpec. -### VPC Permissions +The EKSConfig represents desired state for its non-nil values. Fields that are non-nil in the EKSConfig can be thought of as “managed.” When a cluster is created in Rancher, all fields are non-nil and therefore “managed”. When a pre-existing cluster is registered in Rancher, all nillable fields are nil and are not “managed”. Those fields become managed once their value has been changed by Rancher. -Permissions required for Rancher to create VPC and associated resources. +UpstreamSpec represents the cluster as it is in EKS and is refreshed on an interval of 5 minutes. After the UpstreamSpec has been refreshed, Rancher checks if the EKS cluster has an update in progress. If it is updating, nothing further is done. If it is not currently updating, any “managed” fields on EKSConfig are overwritten with their corresponding value from the recently updated UpstreamSpec. -```json -{ - "Sid": "VPCPermissions", - "Effect": "Allow", - "Action": [ - "ec2:ReplaceRoute", - "ec2:ModifyVpcAttribute", - "ec2:ModifySubnetAttribute", - "ec2:DisassociateRouteTable", - "ec2:DetachInternetGateway", - "ec2:DescribeVpcs", - "ec2:DeleteVpc", - "ec2:DeleteTags", - "ec2:DeleteSubnet", - "ec2:DeleteRouteTable", - "ec2:DeleteRoute", - "ec2:DeleteInternetGateway", - "ec2:CreateVpc", - "ec2:CreateSubnet", - "ec2:CreateSecurityGroup", - "ec2:CreateRouteTable", - "ec2:CreateRoute", - "ec2:CreateInternetGateway", - "ec2:AttachInternetGateway", - "ec2:AssociateRouteTable" - ], - "Resource": "*" -} -``` +The effective desired state can be thought of as the UpstreamSpec + all non-nil fields in the EKSConfig. This is what is displayed in the UI. + +If Rancher and another source attempt to update an EKS cluster at the same time, or within the 5-minute refresh window after an update finishes, any “managed” fields are likely to be caught in a race condition. For example, a cluster may have PrivateAccess as a managed field. If PrivateAccess is false, is then enabled from the EKS console in an update that finishes at 11:01, and tags are then updated from Rancher before 11:05, the PrivateAccess value will likely be overwritten. This would also occur if the tags were updated while the cluster was processing the update. If the cluster was registered and the PrivateAccess field was nil, this issue should not occur in the aforementioned case. + +### Configuring the Refresh Interval + +It is possible to change the refresh interval through the `eks-refresh-cron` setting. This setting accepts values in the Cron format. The default is `*/5 * * * *`. 
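For example, to lengthen the interval to 15 minutes, the setting can be changed from the Rancher management (local) cluster. A minimal sketch, assuming `kubectl` access to that cluster and that the setting is exposed as a `settings.management.cattle.io` resource (it can also be changed through the Rancher API or UI):

```
kubectl patch settings.management.cattle.io eks-refresh-cron \
  --type merge -p '{"value": "*/15 * * * *"}'
```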
The shorter the refresh window, the less likely race conditions are to occur; however, a shorter window increases the likelihood of encountering request limits that may be in place for AWS APIs. diff --git a/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md index 8f59cbb3348..b5396fa7e2b 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md @@ -1,9 +1,9 @@ --- -title: Importing Existing Clusters into Rancher +title: Importing Existing Clusters description: Learn how you can create a cluster in Rancher by importing an existing Kubernetes cluster. Then, you can manage it using Rancher metaTitle: 'Kubernetes Cluster Management' metaDescription: 'Learn how you can import an existing Kubernetes cluster and then manage it using Rancher' -weight: 2300 +weight: 5 aliases: - /rancher/v2.x/en/tasks/clusters/import-cluster/ --- @@ -16,6 +16,9 @@ For all imported Kubernetes clusters except for K3s clusters, the configuration Rancher v2.4 added the capability to import a K3s cluster into Rancher, as well as the ability to upgrade Kubernetes by editing the cluster in the Rancher UI. +> Rancher v2.5 added the ability to [register clusters.](#changes-in-rancher-v2-5) This page will be updated to reflect the new functionality. + +- [Changes in Rancher v2.5](#changes-in-rancher-v2-5) - [Features](#features) - [Prerequisites](#prerequisites) - [Importing a cluster](#importing-a-cluster) @@ -25,6 +28,14 @@ Rancher v2.4 added the capability to import a K3s cluster into Rancher, as well - [Debug Logging and Troubleshooting for Imported K3s clusters](#debug-logging-and-troubleshooting-for-imported-k3s-clusters) - [Annotating imported clusters](#annotating-imported-clusters) +# Changes in Rancher v2.5 + +In Rancher v2.5, the cluster registration feature replaced the cluster import feature. Rancher has more capabilities to manage registered clusters compared to imported clusters, and registering a cluster allows Rancher to treat it as though it were created in Rancher. + +Amazon EKS clusters can now be registered in Rancher. For the most part, registered EKS clusters and EKS clusters created in Rancher are treated the same way in the Rancher UI, except for deletion. + +When you delete an EKS cluster that was created in Rancher, the cluster is destroyed. When you delete an EKS cluster that was registered in Rancher, it is disconnected from the Rancher server, but it still exists and you can still access it in the same way you did before it was registered in Rancher. + # Features After importing a cluster, the cluster owner can: diff --git a/content/rancher/v2.x/en/cluster-provisioning/node-requirements/_index.md b/content/rancher/v2.x/en/cluster-provisioning/node-requirements/_index.md index 403ecdcc208..5521e62800c 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/node-requirements/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/node-requirements/_index.md @@ -28,7 +28,7 @@ If you plan to use ARM64, see [Running on ARM64 (Experimental).]({{}}/r For information on how to install Docker, refer to the official [Docker documentation.](https://docs.docker.com/) -Some distributions of Linux derived from RHEL, including Oracle Linux, may have default firewall rules that block communication with Helm. 
This [how-to guide]({{}}/rancher/v2.x/en/installation/options/firewall) shows how to check the default firewall rules and how to open the ports with `firewalld` if necessary. +Some distributions of Linux derived from RHEL, including Oracle Linux, may have default firewall rules that block communication with Helm. We recommend disabling firewalld. For Kubernetes 1.19, firewalld must be turned off. SUSE Linux may have a firewall that blocks all ports by default. In that situation, follow [these steps](#opening-suse-linux-ports) to open the ports needed for adding a host to a custom cluster. diff --git a/content/rancher/v2.x/en/cluster-provisioning/production/_index.md b/content/rancher/v2.x/en/cluster-provisioning/production/_index.md index d5ab40db6fd..2aea2df7329 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/production/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/production/_index.md @@ -1,6 +1,6 @@ --- title: Checklist for Production-Ready Clusters -weight: 2005 +weight: 2 --- In this section, we recommend best practices for creating the production-ready Kubernetes clusters that will run your apps and services. diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/_index.md index 111b0a58faa..ce7512f5ba7 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/_index.md @@ -1,6 +1,6 @@ --- title: Launching Kubernetes with Rancher -weight: 2200 +weight: 4 --- You can have Rancher launch a Kubernetes cluster using any nodes you want. When Rancher deploys Kubernetes onto these nodes, it uses [Rancher Kubernetes Engine]({{}}/rke/latest/en/) (RKE), which is Rancher's own lightweight Kubernetes installer. It can launch Kubernetes on any computers, including: diff --git a/content/rancher/v2.x/en/contributing/_index.md b/content/rancher/v2.x/en/contributing/_index.md index 1cbf8bd694e..ab70037540f 100644 --- a/content/rancher/v2.x/en/contributing/_index.md +++ b/content/rancher/v2.x/en/contributing/_index.md @@ -1,6 +1,6 @@ --- title: Contributing to Rancher -weight: 9000 +weight: 27 aliases: - /rancher/v2.x/en/faq/contributing/ --- diff --git a/content/rancher/v2.x/en/deploy-across-clusters/_index.md b/content/rancher/v2.x/en/deploy-across-clusters/_index.md new file mode 100644 index 00000000000..4b7ce989eb0 --- /dev/null +++ b/content/rancher/v2.x/en/deploy-across-clusters/_index.md @@ -0,0 +1,16 @@ +--- +title: Deploying Applications across Clusters +weight: 13 +--- + +Rancher v2.5 introduced Fleet, a new way to deploy applications across clusters. + +### Fleet + +_Available in v2.5_ + +Fleet is GitOps at scale. For more information, refer to the [Fleet section.](./fleet) + +### Legacy UI Documentation for Multi-cluster Apps + +In Rancher prior to v2.5, the multi-cluster apps feature was used to deploy applications across clusters. Refer to the documentation [here.](./multi-cluster-apps) \ No newline at end of file diff --git a/content/rancher/v2.x/en/deploy-across-clusters/fleet/_index.md b/content/rancher/v2.x/en/deploy-across-clusters/fleet/_index.md new file mode 100644 index 00000000000..762f2f0bc26 --- /dev/null +++ b/content/rancher/v2.x/en/deploy-across-clusters/fleet/_index.md @@ -0,0 +1,28 @@ +--- +title: Fleet - GitOps at Scale +shortTitle: Fleet +weight: 1 +--- + +_Available as of Rancher v2.5_ + +Fleet is GitOps at scale. Fleet is designed to manage up to a million clusters. 
It's also lightweight enough that it works great for a [single cluster](https://fleet.rancher.io/single-cluster-install/) too, but it really shines when you get to a [large scale.](https://fleet.rancher.io/multi-cluster-install/) By large scale we mean either a lot of clusters, a lot of deployments, or a lot of teams in a single organization. + +Fleet is a separate project from Rancher, and can be installed on any Kubernetes cluster with Helm. + +![Architecture]({{}}/img/rancher/fleet-architecture.png) + +Fleet can manage deployments from Git of raw Kubernetes YAML, Helm charts, Kustomize, or any combination of the three (a minimal `GitRepo` sketch appears after this section). Regardless of the source, all resources are dynamically turned into Helm charts, and Helm is used as the engine to +deploy everything in the cluster. This gives a high degree of control, consistency, and auditability. Fleet focuses not only on the ability to scale, but also on giving you a high degree of control and visibility into exactly what is installed on the cluster. + +### Accessing Fleet in the Rancher UI + +Fleet comes preinstalled in Rancher v2.5. To access it, go to the **Cluster Explorer** in the Rancher UI. In the top left dropdown menu, click **Cluster Explorer > Fleet.** On this page, you can edit Kubernetes resources and cluster groups managed by Fleet. + +### GitHub Repository + +The Fleet Helm charts are available [here.](https://github.com/rancher/fleet/releases/latest) + +### Documentation + +The Fleet documentation is at [https://fleet.rancher.io/.](https://fleet.rancher.io/) \ No newline at end of file diff --git a/content/rancher/v2.x/en/catalog/multi-cluster-apps/_index.md b/content/rancher/v2.x/en/deploy-across-clusters/multi-cluster-apps/_index.md similarity index 99% rename from content/rancher/v2.x/en/catalog/multi-cluster-apps/_index.md rename to content/rancher/v2.x/en/deploy-across-clusters/multi-cluster-apps/_index.md index 37fe0c6304b..1199650261a 100644 --- a/content/rancher/v2.x/en/catalog/multi-cluster-apps/_index.md +++ b/content/rancher/v2.x/en/deploy-across-clusters/multi-cluster-apps/_index.md @@ -1,7 +1,9 @@ --- -title: Multi-Cluster Apps -weight: 600 +title: Legacy Multi-Cluster App Documentation +shortTitle: Legacy +weight: 2 --- + _Available as of v2.2.0_ Typically, most applications are deployed on a single Kubernetes cluster, but there will be times you might want to deploy multiple copies of the same application across different clusters and/or projects. In Rancher, a _multi-cluster application_, is an application deployed using a Helm chart across multiple clusters. With the ability to deploy the same application across multiple clusters, it avoids the repetition of the same action on each cluster, which could introduce user error during application configuration. With multi-cluster applications, you can customize to have the same configuration across all projects/clusters as well as have the ability to change the configuration based on your target project. Since multi-cluster application is considered a single application, it's easy to manage and maintain this application. 
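To illustrate the Fleet section above: deployments are declared with a `GitRepo` custom resource that points Fleet at a Git repository. A minimal sketch, modeled on the examples in the Fleet documentation; the repository URL and path are placeholders:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample
  # In a multi-cluster setup, GitRepos are typically created in the
  # fleet-default namespace on the Fleet manager cluster.
  namespace: fleet-default
spec:
  # Placeholder repository; point this at your own Git repo.
  repo: https://github.com/rancher/fleet-examples
  # Paths within the repo containing raw YAML, Helm charts, or Kustomize.
  paths:
    - simple
```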
diff --git a/content/rancher/v2.x/en/faq/_index.md b/content/rancher/v2.x/en/faq/_index.md index 60f260984b5..de15dd4cb78 100644 --- a/content/rancher/v2.x/en/faq/_index.md +++ b/content/rancher/v2.x/en/faq/_index.md @@ -1,6 +1,6 @@ --- title: FAQ -weight: 8000 +weight: 25 aliases: - /rancher/v2.x/en/about/ --- diff --git a/content/rancher/v2.x/en/helm-charts/_index.md b/content/rancher/v2.x/en/helm-charts/_index.md new file mode 100644 index 00000000000..b2c92e8c114 --- /dev/null +++ b/content/rancher/v2.x/en/helm-charts/_index.md @@ -0,0 +1,12 @@ +--- +title: Helm Charts in Rancher +weight: 12 +--- + +### Apps and Marketplace + +In Rancher v2.5, the [apps and marketplace feature](./apps-marketplace) is used to manage Helm charts, replacing the catalog system. + +### Catalogs + +In Rancher prior to v2.5, the [catalog system](./legacy-catalogs) was used to manage Helm charts. \ No newline at end of file diff --git a/content/rancher/v2.x/en/helm-charts/apps-marketplace/_index.md b/content/rancher/v2.x/en/helm-charts/apps-marketplace/_index.md new file mode 100644 index 00000000000..007941b3f18 --- /dev/null +++ b/content/rancher/v2.x/en/helm-charts/apps-marketplace/_index.md @@ -0,0 +1,44 @@ +--- +title: Apps and Marketplace +weight: 1 +--- + +_Available as of v2.5_ + +In this section, you'll learn how to manage Helm chart repositories and applications in Rancher. + +In the Cluster Manager, Rancher uses a catalog system to import bundles of charts and then uses those charts to deploy either custom Helm applications or Rancher's tools, such as Monitoring or Istio. Now, in the Cluster Explorer, Rancher uses a similar but simplified version of the same system. Repositories can be added in the same way that catalogs were, but are specific to the current cluster. Rancher tools come as pre-loaded repositories which deploy as standalone Helm charts. + +### Charts + +From the top-left menu, select _"Apps & Marketplace"_ and you will be taken to the Charts page. + +The charts page contains all Rancher, Partner, and Custom Charts. + +* Rancher tools such as Logging or Monitoring are included under the Rancher label +* Partner charts reside under the Partners label +* Custom charts will show up under the name of the repository + +All three types are deployed and managed in the same way. + +### Repositories + +From the left sidebar, select _"Repositories"_. + +These items represent Helm repositories, and can be either traditional Helm endpoints that have an index.yaml, or Git repositories that will be cloned and can point to a specific branch. In order to use custom charts, simply add your repository here and the charts will become available in the Charts tab under the name of the repository. + + +### Helm compatibility + +The Cluster Explorer only supports Helm 3 compatible charts. + + +### Deployment and Upgrades + +From the _"Charts"_ tab, select a Chart to install. Rancher and Partner charts may have extra configurations available through custom pages or questions.yaml files, but all chart installations can modify the values.yaml and other basic settings. Once you click install, a Helm operation job is deployed, and the console for the job is displayed. + +To view all recent changes, go to the _"Recent Operations"_ tab. From there you can view the call that was made, conditions, events, and logs. + +After installing a chart, you can find it in the _"Installed Apps"_ tab. In this section you can upgrade or delete the installation, and see further details. 
When choosing to upgrade, the form and values presented will be the same as during installation. + +Most Rancher tools have additional pages located in the toolbar below the _"Apps & Marketplace"_ section to help manage and use the features. These pages include links to dashboards, forms to easily add Custom Resources, and additional information. diff --git a/content/rancher/v2.x/en/catalog/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/_index.md similarity index 98% rename from content/rancher/v2.x/en/catalog/_index.md rename to content/rancher/v2.x/en/helm-charts/legacy-catalogs/_index.md index b6e76fa7b0d..fa9eefa88d6 100644 --- a/content/rancher/v2.x/en/catalog/_index.md +++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/_index.md @@ -1,11 +1,13 @@ --- -title: Catalogs, Helm Charts and Apps +title: Legacy Catalog Documentation +shortTitle: Legacy description: Rancher enables the use of catalogs to repeatedly deploy applications easily. Catalogs are GitHub or Helm Chart repositories filled with deployment-ready apps. -weight: 4000 +weight: 1 aliases: - /rancher/v2.x/en/concepts/global-configuration/catalog/ - /rancher/v2.x/en/concepts/catalogs/ - /rancher/v2.x/en/tasks/global-configuration/catalog/ + - /rancher/v2.x/en/catalog --- Rancher provides the ability to use a catalog of Helm charts that make it easy to repeatedly deploy applications. diff --git a/content/rancher/v2.x/en/catalog/adding-catalogs/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/adding-catalogs/_index.md similarity index 99% rename from content/rancher/v2.x/en/catalog/adding-catalogs/_index.md rename to content/rancher/v2.x/en/helm-charts/legacy-catalogs/adding-catalogs/_index.md index d8540b3cf42..9e550715817 100644 --- a/content/rancher/v2.x/en/catalog/adding-catalogs/_index.md +++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/adding-catalogs/_index.md @@ -4,6 +4,7 @@ weight: 200 aliases: - /rancher/v2.x/en/tasks/global-configuration/catalog/adding-custom-catalogs/ - /rancher/v2.x/en/catalog/custom/adding + - /rancher/v2.x/en/catalog/adding-catalogs --- Custom catalogs can be added into Rancher at a global scope, cluster scope, or project scope. diff --git a/content/rancher/v2.x/en/catalog/built-in/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/built-in/_index.md similarity index 98% rename from content/rancher/v2.x/en/catalog/built-in/_index.md rename to content/rancher/v2.x/en/helm-charts/legacy-catalogs/built-in/_index.md index 5b86667717b..e3b78b13b2a 100644 --- a/content/rancher/v2.x/en/catalog/built-in/_index.md +++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/built-in/_index.md @@ -3,6 +3,7 @@ title: Enabling and Disabling Built-in Global Catalogs weight: 100 aliases: - /rancher/v2.x/en/tasks/global-configuration/catalog/enabling-default-catalogs/ + - /rancher/v2.x/en/catalog/built-in --- There are default global catalogs packaged as part of Rancher. 
diff --git a/content/rancher/v2.x/en/catalog/catalog-config/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/catalog-config/_index.md similarity index 98% rename from content/rancher/v2.x/en/catalog/catalog-config/_index.md rename to content/rancher/v2.x/en/helm-charts/legacy-catalogs/catalog-config/_index.md index 229f65e1c97..15dca5019eb 100644 --- a/content/rancher/v2.x/en/catalog/catalog-config/_index.md +++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/catalog-config/_index.md @@ -3,6 +3,7 @@ title: Custom Catalog Configuration Reference weight: 300 aliases: - /rancher/v2.x/en/catalog/catalog-config + - /rancher/v2.x/en/catalog/catalog-config --- Any user can create custom catalogs to add into Rancher. Besides the content of the catalog, users must ensure their catalogs are able to be added into Rancher. diff --git a/content/rancher/v2.x/en/catalog/creating-apps/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/creating-apps/_index.md similarity index 99% rename from content/rancher/v2.x/en/catalog/creating-apps/_index.md rename to content/rancher/v2.x/en/helm-charts/legacy-catalogs/creating-apps/_index.md index d59893cd9de..ff72fb6bb19 100644 --- a/content/rancher/v2.x/en/catalog/creating-apps/_index.md +++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/creating-apps/_index.md @@ -4,6 +4,7 @@ weight: 400 aliases: - /rancher/v2.x/en/tasks/global-configuration/catalog/customizing-charts/ - /rancher/v2.x/en/catalog/custom/creating + - /rancher/v2.x/en/catalog/creating-apps --- Rancher's catalog service requires any custom catalogs to be structured in a specific format for the catalog service to be able to leverage it in Rancher. diff --git a/content/rancher/v2.x/en/catalog/globaldns/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/globaldns/_index.md similarity index 99% rename from content/rancher/v2.x/en/catalog/globaldns/_index.md rename to content/rancher/v2.x/en/helm-charts/legacy-catalogs/globaldns/_index.md index 7be91731f1c..38ab916d26f 100644 --- a/content/rancher/v2.x/en/catalog/globaldns/_index.md +++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/globaldns/_index.md @@ -1,6 +1,8 @@ --- title: Global DNS weight: 5010 +aliases: + - /rancher/v2.x/en/catalog/globaldns --- _Available as of v2.2.0_ diff --git a/content/rancher/v2.x/en/catalog/launching-apps/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/launching-apps/_index.md similarity index 100% rename from content/rancher/v2.x/en/catalog/launching-apps/_index.md rename to content/rancher/v2.x/en/helm-charts/legacy-catalogs/launching-apps/_index.md diff --git a/content/rancher/v2.x/en/catalog/managing-apps/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/managing-apps/_index.md similarity index 98% rename from content/rancher/v2.x/en/catalog/managing-apps/_index.md rename to content/rancher/v2.x/en/helm-charts/legacy-catalogs/managing-apps/_index.md index 1351c90b3bc..465f3cd95ce 100644 --- a/content/rancher/v2.x/en/catalog/managing-apps/_index.md +++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/managing-apps/_index.md @@ -1,6 +1,8 @@ --- title: Managing Catalog Apps weight: 500 +aliases: + - /rancher/v2.x/en/catalog/managing-apps --- After deploying an application, one of the benefits of using an application versus individual workloads/resources is the ease of being able to manage many workloads/resources applications. Apps can be cloned, upgraded or rolled back. 
diff --git a/content/rancher/v2.x/en/helm-charts/legacy-catalogs/multi-cluster-apps/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/multi-cluster-apps/_index.md new file mode 100644 index 00000000000..91cb1f44f2c --- /dev/null +++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/multi-cluster-apps/_index.md @@ -0,0 +1,9 @@ +--- +title: Multi-Cluster Apps +weight: 600 +aliases: + - /rancher/v2.x/en/catalog/multi-cluster-apps +--- +_Available as of v2.2.0_ + +The documentation about multi-cluster apps has moved [here.]({{}}/rancher/v2.x/en/deploy-across-clusters/multi-cluster-apps) diff --git a/content/rancher/v2.x/en/catalog/tutorial/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/tutorial/_index.md similarity index 98% rename from content/rancher/v2.x/en/catalog/tutorial/_index.md rename to content/rancher/v2.x/en/helm-charts/legacy-catalogs/tutorial/_index.md index 141cc544680..4b861e42f57 100644 --- a/content/rancher/v2.x/en/catalog/tutorial/_index.md +++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/tutorial/_index.md @@ -1,6 +1,8 @@ --- title: "Tutorial: Example Custom Chart Creation" weight: 800 +aliases: + - /rancher/v2.x/en/catalog/tutorial --- In this tutorial, you'll learn how to create a Helm chart and deploy it to a repository. The repository can then be used as a source for a custom catalog in Rancher. diff --git a/content/rancher/v2.x/en/installation/_index.md b/content/rancher/v2.x/en/installation/_index.md index 21044984828..1435a639af5 100644 --- a/content/rancher/v2.x/en/installation/_index.md +++ b/content/rancher/v2.x/en/installation/_index.md @@ -1,7 +1,7 @@ --- -title: Installing Rancher +title: Installing/Upgrading Rancher description: Learn how to install Rancher in development and production environments. Read about single node and high availability installation -weight: 50 +weight: 3 aliases: - /rancher/v2.x/en/installation/how-ha-works/ --- @@ -16,14 +16,21 @@ In this section, - **RKE (Rancher Kubernetes Engine)** is a certified Kubernetes distribution and CLI/library which creates and manages a Kubernetes cluster. - **K3s (Lightweight Kubernetes)** is also a fully compliant Kubernetes distribution. It is newer than RKE, easier to use, and more lightweight, with a binary size of less than 100 MB. As of Rancher v2.4, Rancher can be installed on a K3s cluster. +### Changes to Installation in Rancher v2.5 + +In Rancher v2.5, the Rancher management server can be installed on any Kubernetes cluster, including hosted clusters, such as Amazon EKS clusters. + +For Docker installations, a local Kubernetes cluster is installed in the single Docker container, and Rancher is installed on the local cluster. + +The `restrictedAdmin` Helm chart option was added. When this option is set to true, the initial Rancher user has restricted access to the local Kubernetes cluster to prevent privilege escalation. For more information, see the section about the [restricted-admin role.]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#restricted-admin) + ### Overview of Installation Options Rancher can be installed on these main architectures: - **High-availability Kubernetes Install:** We recommend using [Helm,]({{}}/rancher/v2.x/en/overview/concepts/#about-helm) a Kubernetes package manager, to install Rancher on multiple nodes on a dedicated Kubernetes cluster. For RKE clusters, three nodes are required to achieve a high-availability cluster. For K3s clusters, only two nodes are required. 
- **Single-node Kubernetes Install:** Another option is to install Rancher with Helm on a Kubernetes cluster, but to only use a single node in the cluster. In this case, the Rancher server doesn't have high availability, which is important for running Rancher in production. However, this option is useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path. In the future, you can add nodes to the cluster to get a high-availability Rancher server. -- **Docker Install:** For test and demonstration purposes, Rancher can be installed with Docker on a single node. This installation works out-of-the-box, but there is no migration path from a Docker installation to a high-availability installation on a Kubernetes cluster. Therefore, you may want to use a Kubernetes installation from the start. - +- **Docker Install:** For test and demonstration purposes, Rancher can be installed with Docker on a single node. This installation works out-of-the-box, but there is no migration path from a Docker installation to a high-availability installation. Therefore, you may want to use a Kubernetes installation from the start. There are also separate instructions for installing Rancher in an air gap environment or behind an HTTP proxy: @@ -35,9 +42,15 @@ There are also separate instructions for installing Rancher in an air gap enviro We recommend installing Rancher on a Kubernetes cluster, because in a multi-node cluster, the Rancher management server becomes highly available. This high-availability configuration helps maintain consistent access to the downstream Kubernetes clusters that Rancher will manage. -For that reason, we recommend that for a production-grade architecture, you should set up a high-availability Kubernetes cluster using either RKE or K3s, then install Rancher on it. After Rancher is installed, you can use Rancher to deploy and manage Kubernetes clusters. +For that reason, we recommend that, for a production-grade architecture, you set up a high-availability Kubernetes cluster, then install Rancher on it. After Rancher is installed, you can use Rancher to deploy and manage Kubernetes clusters. -For testing or demonstration purposes, you can install Rancher in single Docker container. In this Docker install, you can use Rancher to set up Kubernetes clusters out-of-the-box. +> The type of cluster that Rancher needs to be installed on depends on the Rancher version. +> +> For Rancher v2.5, any Kubernetes cluster can be used. +> For Rancher v2.4.x, either an RKE Kubernetes cluster or K3s Kubernetes cluster can be used. +> For Rancher prior to v2.4, an RKE cluster must be used. + +For testing or demonstration purposes, you can install Rancher in a single Docker container. In this Docker install, you can use Rancher to set up Kubernetes clusters out-of-the-box. The Docker install allows you to explore the Rancher server functionality, but it is intended to be used for development and testing purposes only. Our [instructions for installing Rancher on Kubernetes]({{}}/rancher/v2.x/en/installation/k8s-install) describe how to first use K3s or RKE to create and manage a Kubernetes cluster, then install Rancher onto that cluster. 
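To make the Kubernetes install path concrete, the commands below sketch a typical Helm-based Rancher installation; `rancher.my.org` is a placeholder hostname, and the TLS/cert-manager options are omitted for brevity:

```
# Add the Rancher chart repository (the "latest" channel shown here is one of
# the documented release channels).
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest

# Rancher is installed into the cattle-system namespace.
kubectl create namespace cattle-system

# Install the chart; "rancher.my.org" is a placeholder for the DNS name that
# points at your load balancer.
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org
```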
diff --git a/content/rancher/v2.x/en/installation/k8s-install/helm-rancher/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md similarity index 94% rename from content/rancher/v2.x/en/installation/k8s-install/helm-rancher/_index.md rename to content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md index 74aa80b7789..0e789a64740 100644 --- a/content/rancher/v2.x/en/installation/k8s-install/helm-rancher/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md @@ -1,11 +1,18 @@ --- -title: 3. Install Rancher on the Kubernetes Cluster -description: Rancher installation is managed using the Helm Kubernetes package manager. Use Helm to install the prerequisites and charts to install Rancher -weight: 200 -aliases: - - /rancher/v2.x/en/installation/ha/helm-rancher +title: Install Rancher on a Kubernetes Cluster +description: Learn how to install Rancher in development and production environments. Read about single node and high availability installation +weight: 3 --- +> **Prerequisite:** +> Set up the Rancher server's local Kubernetes cluster. +> +> - As of Rancher v2.5, Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS. +> - In Rancher v2.4.x, Rancher needs to be installed on a K3s Kubernetes cluster or an RKE Kubernetes cluster. +> - In Rancher prior to v2.4, Rancher needs to be installed on an RKE Kubernetes cluster. + +# Install the Rancher Helm Chart + Rancher is installed using the Helm package manager for Kubernetes. Helm charts provide templating syntax for Kubernetes YAML manifest documents. With Helm, we can create configurable deployments instead of just using static files. For more information about creating your own catalog of deployments, check out the docs at https://helm.sh/. @@ -263,3 +270,8 @@ That's it. You should have a functional Rancher server. In a web browser, go to the DNS name that forwards traffic to your load balancer. Then you should be greeted by the colorful login page. Doesn't work? Take a look at the [Troubleshooting]({{}}/rancher/v2.x/en/installation/options/troubleshooting/) Page + + +### Optional Next Steps + +Enable the Enterprise Cluster Manager. \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/k8s-install/_index.md b/content/rancher/v2.x/en/installation/k8s-install/_index.md deleted file mode 100644 index 4a51dbf90b3..00000000000 --- a/content/rancher/v2.x/en/installation/k8s-install/_index.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -title: Installing Rancher on a Kubernetes Cluster -weight: 3 -description: For production environments, install Rancher in a high-availability configuration. Read the guide for setting up a 3-node cluster and still install Rancher using a Helm chart. -aliases: - - /rancher/v2.x/en/installation/ha/ ---- - -For production environments, we recommend installing Rancher in a high-availability configuration so that your user base can always access Rancher Server. When installed in a Kubernetes cluster, Rancher will integrate with the cluster's etcd database and take advantage of Kubernetes scheduling for high-availability. - -This section describes how to create and manage a Kubernetes cluster, then install Rancher onto that cluster. 
For this type of architecture, you will need to deploy nodes - typically virtual machines - in the infrastructure provider of your choice. You will also need to configure a load balancer to direct front-end traffic to the three VMs. When the VMs are running and fulfill the [node requirements,]({{}}/rancher/v2.x/en/installation/requirements) you can use RKE or K3s to deploy Kubernetes onto them, then use the Helm package manager to deploy Rancher onto Kubernetes. - -### Optional: Installing Rancher on a Single-node Kubernetes Cluster - -If you only have one node, but you want to use the Rancher server in production in the future, it is better to install Rancher on a single-node Kubernetes cluster than to install it with Docker. - -One option is to install Rancher with Helm on a Kubernetes cluster, but to only use a single node in the cluster. In this case, the Rancher server does not have high availability, which is important for running Rancher in production. However, this option is useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path. In the future, you can add nodes to the cluster to get a high-availability Rancher server. - -To set up a single-node RKE cluster, configure only one node in the `cluster.yml` . The single node should have all three roles: `etcd`, `controlplane`, and `worker`. - -To set up a single-node K3s cluster, run the Rancher server installation command on just one node instead of two nodes. - -In both single-node Kubernetes setups, Rancher can be installed with Helm on the Kubernetes cluster in the same way that it would be installed on any other cluster. - -### Important Notes on Architecture - -The Rancher management server can only be run on Kubernetes cluster in an infrastructure provider where Kubernetes is installed using K3s or RKE. Use of Rancher on hosted Kubernetes providers, such as EKS, is not supported. - -For the best performance and security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads. - -For information on how Rancher works, regardless of the installation method, refer to the [architecture section.]({{}}/rancher/v2.x/en/overview/architecture) - -## Installation Outline - -- [Set up Infrastructure]({{}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/) -- [Set up a Kubernetes Cluster]({{}}/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/) -- [Install Rancher]({{}}/rancher/v2.x/en/installation/k8s-install/helm-rancher/) - -## Additional Install Options - -- [Migrating from a high-availability Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) -- [Installing Rancher with Helm 2:]({{}}/rancher/v2.x/en/installation/options/helm2) This section provides a copy of the older high-availability Rancher installation instructions that used Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible. - -## Previous Methods - -[RKE add-on install]({{}}/rancher/v2.x/en/installation/options/rke-add-on/) - -> **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> -> Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. 
For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). -> -> If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. diff --git a/content/rancher/v2.x/en/installation/options/air-gap-helm2/_index.md b/content/rancher/v2.x/en/installation/options/air-gap-helm2/_index.md deleted file mode 100644 index 1212425742a..00000000000 --- a/content/rancher/v2.x/en/installation/options/air-gap-helm2/_index.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: Installing Rancher in an Air Gapped Environment with Helm 2 -weight: 2 -aliases: - - /rancher/v2.x/en/installation/air-gap-installation/ - - /rancher/v2.x/en/installation/air-gap-high-availability/ - - /rancher/v2.x/en/installation/air-gap-single-node/ ---- - -> After Helm 3 was released, the Rancher installation instructions were updated to use Helm 3. -> -> If you are using Helm 2, we recommend [migrating to Helm 3](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) because it is simpler to use and more secure than Helm 2. -> -> This section provides a copy of the older instructions for installing Rancher on a Kubernetes cluster using Helm 2 in an air air gap environment, and it is intended to be used if upgrading to Helm 3 is not feasible. - -This section is about installations of Rancher server in an air gapped environment. An air gapped environment could be where Rancher server will be installed offline, behind a firewall, or behind a proxy. - -Throughout the installations instructions, there will be _tabs_ for either a high availability Kubernetes installation or a single-node Docker installation. - -### Air Gapped Kubernetes Installations - -This section covers how to install Rancher on a Kubernetes cluster in an air gapped environment. - -A Kubernetes installation is comprised of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails. - -### Air Gapped Docker Installations - -These instructions also cover how to install Rancher on a single node in an air gapped environment. - -The Docker installation is for Rancher users that are wanting to test out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. - -> **Important:** If you install Rancher following the Docker installation guide, there is no upgrade path to transition your Docker Installation to a Kubernetes Installation. - -Instead of running the Docker installation, you have the option to follow the Kubernetes Install guide, but only use one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it a Kubernetes Installation. - -# Installation Outline - -- [1. Prepare your Node(s)]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/) -- [2. Collect and Publish Images to your Private Registry]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/) -- [3. 
Launch a Kubernetes Cluster with RKE]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/) -- [4. Install Rancher]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/) - -### [Next: Prepare your Node(s)]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/) diff --git a/content/rancher/v2.x/en/installation/options/air-gap-helm2/install-rancher/_index.md b/content/rancher/v2.x/en/installation/options/air-gap-helm2/install-rancher/_index.md deleted file mode 100644 index c1798f6ac09..00000000000 --- a/content/rancher/v2.x/en/installation/options/air-gap-helm2/install-rancher/_index.md +++ /dev/null @@ -1,333 +0,0 @@ ---- -title: 4. Install Rancher -weight: 400 -aliases: - - /rancher/v2.x/en/installation/air-gap-installation/install-rancher/ - - /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-system-charts/ - - /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-for-private-reg/ - - /rancher/v2.x/en/installation/air-gap-single-node/install-rancher - - /rancher/v2.x/en/installation/air-gap/install-rancher ---- - -This section is about how to deploy Rancher for your air gapped environment. An air gapped environment could be where Rancher server will be installed offline, behind a firewall, or behind a proxy. There are _tabs_ for either a high availability (recommended) or a Docker installation. - -{{% tabs %}} -{{% tab "Kubernetes Install (Recommended)" %}} - -Rancher recommends installing Rancher on a Kubernetes cluster. A highly available Kubernetes Installation is comprised of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails. - -This section describes installing Rancher in five parts: - -- [A. Add the Helm Chart Repository](#a-add-the-helm-chart-repository) -- [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration) -- [C. Render the Rancher Helm Template](#c-render-the-rancher-helm-template) -- [D. Install Rancher](#d-install-rancher) -- [E. For Rancher versions prior to v2.3.0, Configure System Charts](#e-for-rancher-versions-prior-to-v2-3-0-configure-system-charts) - -### A. Add the Helm Chart Repository - -From a system that has access to the internet, fetch the latest Helm chart and copy the resulting manifests to a system that has access to the Rancher server cluster. - -1. If you haven't already, initialize `helm` locally on a workstation that has internet access. Note: Refer to the [Helm version requirements]({{}}/rancher/v2.x/en/installation/options/helm-version) to choose a version of Helm to install Rancher. - ```plain - helm init -c - ``` - -2. Use `helm repo add` command to add the Helm chart repository that contains charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Version of Rancher]({{}}/rancher/v2.x/en/installation/options/server-tags/#helm-chart-repositories). - {{< release-channel >}} - ``` - helm repo add rancher- https://releases.rancher.com/server-charts/ - ``` - -3. Fetch the latest Rancher chart. This will pull down the chart and save it in the current directory as a `.tgz` file. -```plain -helm fetch rancher-/rancher -``` - -> Want additional options? Need help troubleshooting? 
See [Kubernetes Install: Advanced Options]({{}}/rancher/v2.x/en/installation/k8s-install/helm-rancher/#advanced-configurations). - -### B. Choose your SSL Configuration - -Rancher Server is designed to be secure by default and requires SSL/TLS configuration. - -When Rancher is installed on an air gapped Kubernetes cluster, there are two recommended options for the source of the certificate. - -> **Note:** If you want terminate SSL/TLS externally, see [TLS termination on an External Load Balancer]({{}}/rancher/v2.x/en/installation/options/chart-options/#external-tls-termination). - -| Configuration | Chart option | Description | Requires cert-manager | -| ------------------------------------------ | ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------- | -| Rancher Generated Self-Signed Certificates | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self signed)
| Rancher Generated Self-Signed Certificates | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self-signed).<br/>This is the **default** and does not need to be added when rendering the Helm template. | yes |
| Certificates from Files | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s).<br/>This option must be passed when rendering the Rancher Helm template. | no |

### C. Render the Rancher Helm Template

When setting up the Rancher Helm template, there are several options in the Helm chart that are designed specifically for air gap installations.

| Chart Option | Chart Value | Description |
| ------------ | ----------- | ----------- |
| `certmanager.version` | "" | Configure the proper Rancher TLS issuer depending on the running cert-manager version. |
| `systemDefaultRegistry` | `` | Configure Rancher server to always pull from your private registry when provisioning clusters. |
| `useBundledSystemChart` | `true` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located in GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. _Available as of v2.3.0_ |

Based on the choice you made in [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration), complete one of the procedures below.

{{% accordion id="self-signed" label="Option A-Default Self-Signed Certificate" %}}

By default, Rancher generates a CA and uses cert-manager to issue the certificate for access to the Rancher server interface.

> **Note:**
> Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.11.0, please see our [upgrade cert-manager documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/).

1. From a system connected to the internet, add the cert-manager repo to Helm.
    ```plain
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    ```

1. Fetch the latest cert-manager chart available from the [Helm chart repository](https://hub.helm.sh/charts/jetstack/cert-manager).
    ```plain
    helm fetch jetstack/cert-manager --version v0.12.0
    ```

1. Render the cert-manager template with the options you would like to use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files.
    ```plain
    helm template ./cert-manager-v0.12.0.tgz --output-dir . \
    --name cert-manager --namespace cert-manager \
    --set image.repository=/quay.io/jetstack/cert-manager-controller \
    --set webhook.image.repository=/quay.io/jetstack/cert-manager-webhook \
    --set cainjector.image.repository=/quay.io/jetstack/cert-manager-cainjector
    ```

1. Download the required CRD file for cert-manager:
    ```plain
    curl -L -o cert-manager/cert-manager-crd.yaml https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml
    ```
1. Render the Rancher template, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.

    Placeholder | Description
    ------------|-------------
    `` | The version number of the output tarball.
    `` | The DNS name you pointed at your load balancer.
    `` | The DNS name for your private registry.
    `` | Cert-manager version running on k8s cluster.
    ```plain
    helm template ./rancher-.tgz --output-dir . \
    --name rancher \
    --namespace cattle-system \
    --set hostname= \
    --set certmanager.version= \
    --set rancherImage=/rancher/rancher \
    --set systemDefaultRegistry= \
    --set useBundledSystemChart=true
    ```
    The `systemDefaultRegistry` option (available as of v2.2.0) sets a default private registry to be used in Rancher; `useBundledSystemChart=true` (available as of v2.3.0) uses the packaged Rancher system charts.

{{% /accordion %}}

{{% accordion id="secret" label="Option B: Certificates From Files using Kubernetes Secrets" %}}

Create Kubernetes secrets from your own certificates for Rancher to use. The common name for the cert will need to match the `hostname` option in the command below, or the ingress controller will fail to provision the site for Rancher.

Render the Rancher template, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.

| Placeholder | Description |
| ----------- | ----------- |
| `` | The version number of the output tarball. |
| `` | The DNS name you pointed at your load balancer. |
| `` | The DNS name for your private registry. |

```plain
helm template ./rancher-.tgz --output-dir . \
--name rancher \
--namespace cattle-system \
--set hostname= \
--set rancherImage=/rancher/rancher \
--set ingress.tls.source=secret \
--set systemDefaultRegistry= \
--set useBundledSystemChart=true
```

As above, `systemDefaultRegistry` (available as of v2.2.0) sets a default private registry to be used in Rancher, and `useBundledSystemChart=true` (available as of v2.3.0) uses the packaged Rancher system charts.

If you are using a Private CA signed cert, add `--set privateCA=true` following `--set ingress.tls.source=secret`:

```plain
helm template ./rancher-.tgz --output-dir . \
--name rancher \
--namespace cattle-system \
--set hostname= \
--set rancherImage=/rancher/rancher \
--set ingress.tls.source=secret \
--set privateCA=true \
--set systemDefaultRegistry= \
--set useBundledSystemChart=true
```

Then refer to [Adding TLS Secrets]({{}}/rancher/v2.x/en/installation/options/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them.

{{% /accordion %}}

### D. Install Rancher

Copy the rendered manifest directories to a system that has access to the Rancher server cluster to complete installation.

Use `kubectl` to create namespaces and apply the rendered manifests.

If you chose to use self-signed certificates in [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration), install cert-manager.

{{% accordion id="install-cert-manager" label="Self-Signed Certificate Installs - Install Cert-manager" %}}

If you are using self-signed certificates, install cert-manager:

1. Create the namespace for cert-manager.
```plain
kubectl create namespace cert-manager
```

1. Create the cert-manager CustomResourceDefinitions (CRDs).
```plain
kubectl apply -f cert-manager/cert-manager-crd.yaml
```

> **Important:**
> If you are running Kubernetes v1.15 or below, you will need to add the `--validate=false` flag to your `kubectl apply` command above, or else you will receive a validation error relating to the `x-kubernetes-preserve-unknown-fields` field in cert-manager's CustomResourceDefinition resources. This is a benign error and occurs due to the way kubectl performs resource validation.

1. Launch cert-manager.
```plain
kubectl apply -R -f ./cert-manager
```

{{% /accordion %}}

Install Rancher:

```plain
kubectl create namespace cattle-system
kubectl -n cattle-system apply -R -f ./rancher
```

**Step Result:** If you are installing Rancher v2.3.0+, the installation is complete.

### E. For Rancher versions prior to v2.3.0, Configure System Charts

If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in GitHub, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{}}/rancher/v2.x/en/installation/options/local-system-charts/#setting-up-system-charts-for-rancher-prior-to-v2-3-0).

### Additional Resources

These resources could be helpful when installing Rancher:

- [Rancher Helm chart options]({{}}/rancher/v2.x/en/installation/options/chart-options/)
- [Adding TLS secrets]({{}}/rancher/v2.x/en/installation/options/tls-secrets/)
- [Troubleshooting Rancher Kubernetes Installations]({{}}/rancher/v2.x/en/installation/options/troubleshooting/)

{{% /tab %}}
{{% tab "Docker Install" %}}

The Docker installation is for Rancher users who want to **test** out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. **Important: If you install Rancher following the Docker installation guide, there is no upgrade path to transition your Docker installation to a Kubernetes Installation.** Instead of running the single node installation, you have the option to follow the Kubernetes Install guide, but only use one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it a Kubernetes Installation.

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.

| Environment Variable Key | Environment Variable Value | Description |
| ------------------------ | -------------------------- | ----------- |
| `CATTLE_SYSTEM_DEFAULT_REGISTRY` | `` | Configure Rancher server to always pull from your private registry when provisioning clusters. |
| `CATTLE_SYSTEM_CATALOG` | `bundled` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located in GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. _Available as of v2.3.0_ |

> **Do you want to...**
>
> - Configure a custom CA root certificate to access your services? See [Custom CA root certificate]({{}}/rancher/v2.x/en/installation/options/chart-options/#additional-trusted-cas).
> - Record all transactions with the Rancher API? See [API Auditing]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/#api-audit-log).

- For Rancher prior to v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher prior to v2.3.0.]({{}}/rancher/v2.x/en/installation/options/local-system-charts/#setting-up-system-charts-for-rancher-prior-to-v2-3-0)

Choose from the following options:

{{% accordion id="option-a" label="Option A-Default Self-Signed Certificate" %}}

If you are installing Rancher in a development or testing environment where identity verification isn't a concern, install Rancher using the self-signed certificate that it generates. This installation option omits the hassle of generating a certificate yourself.

Log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder.

| Placeholder | Description |
| ----------- | ----------- |
| `` | Your private registry URL and port. |
| `` | The release tag of the [Rancher version]({{}}/rancher/v2.x/en/installation/options/server-tags/) that you want to install. |

```
docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    -e CATTLE_SYSTEM_DEFAULT_REGISTRY= \
    -e CATTLE_SYSTEM_CATALOG=bundled \
    /rancher/rancher:
```

`CATTLE_SYSTEM_DEFAULT_REGISTRY` sets a default private registry to be used in Rancher; `CATTLE_SYSTEM_CATALOG=bundled` (available as of v2.3.0) uses the packaged Rancher system charts.

{{% /accordion %}}
{{% accordion id="option-b" label="Option B-Bring Your Own Certificate: Self-Signed" %}}

In development or testing environments where your team will access your Rancher server, create a self-signed certificate for use with your install so that your team can verify they're connecting to your instance of Rancher.

> **Prerequisites:**
> From a computer with an internet connection, create a self-signed certificate using [OpenSSL](https://www.openssl.org/) or another method of your choice.
>
> - The certificate files must be in [PEM format]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/#pem).
> - In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [SSL FAQ / Troubleshooting]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/#cert-order).

After creating your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Use the `-v` flag and provide the path to your certificates to mount them in your container.
| Placeholder | Description |
| ----------- | ----------- |
| `` | The path to the directory containing your certificate files. |
| `` | The path to your full certificate chain. |
| `` | The path to the private key for your certificate. |
| `` | The path to the certificate authority's certificate. |
| `` | Your private registry URL and port. |
| `` | The release tag of the [Rancher version]({{}}/rancher/v2.x/en/installation/options/server-tags/) that you want to install. |

```
docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    -v //:/etc/rancher/ssl/cert.pem \
    -v //:/etc/rancher/ssl/key.pem \
    -v //:/etc/rancher/ssl/cacerts.pem \
    -e CATTLE_SYSTEM_DEFAULT_REGISTRY= \
    -e CATTLE_SYSTEM_CATALOG=bundled \
    /rancher/rancher:
```

As in Option A, `CATTLE_SYSTEM_DEFAULT_REGISTRY` sets a default private registry to be used in Rancher, and `CATTLE_SYSTEM_CATALOG=bundled` (available as of v2.3.0) uses the packaged Rancher system charts.

{{% /accordion %}}
{{% accordion id="option-c" label="Option C-Bring Your Own Certificate: Signed by Recognized CA" %}}

In development or testing environments where you're exposing an app publicly, use a certificate signed by a recognized CA so that your user base doesn't encounter security warnings.

> **Prerequisite:** The certificate files must be in [PEM format]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/#pem).

After obtaining your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Because your certificate is signed by a recognized CA, mounting an additional CA certificate file is unnecessary.

| Placeholder | Description |
| ----------- | ----------- |
| `` | The path to the directory containing your certificate files. |
| `` | The path to your full certificate chain. |
| `` | The path to the private key for your certificate. |
| `` | Your private registry URL and port. |
| `` | The release tag of the [Rancher version]({{}}/rancher/v2.x/en/installation/options/server-tags/) that you want to install. |

> **Note:** Use the `--no-cacerts` argument to the container to disable the default CA certificate generated by Rancher.

```
docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    --no-cacerts \
    -v //:/etc/rancher/ssl/cert.pem \
    -v //:/etc/rancher/ssl/key.pem \
    -e CATTLE_SYSTEM_DEFAULT_REGISTRY= \
    -e CATTLE_SYSTEM_CATALOG=bundled \
    /rancher/rancher:
```

As above, `CATTLE_SYSTEM_DEFAULT_REGISTRY` sets a default private registry and `CATTLE_SYSTEM_CATALOG=bundled` (available as of v2.3.0) uses the packaged Rancher system charts.

{{% /accordion %}}

If you are installing Rancher v2.3.0+, the installation is complete.

If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in GitHub, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{}}/rancher/v2.x/en/installation/options/local-system-charts/#setting-up-system-charts-for-rancher-prior-to-v2-3-0).
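For reference, this is roughly what the Option A command could look like with the placeholders filled in — a sketch only, assuming a hypothetical private registry at `registry.example.com:5000` and Rancher v2.3.5; substitute your own registry address and [version tag]({{}}/rancher/v2.x/en/installation/options/server-tags/):

```
docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    -e CATTLE_SYSTEM_DEFAULT_REGISTRY=registry.example.com:5000 \
    -e CATTLE_SYSTEM_CATALOG=bundled \
    registry.example.com:5000/rancher/rancher:v2.3.5
```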
{{% /tab %}}
{{% /tabs %}}

diff --git a/content/rancher/v2.x/en/installation/options/air-gap-helm2/launch-kubernetes/_index.md b/content/rancher/v2.x/en/installation/options/air-gap-helm2/launch-kubernetes/_index.md
deleted file mode 100644
index 3faa3ac73c7..00000000000
--- a/content/rancher/v2.x/en/installation/options/air-gap-helm2/launch-kubernetes/_index.md
+++ /dev/null
@@ -1,82 +0,0 @@
---
title: '3. Install Kubernetes with RKE (Kubernetes Installs Only)'
weight: 300
aliases:
  - /rancher/v2.x/en/installation/air-gap-high-availability/install-kube
---

This section describes how to prepare to launch the Kubernetes cluster that is used to deploy the Rancher server for your air gapped environment.

Since a Kubernetes Installation requires a Kubernetes cluster, we will create a Kubernetes cluster using [Rancher Kubernetes Engine]({{}}/rke/latest/en/) (RKE). Before being able to start your Kubernetes cluster, you'll need to [install RKE]({{}}/rke/latest/en/installation/) and create an RKE config file.

- [A. Create an RKE Config File](#a-create-an-rke-config-file)
- [B. Run RKE](#b-run-rke)
- [C. Save Your Files](#c-save-your-files)

### A. Create an RKE Config File

From a system that can access ports 22/tcp and 6443/tcp on your host nodes, use the sample below to create a new file named `rancher-cluster.yml`. This file is a Rancher Kubernetes Engine configuration file (RKE config file), which is a configuration for the cluster you're deploying Rancher to.

Replace the values in the code sample below with the help of the _RKE Options_ table. Use the IP addresses or DNS names of the [3 nodes]({{}}/rancher/v2.x/en/installation/air-gap-high-availability/provision-hosts) you created.

> **Tip:** For more details on the options available, see the RKE [Config Options]({{}}/rke/latest/en/config-options/).
RKE Options
| Option | Required | Description |
| ------ | -------- | ----------- |
| `address` | ✓ | The DNS or IP address for the node within the air gap network. |
| `user` | ✓ | A user that can run docker commands. |
| `role` | ✓ | List of Kubernetes roles assigned to the node. |
| `internal_address` | optional¹ | The DNS or IP address used for internal cluster traffic. |
| `ssh_key_path` | | Path to SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`). |

> ¹ Some services like AWS EC2 require setting the `internal_address` if you want to use self-referencing security groups or firewalls.

```yaml
nodes:
  - address: 10.10.3.187 # node air gap network IP
    internal_address: 172.31.7.22 # node intra-cluster IP
    user: rancher
    role: ['controlplane', 'etcd', 'worker']
    ssh_key_path: /home/user/.ssh/id_rsa
  - address: 10.10.3.254 # node air gap network IP
    internal_address: 172.31.13.132 # node intra-cluster IP
    user: rancher
    role: ['controlplane', 'etcd', 'worker']
    ssh_key_path: /home/user/.ssh/id_rsa
  - address: 10.10.3.89 # node air gap network IP
    internal_address: 172.31.3.216 # node intra-cluster IP
    user: rancher
    role: ['controlplane', 'etcd', 'worker']
    ssh_key_path: /home/user/.ssh/id_rsa

private_registries:
  - url:  # private registry url
    user: rancher
    password: '*********'
    is_default: true
```

### B. Run RKE

After configuring `rancher-cluster.yml`, bring up your Kubernetes cluster:

```
rke up --config ./rancher-cluster.yml
```

### C. Save Your Files

> **Important**
> The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.

Save a copy of the following files in a secure location:

- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_rancher-cluster.yml`: The [Kubeconfig file]({{}}/rke/latest/en/kubeconfig/) for the cluster; this file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file]({{}}/rke/latest/en/installation/#kubernetes-cluster-state); this file contains credentials for full access to the cluster.

_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._ - -> **Note:** The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file. - -### [Next: Install Rancher]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher) diff --git a/content/rancher/v2.x/en/installation/options/air-gap-helm2/populate-private-registry/_index.md b/content/rancher/v2.x/en/installation/options/air-gap-helm2/populate-private-registry/_index.md deleted file mode 100644 index b96ca100b47..00000000000 --- a/content/rancher/v2.x/en/installation/options/air-gap-helm2/populate-private-registry/_index.md +++ /dev/null @@ -1,274 +0,0 @@ ---- -title: '2. Collect and Publish Images to your Private Registry' -weight: 200 -aliases: - - /rancher/v2.x/en/installation/air-gap-installation/prepare-private-reg/ - - /rancher/v2.x/en/installation/air-gap-high-availability/prepare-private-registry/ - - /rancher/v2.x/en/installation/air-gap-single-node/prepare-private-registry/ - - /rancher/v2.x/en/installation/air-gap-single-node/config-rancher-for-private-reg/ - - /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-for-private-reg/ ---- - -> **Prerequisites:** You must have a [private registry](https://docs.docker.com/registry/deploying/) available to use. -> -> **Note:** Populating the private registry with images is the same process for HA and Docker installations, the differences in this section is based on whether or not you are planning to provision a Windows cluster or not. - -By default, all images used to [provision Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/) or launch any [tools]({{}}/rancher/v2.x/en/cluster-admin/tools/) in Rancher, e.g. monitoring, pipelines, alerts, are pulled from Docker Hub. In an air gap installation of Rancher, you will need a private registry that is located somewhere accessible by your Rancher server. Then, you will load the registry with all the images. - -This section describes how to set up your private registry so that when you install Rancher, Rancher will pull all the required images from this registry. - -By default, we provide the steps of how to populate your private registry assuming you are provisioning Linux only clusters, but if you plan on provisioning any [Windows clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/), there are separate instructions to support the images needed for a Windows cluster. - -{{% tabs %}} -{{% tab "Linux Only Clusters" %}} - -For Rancher servers that will only provision Linux clusters, these are the steps to populate your private registry. - -A. Find the required assets for your Rancher version
-B. Collect all the required images
-C. Save the images to your workstation
D. Populate the private registry

### Prerequisites

These steps expect you to use a Linux workstation that has internet access, access to your private registry, and at least 20 GB of disk space.

If you will use ARM64 hosts, the registry must support manifests. As of April 2020, Amazon Elastic Container Registry does not support manifests.

### A. Find the required assets for your Rancher version

1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments. Click **Assets**.

2. From the release's **Assets** section, download the following files:

| Release File | Description |
| ------------ | ----------- |
| `rancher-images.txt` | This file contains a list of images needed to install Rancher, provision clusters and use Rancher tools. |
| `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. |
| `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. |

### B. Collect all the required images (For Kubernetes Installs using Rancher Generated Self-Signed Certificate)

In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS certificates, you must add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt` as well. You can skip this step if you are using your own certificates.

1. Fetch the latest `cert-manager` Helm chart and parse the template for image details:

    > **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/).

    ```plain
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    helm fetch jetstack/cert-manager --version v0.12.0
    helm template ./cert-manager-.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt
    ```

2. Sort the images list and remove duplicates so there is no overlap between the sources:

    ```plain
    sort -u rancher-images.txt -o rancher-images.txt
    ```

### C. Save the images to your workstation

1. Make `rancher-save-images.sh` an executable:
    ```
    chmod +x rancher-save-images.sh
    ```

1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images:
    ```plain
    ./rancher-save-images.sh --image-list ./rancher-images.txt
    ```
    **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-images.tar.gz`. Check that the output is in the directory.

### D. Populate the private registry

Move the images in the `rancher-images.tar.gz` to your private registry using the scripts to load the images. The `rancher-images.txt` is expected to be on the workstation in the same directory that you are running the `rancher-load-images.sh` script.

1. Log into your private registry if required:
    ```plain
    docker login 
    ```
1. Make `rancher-load-images.sh` an executable:
    ```
    chmod +x rancher-load-images.sh
    ```
1. Use `rancher-load-images.sh` to extract, tag, and push the images listed in `rancher-images.txt` from `rancher-images.tar.gz` to your private registry:
    ```plain
    ./rancher-load-images.sh --image-list ./rancher-images.txt --registry 
    ```
{{% /tab %}}
{{% tab "Linux and Windows Clusters" %}}

_Available as of v2.3.0_

For Rancher servers that will provision Linux and Windows clusters, there are distinct steps to populate your private registry for the Windows images and the Linux images. Since a Windows cluster is a mix of Linux and Windows nodes, the Linux images pushed into the private registry are pushed as manifests.

### Windows Steps

The Windows images need to be collected and pushed from a Windows Server workstation.

A. Find the required assets for your Rancher version<br/>
-B. Save the images to your Windows Server workstation
-C. Prepare the Docker daemon
D. Populate the private registry

{{% accordion label="Collecting and Populating Windows Images into the Private Registry"%}}

### Prerequisites

These steps expect you to use a Windows Server 1809 workstation that has internet access, access to your private registry, and at least 50 GB of disk space.

The workstation must have Docker 18.02+ in order to support manifests, which are required when provisioning Windows clusters.

Your registry must support manifests. As of April 2020, Amazon Elastic Container Registry does not support manifests.

### A. Find the required assets for your Rancher version

1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments.

2. From the release's "Assets" section, download the following files:

| Release File | Description |
| ------------ | ----------- |
| `rancher-windows-images.txt` | This file contains a list of Windows images needed to provision Windows clusters. |
| `rancher-save-images.ps1` | This script pulls all the images in the `rancher-windows-images.txt` from Docker Hub and saves all of the images as `rancher-windows-images.tar.gz`. |
| `rancher-load-images.ps1` | This script loads the images from the `rancher-windows-images.tar.gz` file and pushes them to your private registry. |

### B. Save the images to your Windows Server workstation

1. Using `powershell`, go to the directory that has the files that were downloaded in the previous step.

1. Run `rancher-save-images.ps1` to create a tarball of all the required images:

    ```plain
    ./rancher-save-images.ps1
    ```

    **Step Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-windows-images.tar.gz`. Check that the output is in the directory.

### C. Prepare the Docker daemon

Append your private registry address to the `allow-nondistributable-artifacts` config field in the Docker daemon (`C:\ProgramData\Docker\config\daemon.json`). Since the base images of the Windows images are maintained by the `mcr.microsoft.com` registry, this step is required because the layers in the Microsoft registry are missing from Docker Hub and need to be pulled into the private registry.

    ```
    {
      ...
      "allow-nondistributable-artifacts": [
        ...
        ""
      ]
      ...
    }
    ```

### D. Populate the private registry

Move the images in the `rancher-windows-images.tar.gz` to your private registry using the scripts to load the images. The `rancher-windows-images.txt` is expected to be on the workstation in the same directory that you are running the `rancher-load-images.ps1` script.

1. Using `powershell`, log into your private registry if required:
    ```plain
    docker login 
    ```

1. Using `powershell`, use `rancher-load-images.ps1` to extract, tag and push the images from `rancher-windows-images.tar.gz` to your private registry:
    ```plain
    ./rancher-load-images.ps1 --registry 
    ```

{{% /accordion %}}

### Linux Steps

The Linux images need to be collected and pushed from a Linux host, but this _must be done after_ populating the Windows images into the private registry. These steps differ from the Linux-only steps because the Linux images that are pushed will actually be manifests that support both Windows and Linux images.

A.
Find the required assets for your Rancher version
-B. Collect all the required images
-C. Save the images to your Linux workstation
D. Populate the private registry

{{% accordion label="Collecting and Populating Linux Images into the Private Registry" %}}

### Prerequisites

You must populate the private registry with the Windows images before populating the private registry with Linux images. If you have already populated the registry with Linux images, you will need to follow these instructions again, as they will publish manifests that support both Windows and Linux images.

These steps expect you to use a Linux workstation that has internet access, access to your private registry, and at least 20 GB of disk space.

The workstation must have Docker 18.02+ in order to support manifests, which are required when provisioning Windows clusters.

### A. Find the required assets for your Rancher version

1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments.

2. From the release's **Assets** section, download the following files, which are required to install Rancher in an air gap environment:

| Release File | Description |
| ------------ | ----------- |
| `rancher-images.txt` | This file contains a list of images needed to install Rancher, provision clusters and use Rancher tools. |
| `rancher-windows-images.txt` | This file contains a list of images needed to provision Windows clusters. |
| `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. |
| `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. |

### B. Collect all the required images

**For Kubernetes Installs using Rancher Generated Self-Signed Certificate:** In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS certificates, you must add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt` as well. You can skip this step if you are using your own certificates.

1. Fetch the latest `cert-manager` Helm chart and parse the template for image details:
    > **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/).
    ```plain
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    helm fetch jetstack/cert-manager --version v0.12.0
    helm template ./cert-manager-.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt
    ```

2. Sort the images list and remove duplicates so there is no overlap between the sources:
    ```plain
    sort -u rancher-images.txt -o rancher-images.txt
    ```

### C. Save the images to your workstation

1. Make `rancher-save-images.sh` an executable:
    ```
    chmod +x rancher-save-images.sh
    ```

1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images:
    ```plain
    ./rancher-save-images.sh --image-list ./rancher-images.txt
    ```

    **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-images.tar.gz`.
    Check that the output is in the directory.

### D. Populate the private registry

Move the images in `rancher-images.tar.gz` to your private registry using the `rancher-load-images.sh` script to load the images. The `rancher-images.txt` / `rancher-windows-images.txt` image list is expected to be on the workstation in the same directory that you are running the `rancher-load-images.sh` script.

1. Log into your private registry if required:
    ```plain
    docker login 
    ```

1. Make `rancher-load-images.sh` an executable:
    ```
    chmod +x rancher-load-images.sh
    ```

1. Use `rancher-load-images.sh` to extract, tag and push the images from `rancher-images.tar.gz` to your private registry:
    ```plain
    ./rancher-load-images.sh --image-list ./rancher-images.txt \
    --windows-image-list ./rancher-windows-images.txt \
    --registry 
    ```

{{% /accordion %}}

{{% /tab %}}
{{% /tabs %}}

### [Next: Kubernetes Installs - Launch a Kubernetes Cluster with RKE]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/)

### [Next: Docker Installs - Install Rancher]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/)

diff --git a/content/rancher/v2.x/en/installation/options/air-gap-helm2/prepare-nodes/_index.md b/content/rancher/v2.x/en/installation/options/air-gap-helm2/prepare-nodes/_index.md
deleted file mode 100644
index 554c05bd98b..00000000000
--- a/content/rancher/v2.x/en/installation/options/air-gap-helm2/prepare-nodes/_index.md
+++ /dev/null
@@ -1,105 +0,0 @@
---
title: '1. Prepare your Node(s)'
weight: 100
aliases:
  - /rancher/v2.x/en/installation/air-gap-high-availability/provision-hosts
  - /rancher/v2.x/en/installation/air-gap-single-node/provision-host
---

This section describes how to prepare your node(s) to install Rancher for your air gapped environment. An air gapped environment could be one where the Rancher server will be installed offline, behind a firewall, or behind a proxy. There are _tabs_ for either a high availability (recommended) or a Docker installation.

# Prerequisites

{{% tabs %}}
{{% tab "Kubernetes Install (Recommended)" %}}

### OS, Docker, Hardware, and Networking

Make sure that your node(s) fulfill the general [installation requirements.]({{}}/rancher/v2.x/en/installation/requirements/)

### Private Registry

Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machines.

If you need help with creating a private registry, please refer to the [Docker documentation](https://docs.docker.com/registry/).

### CLI Tools

The following CLI tools are required for the Kubernetes Install. Make sure these tools are installed on your workstation and available in your `$PATH`.

- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) - Kubernetes command-line tool.
- [rke]({{}}/rke/latest/en/installation/) - Rancher Kubernetes Engine, CLI for building Kubernetes clusters.
- [helm](https://docs.helm.sh/using_helm/#installing-helm) - Package management for Kubernetes. Refer to the [Helm version requirements]({{}}/rancher/v2.x/en/installation/options/helm-version) to choose a version of Helm to install Rancher.

A quick way to confirm these tools are installed and on your `$PATH` is sketched below.
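A minimal sanity check — a sketch only, assuming each tool was installed with its standard installer — is to print each tool's version; a "command not found" error means the tool still needs to be installed or added to `$PATH`:

```plain
kubectl version --client   # client version only; no cluster connection needed
rke --version
helm version --client      # Helm 2 syntax; on Helm 3 use plain `helm version`
```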
- -{{% /tab %}} -{{% tab "Docker Install" %}} - -### OS, Docker, Hardware, and Networking - -Make sure that your node(s) fulfill the general [installation requirements.]({{}}/rancher/v2.x/en/installation/requirements/) - -### Private Registry - -Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machines. - -If you need help with creating a private registry, please refer to the [Docker documentation](https://docs.docker.com/registry/). -{{% /tab %}} -{{% /tabs %}} - -# Set up Infrastructure - -{{% tabs %}} -{{% tab "Kubernetes Install (Recommended)" %}} - -Rancher recommends installing Rancher on a Kubernetes cluster. A highly available Kubernetes install is comprised of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails. - -### Recommended Architecture - -- DNS for Rancher should resolve to a layer 4 load balancer -- The Load Balancer should forward port TCP/80 and TCP/443 to all 3 nodes in the Kubernetes cluster. -- The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443. -- The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment. - -
Rancher installed on a Kubernetes cluster with layer 4 load balancer, depicting SSL termination at ingress controllers
![Rancher HA]({{}}/img/rancher/ha/rancher2ha.svg)

### A. Provision three air gapped Linux hosts according to our requirements

These hosts will be disconnected from the internet, but must be able to connect to your private registry.

View hardware and software requirements for each of your cluster nodes in [Requirements]({{}}/rancher/v2.x/en/installation/requirements).

### B. Set up your Load Balancer

When setting up the Kubernetes cluster that will run the Rancher server components, an Ingress controller pod will be deployed on each of your nodes. The Ingress controller pods are bound to ports TCP/80 and TCP/443 on the host network and are the entry point for HTTPS traffic to the Rancher server.

You will need to configure a load balancer as a basic Layer 4 TCP forwarder to direct traffic to these ingress controller pods. The exact configuration will vary depending on your environment.

> **Important:**
> Only use this load balancer (i.e., the `local` cluster Ingress) to load balance the Rancher server. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps.

**Load Balancer Configuration Samples:**

- For an example showing how to set up an NGINX load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nginx)
- For an example showing how to set up an Amazon NLB load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nlb)

{{% /tab %}}
{{% tab "Docker Install" %}}

The Docker installation is for Rancher users who want to test out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server.

> **Important:** If you install Rancher following the Docker installation guide, there is no upgrade path to transition your Docker installation to a Kubernetes Installation.

Instead of running the Docker installation, you have the option to follow the Kubernetes Install guide, but only use one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it a Kubernetes Installation.

### A. Provision a single, air gapped Linux host according to our Requirements

This host will be disconnected from the internet, but must be able to connect to your private registry; a quick connectivity check is sketched below.

View hardware and software requirements for each of your cluster nodes in [Requirements]({{}}/rancher/v2.x/en/installation/requirements).
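One simple way to confirm that an air gapped host can actually reach the private registry — a sketch only, assuming a hypothetical registry at `registry.example.com:5000` to which you have already pushed the Rancher images — is to log in and pull one of the images you published:

```plain
docker login registry.example.com:5000
docker pull registry.example.com:5000/rancher/rancher:v2.3.5   # substitute an image and tag you actually pushed
```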
- -{{% /tab %}} -{{% /tabs %}} - -### [Next: Collect and Publish Images to your Private Registry]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/) diff --git a/content/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-certificate-recognizedca/_index.md b/content/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-certificate-recognizedca/_index.md deleted file mode 100644 index b67ccc5370e..00000000000 --- a/content/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-certificate-recognizedca/_index.md +++ /dev/null @@ -1,160 +0,0 @@ ---- -title: Template for an RKE Cluster with a Certificate Signed by Recognized CA and a Layer 4 Load Balancer -weight: 3 ---- - -RKE uses a cluster.yml file to install and configure your Kubernetes cluster. - -The following template can be used for the cluster.yml if you have a setup with: - -- Certificate signed by a recognized CA -- Layer 4 load balancer -- [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/) - -> For more options, refer to [RKE Documentation: Config Options]({{}}/rke/latest/en/config-options/). - -```yaml -nodes: - - address: # hostname or IP to access nodes - user: # root user (usually 'root') - role: [controlplane,etcd,worker] # K8s roles for node - ssh_key_path: # path to PEM file - - address: - user: - role: [controlplane,etcd,worker] - ssh_key_path: - - address: - user: - role: [controlplane,etcd,worker] - ssh_key_path: - -services: - etcd: - snapshot: true - creation: 6h - retention: 24h - -addons: |- - --- - kind: Namespace - apiVersion: v1 - metadata: - name: cattle-system - --- - kind: ServiceAccount - apiVersion: v1 - metadata: - name: cattle-admin - namespace: cattle-system - --- - kind: ClusterRoleBinding - apiVersion: rbac.authorization.k8s.io/v1 - metadata: - name: cattle-crb - namespace: cattle-system - subjects: - - kind: ServiceAccount - name: cattle-admin - namespace: cattle-system - roleRef: - kind: ClusterRole - name: cluster-admin - apiGroup: rbac.authorization.k8s.io - --- - apiVersion: v1 - kind: Secret - metadata: - name: cattle-keys-ingress - namespace: cattle-system - type: Opaque - data: - tls.crt: # ssl cert for ingress. If self-signed, must be signed by same CA as cattle server - tls.key: # ssl key for ingress. 
If self-signed, must be signed by same CA as cattle server - --- - apiVersion: v1 - kind: Service - metadata: - namespace: cattle-system - name: cattle-service - labels: - app: cattle - spec: - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - - port: 443 - targetPort: 443 - protocol: TCP - name: https - selector: - app: cattle - --- - apiVersion: extensions/v1beta1 - kind: Ingress - metadata: - namespace: cattle-system - name: cattle-ingress-http - annotations: - nginx.ingress.kubernetes.io/proxy-connect-timeout: "30" - nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for ws to remain shell window open - nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for ws to remain shell window open - spec: - rules: - - host: # FQDN to access cattle server - http: - paths: - - backend: - serviceName: cattle-service - servicePort: 80 - tls: - - secretName: cattle-keys-ingress - hosts: - - # FQDN to access cattle server - --- - kind: Deployment - apiVersion: extensions/v1beta1 - metadata: - namespace: cattle-system - name: cattle - spec: - replicas: 1 - template: - metadata: - labels: - app: cattle - spec: - serviceAccountName: cattle-admin - containers: - # Rancher install via RKE addons is only supported up to v2.0.8 - - image: rancher/rancher:v2.0.8 - args: - - --no-cacerts - imagePullPolicy: Always - name: cattle-server - # env: - # - name: HTTP_PROXY - # value: "http://your_proxy_address:port" - # - name: HTTPS_PROXY - # value: "http://your_proxy_address:port" - # - name: NO_PROXY - # value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,your_network_ranges_that_dont_need_proxy_to_access" - livenessProbe: - httpGet: - path: /ping - port: 80 - initialDelaySeconds: 60 - periodSeconds: 60 - readinessProbe: - httpGet: - path: /ping - port: 80 - initialDelaySeconds: 20 - periodSeconds: 10 - ports: - - containerPort: 80 - protocol: TCP - - containerPort: 443 - protocol: TCP -``` \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-certificate/_index.md b/content/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-certificate/_index.md deleted file mode 100644 index 9b24f5b39d5..00000000000 --- a/content/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-certificate/_index.md +++ /dev/null @@ -1,175 +0,0 @@ ---- -title: Template for an RKE Cluster with a Self-signed Certificate and Layer 4 Load Balancer -weight: 2 ---- -RKE uses a cluster.yml file to install and configure your Kubernetes cluster. - -The following template can be used for the cluster.yml if you have a setup with: - -- Self-signed SSL -- Layer 4 load balancer -- [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/) - -> For more options, refer to [RKE Documentation: Config Options]({{}}/rke/latest/en/config-options/). 
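Note that the `tls.crt`, `tls.key`, and `cacerts.pem` values in the `data` fields of the Secrets in this template must be base64-encoded, single-line strings. A minimal sketch of producing them on Linux, assuming your PEM files are named `cert.pem`, `key.pem`, and `cacerts.pem` (on macOS, omit the `-w0` flag, as its base64 does not wrap by default):

```
base64 -w0 cert.pem      # value for tls.crt
base64 -w0 key.pem       # value for tls.key
base64 -w0 cacerts.pem   # value for cacerts.pem
```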
- -```yaml -nodes: - - address: # hostname or IP to access nodes - user: # root user (usually 'root') - role: [controlplane,etcd,worker] # K8s roles for node - ssh_key_path: # path to PEM file - - address: - user: - role: [controlplane,etcd,worker] - ssh_key_path: - - address: - user: - role: [controlplane,etcd,worker] - ssh_key_path: - -services: - etcd: - snapshot: true - creation: 6h - retention: 24h - -addons: |- - --- - kind: Namespace - apiVersion: v1 - metadata: - name: cattle-system - --- - kind: ServiceAccount - apiVersion: v1 - metadata: - name: cattle-admin - namespace: cattle-system - --- - kind: ClusterRoleBinding - apiVersion: rbac.authorization.k8s.io/v1 - metadata: - name: cattle-crb - namespace: cattle-system - subjects: - - kind: ServiceAccount - name: cattle-admin - namespace: cattle-system - roleRef: - kind: ClusterRole - name: cluster-admin - apiGroup: rbac.authorization.k8s.io - --- - apiVersion: v1 - kind: Secret - metadata: - name: cattle-keys-ingress - namespace: cattle-system - type: Opaque - data: - tls.crt: # ssl cert for ingress. If selfsigned, must be signed by same CA as cattle server - tls.key: # ssl key for ingress. If selfsigned, must be signed by same CA as cattle server - --- - apiVersion: v1 - kind: Secret - metadata: - name: cattle-keys-server - namespace: cattle-system - type: Opaque - data: - cacerts.pem: # CA cert used to sign cattle server cert and key - --- - apiVersion: v1 - kind: Service - metadata: - namespace: cattle-system - name: cattle-service - labels: - app: cattle - spec: - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - - port: 443 - targetPort: 443 - protocol: TCP - name: https - selector: - app: cattle - --- - apiVersion: extensions/v1beta1 - kind: Ingress - metadata: - namespace: cattle-system - name: cattle-ingress-http - annotations: - nginx.ingress.kubernetes.io/proxy-connect-timeout: "30" - nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for ws to remain shell window open - nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for ws to remain shell window open - spec: - rules: - - host: # FQDN to access cattle server - http: - paths: - - backend: - serviceName: cattle-service - servicePort: 80 - tls: - - secretName: cattle-keys-ingress - hosts: - - # FQDN to access cattle server - --- - kind: Deployment - apiVersion: extensions/v1beta1 - metadata: - namespace: cattle-system - name: cattle - spec: - replicas: 1 - template: - metadata: - labels: - app: cattle - spec: - serviceAccountName: cattle-admin - containers: - # Rancher install via RKE addons is only supported up to v2.0.8 - - image: rancher/rancher:v2.0.8 - imagePullPolicy: Always - name: cattle-server - # env: - # - name: HTTP_PROXY - # value: "http://your_proxy_address:port" - # - name: HTTPS_PROXY - # value: "http://your_proxy_address:port" - # - name: NO_PROXY - # value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,your_network_ranges_that_dont_need_proxy_to_access" - livenessProbe: - httpGet: - path: /ping - port: 80 - initialDelaySeconds: 60 - periodSeconds: 60 - readinessProbe: - httpGet: - path: /ping - port: 80 - initialDelaySeconds: 20 - periodSeconds: 10 - ports: - - containerPort: 80 - protocol: TCP - - containerPort: 443 - protocol: TCP - volumeMounts: - - mountPath: /etc/rancher/ssl - name: cattle-keys-volume - readOnly: true - volumes: - - name: cattle-keys-volume - secret: - defaultMode: 420 - secretName: cattle-keys-server -``` \ No newline at end of file diff --git 
a/content/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-externalssl-certificate/_index.md b/content/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-externalssl-certificate/_index.md deleted file mode 100644 index 74f1b8e2e6d..00000000000 --- a/content/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-externalssl-certificate/_index.md +++ /dev/null @@ -1,158 +0,0 @@ ---- -title: Template for an RKE Cluster with a Self-signed Certificate and SSL Termination on Layer 7 Load Balancer -weight: 3 ---- - -RKE uses a cluster.yml file to install and configure your Kubernetes cluster. - -This template is intended to be used for RKE add-on installs, which are only supported up to Rancher v2.0.8. Please use the Rancher Helm chart if you are installing a newer Rancher version. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). - -The following template can be used for the cluster.yml if you have a setup with: - -- Layer 7 load balancer with self-signed SSL termination (HTTPS) -- [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/) - -> For more options, refer to [RKE Documentation: Config Options]({{}}/rke/latest/en/config-options/). - -```yaml -nodes: - - address: # hostname or IP to access nodes - user: # root user (usually 'root') - role: [controlplane,etcd,worker] # K8s roles for node - ssh_key_path: # path to PEM file - - address: - user: - role: [controlplane,etcd,worker] - ssh_key_path: - - address: - user: - role: [controlplane,etcd,worker] - ssh_key_path: - -services: - etcd: - snapshot: true - creation: 6h - retention: 24h - -addons: |- - --- - kind: Namespace - apiVersion: v1 - metadata: - name: cattle-system - --- - kind: ServiceAccount - apiVersion: v1 - metadata: - name: cattle-admin - namespace: cattle-system - --- - kind: ClusterRoleBinding - apiVersion: rbac.authorization.k8s.io/v1 - metadata: - name: cattle-crb - namespace: cattle-system - subjects: - - kind: ServiceAccount - name: cattle-admin - namespace: cattle-system - roleRef: - kind: ClusterRole - name: cluster-admin - apiGroup: rbac.authorization.k8s.io - --- - apiVersion: v1 - kind: Secret - metadata: - name: cattle-keys-server - namespace: cattle-system - type: Opaque - data: - cacerts.pem: # CA cert used to sign cattle server cert and key - --- - apiVersion: v1 - kind: Service - metadata: - namespace: cattle-system - name: cattle-service - labels: - app: cattle - spec: - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - app: cattle - --- - apiVersion: extensions/v1beta1 - kind: Ingress - metadata: - namespace: cattle-system - name: cattle-ingress-http - annotations: - nginx.ingress.kubernetes.io/proxy-connect-timeout: "30" - nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for ws to remain shell window open - nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for ws to remain shell window open - nginx.ingress.kubernetes.io/ssl-redirect: "false" # Disable redirect to ssl - spec: - rules: - - host: - http: - paths: - - backend: - serviceName: cattle-service - servicePort: 80 - --- - kind: Deployment - apiVersion: extensions/v1beta1 - metadata: - namespace: cattle-system - name: cattle - spec: - replicas: 1 - template: - metadata: - labels: - app: cattle - spec: - serviceAccountName: cattle-admin - containers: - # Rancher install via RKE addons is only supported up to v2.0.8 - - image: 
rancher/rancher:v2.0.8 - imagePullPolicy: Always - name: cattle-server - # env: - # - name: HTTP_PROXY - # value: "http://your_proxy_address:port" - # - name: HTTPS_PROXY - # value: "http://your_proxy_address:port" - # - name: NO_PROXY - # value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,your_network_ranges_that_dont_need_proxy_to_access" - livenessProbe: - httpGet: - path: /ping - port: 80 - initialDelaySeconds: 60 - periodSeconds: 60 - readinessProbe: - httpGet: - path: /ping - port: 80 - initialDelaySeconds: 20 - periodSeconds: 10 - ports: - - containerPort: 80 - protocol: TCP - volumeMounts: - - mountPath: /etc/rancher/ssl - name: cattle-keys-volume - readOnly: true - volumes: - - name: cattle-keys-volume - secret: - defaultMode: 420 - secretName: cattle-keys-server -``` \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-externalssl-recognizedca/_index.md b/content/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-externalssl-recognizedca/_index.md deleted file mode 100644 index 4bb8694b284..00000000000 --- a/content/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-externalssl-recognizedca/_index.md +++ /dev/null @@ -1,142 +0,0 @@ ---- -title: Template for an RKE Cluster with a Recognized CA Certificate and SSL Termination on Layer 7 Load Balancer -weight: 4 ---- - -RKE uses a cluster.yml file to install and configure your Kubernetes cluster. - -This template is intended to be used for RKE add-on installs, which are only supported up to Rancher v2.0.8. Please use the Rancher Helm chart if you are installing a newer Rancher version. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). - -The following template can be used for the cluster.yml if you have a setup with: - -- Layer 7 load balancer with SSL termination (HTTPS) -- [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/) - -> For more options, refer to [RKE Documentation: Config Options]({{}}/rke/latest/en/config-options/). 
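Once the placeholders in the template below are filled in and the file is saved as `cluster.yml`, the cluster would be brought up with RKE in the usual way — for example:

```
rke up --config ./cluster.yml
```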
- -```yaml -nodes: - - address: # hostname or IP to access nodes - user: # root user (usually 'root') - role: [controlplane,etcd,worker] # K8s roles for node - ssh_key_path: # path to PEM file - - address: - user: - role: [controlplane,etcd,worker] - ssh_key_path: - - address: - user: - role: [controlplane,etcd,worker] - ssh_key_path: - -services: - etcd: - snapshot: true - creation: 6h - retention: 24h - -addons: |- - --- - kind: Namespace - apiVersion: v1 - metadata: - name: cattle-system - --- - kind: ServiceAccount - apiVersion: v1 - metadata: - name: cattle-admin - namespace: cattle-system - --- - kind: ClusterRoleBinding - apiVersion: rbac.authorization.k8s.io/v1 - metadata: - name: cattle-crb - namespace: cattle-system - subjects: - - kind: ServiceAccount - name: cattle-admin - namespace: cattle-system - roleRef: - kind: ClusterRole - name: cluster-admin - apiGroup: rbac.authorization.k8s.io - --- - apiVersion: v1 - kind: Service - metadata: - namespace: cattle-system - name: cattle-service - labels: - app: cattle - spec: - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - app: cattle - --- - apiVersion: extensions/v1beta1 - kind: Ingress - metadata: - namespace: cattle-system - name: cattle-ingress-http - annotations: - nginx.ingress.kubernetes.io/proxy-connect-timeout: "30" - nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for ws to remain shell window open - nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for ws to remain shell window open - nginx.ingress.kubernetes.io/ssl-redirect: "false" # Disable redirect to ssl - spec: - rules: - - host: - http: - paths: - - backend: - serviceName: cattle-service - servicePort: 80 - --- - kind: Deployment - apiVersion: extensions/v1beta1 - metadata: - namespace: cattle-system - name: cattle - spec: - replicas: 1 - template: - metadata: - labels: - app: cattle - spec: - serviceAccountName: cattle-admin - containers: - # Rancher install via RKE addons is only supported up to v2.0.8 - - image: rancher/rancher:v2.0.8 - args: - - --no-cacerts - imagePullPolicy: Always - name: cattle-server - # env: - # - name: HTTP_PROXY - # value: "http://your_proxy_address:port" - # - name: HTTPS_PROXY - # value: "http://your_proxy_address:port" - # - name: NO_PROXY - # value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,your_network_ranges_that_dont_need_proxy_to_access" - livenessProbe: - httpGet: - path: /ping - port: 80 - initialDelaySeconds: 60 - periodSeconds: 60 - readinessProbe: - httpGet: - path: /ping - port: 80 - initialDelaySeconds: 20 - periodSeconds: 10 - ports: - - containerPort: 80 - protocol: TCP -``` \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/options/cluster-yml-templates/_index.md b/content/rancher/v2.x/en/installation/options/cluster-yml-templates/_index.md deleted file mode 100644 index 2e4c7d93f60..00000000000 --- a/content/rancher/v2.x/en/installation/options/cluster-yml-templates/_index.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: cluster.yml Templates -weight: 1 ---- - -RKE uses a cluster.yml file to install and configure your Kubernetes cluster. This section provides templates that can be used to create the cluster.yml. - -> For more cluster.yml options, refer to the[RKE configuration reference.]({{}}/rke/latest/en/config-options/). 
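For orientation, the smallest useful cluster.yml is little more than a node list; everything else falls back to RKE defaults. A minimal sketch (the address, user, and key path are placeholders):

```yaml
nodes:
  - address: 203.0.113.10            # hostname or IP of the node
    user: root                       # per the templates above, usually 'root'
    role: [controlplane,etcd,worker] # K8s roles for the node
    ssh_key_path: ~/.ssh/id_rsa      # path to the matching private key
```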
\ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/options/feature-flags/_index.md b/content/rancher/v2.x/en/installation/options/feature-flags/_index.md deleted file mode 100644 index 7df5e71056c..00000000000 --- a/content/rancher/v2.x/en/installation/options/feature-flags/_index.md +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Enabling Experimental Features -weight: 8000 ---- - -_Available as of v2.3.0_ - -Rancher includes some features that are experimental and disabled by default. You might want to enable these features, for example, if you decide that the benefits of using an [unsupported storage type]({{}}/rancher/v2.x/en/installation/options/feature-flags/enable-not-default-storage-drivers) outweigh the risk of using an untested feature. Feature flags were introduced to let you try these features that are not enabled by default. - -The features can be enabled in three ways: - -- [Enable features when starting Rancher.](#enabling-features-when-starting-rancher) When installing Rancher with a CLI, you can use a feature flag to enable a feature by default. -- [Enable features from the Rancher UI](#enabling-features-with-the-rancher-ui) in Rancher v2.3.3+ by going to the **Settings** page. -- [Enable features with the Rancher API](#enabling-features-with-the-rancher-api) after installing Rancher. - -Each feature has two values: - -- A default value, which can be configured with a flag or environment variable from the command line -- A set value, which can be configured with the Rancher API or UI - -If no value has been set, Rancher uses the default value. - -Because the API sets the actual value and the command line sets only the default value, enabling or disabling a feature with the API or UI overrides any value set with the command line. - -For example, if you install Rancher, then set a feature flag to true with the Rancher API, then upgrade Rancher with a command that sets the feature flag to false, the default value will still be false, but the feature will still be enabled because it was set with the Rancher API. If you then deleted the set value (true) with the Rancher API, setting it to NULL, the default value (false) would take effect. - -> **Note:** As of v2.4.0, some feature flags may require a restart of the Rancher server container. Features that require a restart are marked in the table in these docs and in the UI. - -The following is a list of the feature flags available in Rancher: - -- `dashboard`: This feature enables the new experimental UI that has a new look and feel. The dashboard also leverages a new API in Rancher which allows the UI to access the default Kubernetes resources without any intervention from Rancher. -- `istio-virtual-service-ui`: This feature enables a [UI to create, read, update, and delete Istio virtual services and destination rules]({{}}/rancher/v2.x/en/installation/options/feature-flags/istio-virtual-service-ui), which are traffic management features of Istio. -- `proxy`: This feature enables Rancher to use a new simplified code base for the proxy, which can help enhance performance and security. The proxy feature is known to have issues with Helm deployments: it prevents catalog applications from being deployed, including Rancher's own tools such as monitoring, logging, and Istio.
-- `unsupported-storage-drivers`: This feature [allows unsupported storage drivers.]({{}}/rancher/v2.x/en/installation/options/feature-flags/enable-not-default-storage-drivers) In other words, it enables types for storage providers and provisioners that are not enabled by default. - -The below table shows the availability and default value for feature flags in Rancher: - -| Feature Flag Name | Default Value | Status | Available as of | Rancher Restart Required? | -| ----------------------------- | ------------- | ------------ | --------------- |---| -| `dashboard` | `true` | Experimental | v2.4.0 | x | -| `istio-virtual-service-ui` | `false` | Experimental | v2.3.0 | | -| `istio-virtual-service-ui` | `true` | GA | v2.3.2 | | -| `proxy` | `false` | Experimental | v2.4.0 | | -| `unsupported-storage-drivers` | `false` | Experimental | v2.3.0 | | - -# Enabling Features when Starting Rancher - -When you install Rancher, enable the feature you want with a feature flag. The command is different depending on whether you are installing Rancher on a single node or if you are doing a Kubernetes Installation of Rancher. - -> **Note:** Values set from the Rancher API will override the value passed in through the command line. - -{{% tabs %}} -{{% tab "Kubernetes Install" %}} -When installing Rancher with a Helm chart, use the `--features` option. In the below example, two features are enabled by passing the feature flag names in a comma-separated list: - -``` -helm install rancher-latest/rancher \ - --name rancher \ - --namespace cattle-system \ - --set hostname=rancher.my.org \ - --set 'extraEnv[0].name=CATTLE_FEATURES' # Available as of v2.3.0 - --set 'extraEnv[0].value=<FEATURE_FLAG_1>=true,<FEATURE_FLAG_2>=true' # Available as of v2.3.0 -``` - -Note: If you are installing an alpha version, Helm requires adding the `--devel` option to the command. - -### Rendering the Helm Chart for Air Gap Installations - -For an air gap installation of Rancher, you need to add a Helm chart repository and render a Helm template before installing Rancher with Helm. For details, refer to the [air gap installation documentation.]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher) - -Here is an example of a command for passing in the feature flag names when rendering the Helm template. In the below example, two features are enabled by passing the feature flag names in a comma-separated list. - -The Helm 3 command is as follows: - -``` -helm template rancher ./rancher-<VERSION>.tgz --output-dir . \ - --namespace cattle-system \ - --set hostname=<RANCHER.YOURDOMAIN.COM> \ - --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \ - --set ingress.tls.source=secret \ - --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Available as of v2.2.0, set a default private registry to be used in Rancher - --set useBundledSystemChart=true # Available as of v2.3.0, use the packaged Rancher system charts - --set 'extraEnv[0].name=CATTLE_FEATURES' # Available as of v2.3.0 - --set 'extraEnv[0].value=<FEATURE_FLAG_1>=true,<FEATURE_FLAG_2>=true' # Available as of v2.3.0 -``` - -The Helm 2 command is as follows: - -``` -helm template ./rancher-<VERSION>.tgz --output-dir .
\ - --name rancher \ - --namespace cattle-system \ - --set hostname=<RANCHER.YOURDOMAIN.COM> \ - --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \ - --set ingress.tls.source=secret \ - --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Available as of v2.2.0, set a default private registry to be used in Rancher - --set useBundledSystemChart=true # Available as of v2.3.0, use the packaged Rancher system charts - --set 'extraEnv[0].name=CATTLE_FEATURES' # Available as of v2.3.0 - --set 'extraEnv[0].value=<FEATURE_FLAG_1>=true,<FEATURE_FLAG_2>=true' # Available as of v2.3.0 -``` - -{{% /tab %}} -{{% tab "Docker Install" %}} -When installing Rancher with Docker, use the `--features` option. In the below example, two features are enabled by passing the feature flag names in a comma-separated list: - -``` -docker run -d -p 80:80 -p 443:443 \ - --restart=unless-stopped \ - rancher/rancher:rancher-latest \ - --features=<FEATURE_FLAG_1>=true,<FEATURE_FLAG_2>=true # Available as of v2.3.0 -``` - -{{% /tab %}} -{{% /tabs %}} - -# Enabling Features with the Rancher UI - -_Available as of Rancher v2.3.3_ - -1. Go to the **Global** view and click **Settings.** -1. Click the **Feature Flags** tab. You will see a list of experimental features. -1. To enable a feature, go to the disabled feature you want to enable and click **⋮ > Activate.** - -**Result:** The feature is enabled. - -### Disabling Features with the Rancher UI - -1. Go to the **Global** view and click **Settings.** -1. Click the **Feature Flags** tab. You will see a list of experimental features. -1. To disable a feature, go to the enabled feature you want to disable and click **⋮ > Deactivate.** - -**Result:** The feature is disabled. - -# Enabling Features with the Rancher API - -1. Go to `/v3/features`. -1. In the `data` section, you will see an array containing all of the features that can be turned on with feature flags. The name of the feature is in the `id` field. Click the name of the feature you want to enable. -1. In the upper left corner of the screen, under **Operations,** click **Edit.** -1. In the **Value** drop-down menu, click **True.** -1. Click **Show Request.** -1. Click **Send Request.** -1. Click **Close.** - -**Result:** The feature is enabled. - -### Disabling Features with the Rancher API - -1. Go to `/v3/features`. -1. In the `data` section, you will see an array containing all of the features that can be turned on with feature flags. The name of the feature is in the `id` field. Click the name of the feature you want to disable. -1. In the upper left corner of the screen, under **Operations,** click **Edit.** -1. In the **Value** drop-down menu, click **False.** -1. Click **Show Request.** -1. Click **Send Request.** -1. Click **Close.** - -**Result:** The feature is disabled. diff --git a/content/rancher/v2.x/en/installation/options/feature-flags/enable-not-default-storage-drivers/_index.md b/content/rancher/v2.x/en/installation/options/feature-flags/enable-not-default-storage-drivers/_index.md deleted file mode 100644 index 8d254d8bd6d..00000000000 --- a/content/rancher/v2.x/en/installation/options/feature-flags/enable-not-default-storage-drivers/_index.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: Allow Unsupported Storage Drivers -weight: 1 -aliases: - - /rancher/v2.x/en/admin-settings/feature-flags/enable-not-default-storage-drivers ---- -_Available as of v2.3.0_ - -This feature allows you to use types for storage providers and provisioners that are not enabled by default.
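For instance, on a Docker install the flag can be passed when the server container starts. A minimal sketch (the image tag is illustrative; pin it to your Rancher version):

```
docker run -d -p 80:80 -p 443:443 \
  --restart=unless-stopped \
  rancher/rancher:latest \
  --features=unsupported-storage-drivers=true
```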
- -To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.]({{}}/rancher/v2.x/en/installation/options/feature-flags/) - -Environment Variable Key | Default Value | Description ----|---|--- - `unsupported-storage-drivers` | `false` | This feature enables types for storage providers and provisioners that are not enabled by default. - -### Types for Persistent Volume Plugins that are Enabled by Default -Below is a list of storage types for persistent volume plugins that are enabled by default. When enabling this feature flag, any persistent volume plugins that are not on this list are considered experimental and unsupported: - -Name | Plugin ---------|---------- -Amazon EBS Disk | `aws-ebs` -AzureFile | `azure-file` -AzureDisk | `azure-disk` -Google Persistent Disk | `gce-pd` -Longhorn | `flex-volume-longhorn` -VMware vSphere Volume | `vsphere-volume` -Local | `local` -Network File System | `nfs` -hostPath | `host-path` - -### Types for StorageClass that are Enabled by Default -Below is a list of storage types for a StorageClass that are enabled by default. When enabling this feature flag, any persistent volume plugins that are not on this list are considered experimental and unsupported: - -Name | Plugin ---------|-------- -Amazon EBS Disk | `aws-ebs` -AzureFile | `azure-file` -AzureDisk | `azure-disk` -Google Persistent Disk | `gce-pd` -Longhorn | `flex-volume-longhorn` -VMware vSphere Volume | `vsphere-volume` -Local | `local` \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/options/feature-flags/istio-virtual-service-ui/_index.md b/content/rancher/v2.x/en/installation/options/feature-flags/istio-virtual-service-ui/_index.md deleted file mode 100644 index f631b54d2ee..00000000000 --- a/content/rancher/v2.x/en/installation/options/feature-flags/istio-virtual-service-ui/_index.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -title: UI for Istio Virtual Services and Destination Rules -weight: 2 -aliases: - - /rancher/v2.x/en/admin-settings/feature-flags/istio-virtual-service-ui ---- -_Available as of v2.3.0_ - -This feature enables a UI that lets you create, read, update and delete virtual services and destination rules, which are traffic management features of Istio. - -> **Prerequisite:** Turning on this feature does not enable Istio. A cluster administrator needs to [enable Istio for the cluster]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup) in order to use the feature. - -To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.]({{}}/rancher/v2.x/en/installation/options/feature-flags/) - -Environment Variable Key | Default Value | Status | Available as of ----|---|---|--- -`istio-virtual-service-ui` |`false` | Experimental | v2.3.0 -`istio-virtual-service-ui` | `true` | GA | v2.3.2 - -# About this Feature - -A central advantage of Istio's traffic management features is that they allow dynamic request routing, which is useful for canary deployments, blue/green deployments, or A/B testing. - -When enabled, this feature turns on a page that lets you configure some traffic management features of Istio using the Rancher UI. Without this feature, you need to use `kubectl` to manage traffic with Istio. 
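As an illustration of what the UI replaces, a canary-style traffic split is otherwise applied by hand with `kubectl`. A minimal sketch (the service names, namespace, and weights are placeholders, not taken from these docs):

```
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
  namespace: my-project
spec:
  hosts:
    - my-service
  http:
    - route:
        - destination:           # send most traffic to the stable version
            host: my-service-v1
          weight: 90
        - destination:           # and a small share to the canary
            host: my-service-v2
          weight: 10
EOF
```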
- -The feature enables two UI tabs: one tab for **Virtual Services** and another for **Destination Rules.** - -- **Virtual services** intercept and direct traffic to your Kubernetes services, allowing you to direct percentages of traffic from a request to different services. You can use them to define a set of routing rules to apply when a host is addressed. For details, refer to the [Istio documentation.](https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/) -- **Destination rules** serve as the single source of truth about which service versions are available to receive traffic from virtual services. You can use these resources to define policies that apply to traffic that is intended for a service after routing has occurred. For details, refer to the [Istio documentation.](https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule) - -To see these tabs, - -1. Go to the project view in Rancher and click **Resources > Istio.** -1. You will see tabs for **Traffic Graph,** which has the Kiali network visualization integrated into the UI, and **Traffic Metrics,** which shows metrics for the success rate and request volume of traffic to your services, among other metrics. Next to these tabs, you should see the tabs for **Virtual Services** and **Destination Rules.** \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/options/helm2/_index.md b/content/rancher/v2.x/en/installation/options/helm2/_index.md deleted file mode 100644 index cb60fb18d6d..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/_index.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -title: Kubernetes Installation Using Helm 2 -weight: 1 ---- - -> After Helm 3 was released, the Rancher installation instructions were updated to use Helm 3. -> -> If you are using Helm 2, we recommend [migrating to Helm 3](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) because it is simpler to use and more secure than Helm 2. -> -> This section provides a copy of the older high-availability Kubernetes Rancher installation instructions that used Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible. - -For production environments, we recommend installing Rancher in a high-availability configuration so that your user base can always access Rancher Server. When installed in a Kubernetes cluster, Rancher will integrate with the cluster's etcd database and take advantage of Kubernetes scheduling for high-availability. - -This procedure walks you through setting up a 3-node cluster with Rancher Kubernetes Engine (RKE) and installing the Rancher chart with the Helm package manager. - -> **Important:** The Rancher management server can only be run on an RKE-managed Kubernetes cluster. Use of Rancher on hosted Kubernetes or other providers is not supported. - -> **Important:** For the best performance, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads. - -## Recommended Architecture - -- DNS for Rancher should resolve to a Layer 4 load balancer (TCP) -- The Load Balancer should forward port TCP/80 and TCP/443 to all 3 nodes in the Kubernetes cluster. -- The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443. 
-- The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment. - -
-![High-availability Kubernetes Install]({{}}/img/rancher/ha/rancher2ha.svg) -Kubernetes Rancher install with Layer 4 load balancer (TCP), depicting SSL termination at ingress controllers - -## Required Tools - -The following CLI tools are required for this install. Please make sure these tools are installed and available in your `$PATH` - -- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) - Kubernetes command-line tool. -- [rke]({{}}/rke/latest/en/installation/) - Rancher Kubernetes Engine, cli for building Kubernetes clusters. -- [helm](https://docs.helm.sh/using_helm/#installing-helm) - Package management for Kubernetes. Refer to the [Helm version requirements]({{}}/rancher/v2.x/en/installation/options/helm-version) to choose a version of Helm to install Rancher. - -## Installation Outline - -- [Create Nodes and Load Balancer]({{}}/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/) -- [Install Kubernetes with RKE]({{}}/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/) -- [Initialize Helm (tiller)]({{}}/rancher/v2.x/en/installation/options/helm2/helm-init/) -- [Install Rancher]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/) - -## Additional Install Options - -- [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) - -## Previous Methods - -[RKE add-on install]({{}}/rancher/v2.x/en/installation/options/helm2/rke-add-on/) - -> **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> -> Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/options/helm2/#installation-outline). -> -> If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the Helm chart. diff --git a/content/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/_index.md b/content/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/_index.md deleted file mode 100644 index cd5da9e8763..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/_index.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: "1. Create Nodes and Load Balancer" -weight: 185 ---- - -Use your provider of choice to provision 3 nodes and a Load Balancer endpoint for your RKE install. - -> **Note:** These nodes must be in the same region/datacenter. You may place these servers in separate availability zones. - -### Node Requirements - -View the supported operating systems and hardware/software/networking requirements for nodes running Rancher at [Node Requirements]({{}}/rancher/v2.x/en/installation/requirements). - -View the OS requirements for RKE at [RKE Requirements]({{}}/rke/latest/en/os/) - -### Load Balancer - -RKE will configure an Ingress controller pod, on each of your nodes. The Ingress controller pods are bound to ports TCP/80 and TCP/443 on the host network and are the entry point for HTTPS traffic to the Rancher server. - -Configure a load balancer as a basic Layer 4 TCP forwarder. The exact configuration will vary depending on your environment. - ->**Important:** ->Do not use this load balancer (i.e, the `local` cluster Ingress) to load balance applications other than Rancher following installation. 
Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications. - -#### Examples - -* [Nginx]({{}}/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/nginx/) -* [Amazon NLB]({{}}/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/nlb/) - -### [Next: Install Kubernetes with RKE]({{}}/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/) diff --git a/content/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/nginx/_index.md b/content/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/nginx/_index.md deleted file mode 100644 index 253ca02b735..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/nginx/_index.md +++ /dev/null @@ -1,79 +0,0 @@ ---- -title: NGINX -weight: 270 ---- -NGINX will be configured as a Layer 4 load balancer (TCP) that forwards connections to one of your Rancher nodes. - ->**Note:** -> In this configuration, the load balancer is positioned in front of your nodes. The load balancer can be any host capable of running NGINX. -> -> One caveat: do not use one of your Rancher nodes as the load balancer. - -## Install NGINX - -Start by installing NGINX on the node you want to use as a load balancer. NGINX has packages available for all known operating systems. The versions tested are `1.14` and `1.15`. For help installing NGINX, refer to their [install documentation](https://www.nginx.com/resources/wiki/start/topics/tutorials/install/). - -The `stream` module is required, which is present when using the official NGINX packages. Please refer to your OS documentation on how to install and enable the NGINX `stream` module on your operating system. - -## Create NGINX Configuration - -After installing NGINX, you need to update the NGINX configuration file, `nginx.conf`, with the IP addresses for your nodes. - -1. Copy and paste the code sample below into your favorite text editor. Save it as `nginx.conf`. - -2. From `nginx.conf`, replace both occurrences (port 80 and port 443) of `<IP_NODE_1>`, `<IP_NODE_2>`, and `<IP_NODE_3>` with the IPs of your [nodes]({{}}/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/). - - >**Note:** See [NGINX Documentation: TCP and UDP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/) for all configuration options. - -
Example NGINX config
- ``` - worker_processes 4; - worker_rlimit_nofile 40000; - - events { - worker_connections 8192; - } - - stream { - upstream rancher_servers_http { - least_conn; - server <IP_NODE_1>:80 max_fails=3 fail_timeout=5s; - server <IP_NODE_2>:80 max_fails=3 fail_timeout=5s; - server <IP_NODE_3>:80 max_fails=3 fail_timeout=5s; - } - server { - listen 80; - proxy_pass rancher_servers_http; - } - - upstream rancher_servers_https { - least_conn; - server <IP_NODE_1>:443 max_fails=3 fail_timeout=5s; - server <IP_NODE_2>:443 max_fails=3 fail_timeout=5s; - server <IP_NODE_3>:443 max_fails=3 fail_timeout=5s; - } - server { - listen 443; - proxy_pass rancher_servers_https; - } - } - ``` - -3. Save `nginx.conf` to your load balancer at the following path: `/etc/nginx/nginx.conf`. - -4. Load the updates to your NGINX configuration by running the following command: - - ``` - # nginx -s reload - ``` - -## Option - Run NGINX as Docker container - -Instead of installing NGINX as a package on the operating system, you can run it as a Docker container instead. Save the edited **Example NGINX config** as `/etc/nginx.conf` and run the following command to launch the NGINX container: - -``` -docker run -d --restart=unless-stopped \ - -p 80:80 -p 443:443 \ - -v /etc/nginx.conf:/etc/nginx/nginx.conf \ - nginx:1.14 -``` diff --git a/content/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/nlb/_index.md b/content/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/nlb/_index.md deleted file mode 100644 index c22d03a5739..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/create-nodes-lb/nlb/_index.md +++ /dev/null @@ -1,175 +0,0 @@ ---- -title: Amazon NLB -weight: 277 ---- -## Objectives - -Configuring an Amazon NLB is a multistage process. We've broken it down into multiple tasks so that it's easy to follow. - -1. [Create Target Groups](#create-target-groups) - - Begin by creating two target groups for the **TCP** protocol, one for TCP port 443 and one for TCP port 80 (providing a redirect to TCP port 443). You'll add your Linux nodes to these groups. - -2. [Register Targets](#register-targets) - - Add your Linux nodes to the target groups. - -3. [Create Your NLB](#create-your-nlb) - - Use Amazon's Wizard to create a Network Load Balancer. As part of this process, you'll add the target groups you created in **1. Create Target Groups**. - -> **Note:** Rancher only supports using the Amazon NLB when terminating traffic in `tcp` mode for port 443 rather than `tls` mode. This is due to the fact that the NLB does not inject the correct headers into requests when terminated at the NLB. This means that if you want to use certificates managed by the Amazon Certificate Manager (ACM), you should use an ELB or ALB. - -## Create Target Groups - -Your first NLB configuration step is to create two target groups. Technically, only port 443 is needed to access Rancher, but it's convenient to add a listener for port 80, which will be redirected to port 443 automatically. The NGINX ingress controller on the nodes will make sure that port 80 gets redirected to port 443. - -Log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) to get started, and make sure to select the **Region** where your EC2 instances (Linux nodes) are created. - -The Target Groups configuration resides in the **Load Balancing** section of the **EC2** service. Select **Services** and choose **EC2**, find the section **Load Balancing** and open **Target Groups**.
- -{{< img "/img/rancher/ha/nlb/ec2-loadbalancing.png" "EC2 Load Balancing section">}} - -Click **Create target group** to create the first target group, for TCP port 443. - -### Target Group (TCP port 443) - -Configure the first target group according to the table below. Screenshots of the configuration are shown just below the table. - -Option | Setting ---------------------------------------|------------------------------------ -Target Group Name | `rancher-tcp-443` -Protocol | `TCP` -Port | `443` -Target type | `instance` -VPC | Choose your VPC -Protocol (Health Check) | `HTTP` -Path (Health Check) | `/healthz` -Port (Advanced health check) | `override`,`80` -Healthy threshold (Advanced health) | `3` -Unhealthy threshold (Advanced) | `3` -Timeout (Advanced) | `6 seconds` -Interval (Advanced) | `10 seconds` -Success codes | `200-399` - -
-**Screenshot Target group TCP port 443 settings**
-{{< img "/img/rancher/ha/nlb/create-targetgroup-443.png" "Target group 443">}} - -
-**Screenshot Target group TCP port 443 Advanced settings**
-{{< img "/img/rancher/ha/nlb/create-targetgroup-443-advanced.png" "Target group 443 Advanced">}} - -
- -Click **Create target group** to create the second target group, for TCP port 80. - -### Target Group (TCP port 80) - -Configure the second target group according to the table below. Screenshots of the configuration are shown just below the table. - -Option | Setting ---------------------------------------|------------------------------------ -Target Group Name | `rancher-tcp-80` -Protocol | `TCP` -Port | `80` -Target type | `instance` -VPC | Choose your VPC -Protocol (Health Check) | `HTTP` -Path (Health Check) | `/healthz` -Port (Advanced health check) | `traffic port` -Healthy threshold (Advanced health) | `3` -Unhealthy threshold (Advanced) | `3` -Timeout (Advanced) | `6 seconds` -Interval (Advanced) | `10 seconds` -Success codes | `200-399` - -
-**Screenshot Target group TCP port 80 settings**
-{{< img "/img/rancher/ha/nlb/create-targetgroup-80.png" "Target group 80">}} - -
-**Screenshot Target group TCP port 80 Advanced settings**
-{{< img "/img/rancher/ha/nlb/create-targetgroup-80-advanced.png" "Target group 80 Advanced">}} - -
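If you script your AWS environment instead of using the console, a roughly equivalent target group can be created with the AWS CLI. This is only a sketch; `<VPC_ID>` is a placeholder, and the console flow above remains the documented path:

```
aws elbv2 create-target-group \
  --name rancher-tcp-443 \
  --protocol TCP \
  --port 443 \
  --vpc-id <VPC_ID> \
  --target-type instance \
  --health-check-protocol HTTP \
  --health-check-path /healthz \
  --health-check-port 80 \
  --healthy-threshold-count 3 \
  --unhealthy-threshold-count 3 \
  --health-check-interval-seconds 10
```

The `rancher-tcp-80` group is analogous, with `--port 80` and `--health-check-port traffic-port`. In both cases the health check relies on the nodes answering `200` on `/healthz`.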
- -## Register Targets - -Next, add your Linux nodes to both target groups. - -Select the target group named **rancher-tcp-443**, click the tab **Targets** and choose **Edit**. - -{{< img "/img/rancher/ha/nlb/edit-targetgroup-443.png" "Edit target group 443">}} - -Select the instances (Linux nodes) you want to add, and click **Add to registered**. - -
-**Screenshot Add targets to target group TCP port 443**
- -{{< img "/img/rancher/ha/nlb/add-targets-targetgroup-443.png" "Add targets to target group 443">}} - -
-**Screenshot Added targets to target group TCP port 443**
- -{{< img "/img/rancher/ha/nlb/added-targets-targetgroup-443.png" "Added targets to target group 443">}} - -When the instances are added, click **Save** on the bottom right of the screen. - -Repeat those steps, replacing **rancher-tcp-443** with **rancher-tcp-80**. The same instances need to be added as targets to this target group. - -## Create Your NLB - -Use Amazon's Wizard to create an Network Load Balancer. As part of this process, you'll add the target groups you created in [Create Target Groups](#create-target-groups). - -1. From your web browser, navigate to the [Amazon EC2 Console](https://console.aws.amazon.com/ec2/). - -2. From the navigation pane, choose **LOAD BALANCING** > **Load Balancers**. - -3. Click **Create Load Balancer**. - -4. Choose **Network Load Balancer** and click **Create**. - -5. Complete the **Step 1: Configure Load Balancer** form. - - **Basic Configuration** - - - Name: `rancher` - - Scheme: `internal` or `internet-facing` - - The Scheme that you choose for your NLB is dependent on the configuration of your instances/VPC. If your instances do not have public IPs associated with them, or you will only be accessing Rancher internally, you should set your NLB Scheme to `internal` rather than `internet-facing`. - - **Listeners** - - Add the **Load Balancer Protocols** and **Load Balancer Ports** below. - - `TCP`: `443` - - - **Availability Zones** - - - Select Your **VPC** and **Availability Zones**. - -6. Complete the **Step 2: Configure Routing** form. - - - From the **Target Group** drop-down, choose **Existing target group**. - - - From the **Name** drop-down, choose `rancher-tcp-443`. - - - Open **Advanced health check settings**, and configure **Interval** to `10 seconds`. - -7. Complete **Step 3: Register Targets**. Since you registered your targets earlier, all you have to do is click **Next: Review**. - -8. Complete **Step 4: Review**. Look over the load balancer details and click **Create** when you're satisfied. - -9. After AWS creates the NLB, click **Close**. - -## Add listener to NLB for TCP port 80 - -1. Select your newly created NLB and select the **Listeners** tab. - -2. Click **Add listener**. - -3. Use `TCP`:`80` as **Protocol** : **Port** - -4. Click **Add action** and choose **Forward to...** - -5. From the **Forward to** drop-down, choose `rancher-tcp-80`. - -6. Click **Save** in the top right of the screen. diff --git a/content/rancher/v2.x/en/installation/options/helm2/helm-init/_index.md b/content/rancher/v2.x/en/installation/options/helm2/helm-init/_index.md deleted file mode 100644 index 3eefe165e48..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/helm-init/_index.md +++ /dev/null @@ -1,66 +0,0 @@ ---- -title: "Initialize Helm: Install the Tiller Service" -description: "With Helm, you can create configurable deployments instead of using static files. In order to use Helm, the Tiller service needs to be installed on your cluster." -weight: 195 ---- - -Helm is the package management tool of choice for Kubernetes. Helm "charts" provide templating syntax for Kubernetes YAML manifest documents. With Helm we can create configurable deployments instead of just using static files. For more information about creating your own catalog of deployments, check out the docs at [https://helm.sh/](https://helm.sh/). To be able to use Helm, the server-side component `tiller` needs to be installed on your cluster. 
- -For systems without direct internet access, see [Helm - Air Gap]({{}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#helm) for install details. - -Refer to the [Helm version requirements]({{}}/rancher/v2.x/en/installation/options/helm-version) to choose a version of Helm to install Rancher. - -> **Note:** The installation instructions assume you are using Helm 2. The instructions will be updated for Helm 3 soon. In the meantime, if you want to use Helm 3, refer to [these instructions.](https://github.com/ibrokethecloud/rancher-helm3) - -### Install Tiller on the Cluster - -> **Important:** Due to an issue with Helm v2.12.0 and cert-manager, please use Helm v2.12.1 or higher. - -Helm installs the `tiller` service on your cluster to manage charts. Since RKE enables RBAC by default we will need to use `kubectl` to create a `serviceaccount` and `clusterrolebinding` so `tiller` has permission to deploy to the cluster. - -* Create the `ServiceAccount` in the `kube-system` namespace. -* Create the `ClusterRoleBinding` to give the `tiller` account access to the cluster. -* Finally use `helm` to install the `tiller` service - -```plain -kubectl -n kube-system create serviceaccount tiller - -kubectl create clusterrolebinding tiller \ - --clusterrole=cluster-admin \ - --serviceaccount=kube-system:tiller - -helm init --service-account tiller - -# Users in China: You will need to specify a specific tiller-image in order to initialize tiller. -# The list of tiller image tags are available here: https://dev.aliyun.com/detail.html?spm=5176.1972343.2.18.ErFNgC&repoId=62085. -# When initializing tiller, you'll need to pass in --tiller-image - -helm init --service-account tiller \ ---tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller: -``` - -> **Note:** This`tiller`install has full cluster access, which should be acceptable if the cluster is dedicated to Rancher server. Check out the [helm docs](https://docs.helm.sh/using_helm/#role-based-access-control) for restricting `tiller` access to suit your security requirements. - -### Test your Tiller installation - -Run the following command to verify the installation of `tiller` on your cluster: - -``` -kubectl -n kube-system rollout status deploy/tiller-deploy -Waiting for deployment "tiller-deploy" rollout to finish: 0 of 1 updated replicas are available... -deployment "tiller-deploy" successfully rolled out -``` - -And run the following command to validate Helm can talk to the `tiller` service: - -``` -helm version -Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"} -Server: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"} -``` - -### Issues or errors? - -See the [Troubleshooting]({{}}/rancher/v2.x/en/installation/options/helm2/helm-init/troubleshooting/) page. 
- -### [Next: Install Rancher]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/) diff --git a/content/rancher/v2.x/en/installation/options/helm2/helm-init/troubleshooting/_index.md b/content/rancher/v2.x/en/installation/options/helm2/helm-init/troubleshooting/_index.md deleted file mode 100644 index 6dd085454eb..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/helm-init/troubleshooting/_index.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: Troubleshooting -weight: 276 ---- - -### Helm commands show forbidden - -When Helm is initiated in the cluster without specifying the correct `ServiceAccount`, the command `helm init` will succeed but you won't be able to execute most of the other `helm` commands. The following error will be shown: - -``` -Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system" -``` - -To resolve this, the server component (`tiller`) needs to be removed and added with the correct `ServiceAccount`. You can use `helm reset --force` to remove the `tiller` from the cluster. Please check if it is removed using `helm version --server`. - -``` -helm reset --force -Tiller (the Helm server-side component) has been uninstalled from your Kubernetes Cluster. -helm version --server -Error: could not find tiller -``` - -When you have confirmed that `tiller` has been removed, please follow the steps provided in [Initialize Helm (Install tiller)]({{}}/rancher/v2.x/en/installation/options/helm2/helm-init/) to install `tiller` with the correct `ServiceAccount`. diff --git a/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/_index.md b/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/_index.md deleted file mode 100644 index ddd8d3db46e..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/_index.md +++ /dev/null @@ -1,218 +0,0 @@ ---- -title: "4. Install Rancher" -weight: 200 ---- - -Rancher installation is managed using the Helm package manager for Kubernetes. Use `helm` to install the prerequisite and charts to install Rancher. - -For systems without direct internet access, see [Air Gap: Kubernetes install]({{}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/). - -Refer to the [Helm version requirements]({{}}/rancher/v2.x/en/installation/options/helm-version) to choose a version of Helm to install Rancher. - -> **Note:** The installation instructions assume you are using Helm 2. The instructions will be updated for Helm 3 soon. In the meantime, if you want to use Helm 3, refer to [these instructions.](https://github.com/ibrokethecloud/rancher-helm3) - -### Add the Helm Chart Repository - -Use `helm repo add` command to add the Helm chart repository that contains charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Version of Rancher]({{}}/rancher/v2.x/en/installation/options/server-tags/#helm-chart-repositories). - -{{< release-channel >}} - -``` -helm repo add rancher- https://releases.rancher.com/server-charts/ -``` - -### Choose your SSL Configuration - -Rancher Server is designed to be secure by default and requires SSL/TLS configuration. - -There are three recommended options for the source of the certificate. - -> **Note:** If you want terminate SSL/TLS externally, see [TLS termination on an External Load Balancer]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/#external-tls-termination). 
- -| Configuration | Chart option | Description | Requires cert-manager | -|-----|-----|-----|-----| -| [Rancher Generated Certificates](#rancher-generated-certificates) | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self-signed). This is the **default**. | [yes](#optional-install-cert-manager) | -| [Let’s Encrypt](#let-s-encrypt) | `ingress.tls.source=letsEncrypt` | Use [Let's Encrypt](https://letsencrypt.org/) to issue a certificate | [yes](#optional-install-cert-manager) | -| [Certificates from Files](#certificates-from-files) | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s) | no | - -### Optional: Install cert-manager - -**Note:** cert-manager is only required for certificates issued by Rancher's generated CA (`ingress.tls.source=rancher`) and Let's Encrypt issued certificates (`ingress.tls.source=letsEncrypt`). You should skip this step if you are using your own certificate files (option `ingress.tls.source=secret`) or if you use [TLS termination on an External Load Balancer]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/#external-tls-termination). - -> **Important:** -> Due to an issue with Helm v2.12.0 and cert-manager, please use Helm v2.12.1 or higher. - -> Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/). - -Rancher relies on [cert-manager](https://github.com/jetstack/cert-manager) to issue certificates from Rancher's own generated CA or to request Let's Encrypt certificates. - -These instructions are adapted from the [official cert-manager documentation](https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html#installing-with-helm). - - -1. Install the CustomResourceDefinition resources separately - ```plain - kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml - ``` - -1. Create the namespace for cert-manager - ```plain - kubectl create namespace cert-manager - ``` - -1. Label the cert-manager namespace to disable resource validation - ```plain - kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true - ``` - -1. Add the Jetstack Helm repository - ```plain - helm repo add jetstack https://charts.jetstack.io - ``` - -1. Update your local Helm chart repository cache - ```plain - helm repo update - ``` - -1. Install the cert-manager Helm chart - ```plain - helm install \ - --name cert-manager \ - --namespace cert-manager \ - --version v0.12.0 \ - jetstack/cert-manager - ``` - -Once you’ve installed cert-manager, you can verify it is deployed correctly by checking the cert-manager namespace for running pods: - -``` -kubectl get pods --namespace cert-manager - -NAME READY STATUS RESTARTS AGE -cert-manager-7cbdc48784-rpgnt 1/1 Running 0 3m -cert-manager-webhook-5b5dd6999-kst4x 1/1 Running 0 3m -cert-manager-cainjector-3ba5cd2bcd-de332x 1/1 Running 0 3m -``` - -If the ‘webhook’ pod (2nd line) is in a ContainerCreating state, it may still be waiting for the Secret to be mounted into the pod. Wait a couple of minutes for this to happen but if you experience problems, please check the [troubleshooting](https://docs.cert-manager.io/en/latest/getting-started/troubleshooting.html) guide. - -
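As an additional sanity check, you can confirm that the CustomResourceDefinitions from step 1 were registered (a quick check, assuming the v0.12 CRD naming):

```plain
kubectl get crds | grep cert-manager.io
```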
- -#### Rancher Generated Certificates - -> **Note:** You need to have [cert-manager](#optional-install-cert-manager) installed before proceeding. - -The default is for Rancher to generate a CA and uses `cert-manager` to issue the certificate for access to the Rancher server interface. Because `rancher` is the default option for `ingress.tls.source`, we are not specifying `ingress.tls.source` when running the `helm install` command. - -- Set the `hostname` to the DNS name you pointed at your load balancer. -- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. - -``` -helm install rancher-/rancher \ - --name rancher \ - --namespace cattle-system \ - --set hostname=rancher.my.org -``` - -Wait for Rancher to be rolled out: - -``` -kubectl -n cattle-system rollout status deploy/rancher -Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available... -deployment "rancher" successfully rolled out -``` - -#### Let's Encrypt - -> **Note:** You need to have [cert-manager](#optional-install-cert-manager) installed before proceeding. - -This option uses `cert-manager` to automatically request and renew [Let's Encrypt](https://letsencrypt.org/) certificates. This is a free service that provides you with a valid certificate as Let's Encrypt is a trusted CA. This configuration uses HTTP validation (`HTTP-01`) so the load balancer must have a public DNS record and be accessible from the internet. - -- Set `hostname` to the public DNS record, set `ingress.tls.source` to `letsEncrypt` and `letsEncrypt.email` to the email address used for communication about your certificate (for example, expiry notices) -- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. - -``` -helm install rancher-/rancher \ - --name rancher \ - --namespace cattle-system \ - --set hostname=rancher.my.org \ - --set ingress.tls.source=letsEncrypt \ - --set letsEncrypt.email=me@example.org -``` - -Wait for Rancher to be rolled out: - -``` -kubectl -n cattle-system rollout status deploy/rancher -Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available... -deployment "rancher" successfully rolled out -``` - -#### Certificates from Files - -Create Kubernetes secrets from your own certificates for Rancher to use. - - -> **Note:** The `Common Name` or a `Subject Alternative Names` entry in the server certificate must match the `hostname` option, or the ingress controller will fail to configure correctly. Although an entry in the `Subject Alternative Names` is technically required, having a matching `Common Name` maximizes compatibility with older browsers/applications. If you want to check if your certificates are correct, see [How do I check Common Name and Subject Alternative Names in my server certificate?]({{}}/rancher/v2.x/en/faq/technical/#how-do-i-check-common-name-and-subject-alternative-names-in-my-server-certificate) - -- Set `hostname` and set `ingress.tls.source` to `secret`. -- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. 
- -``` -helm install rancher-/rancher \ - --name rancher \ - --namespace cattle-system \ - --set hostname=rancher.my.org \ - --set ingress.tls.source=secret -``` - -If you are using a Private CA signed certificate , add `--set privateCA=true` to the command: - -``` -helm install rancher-/rancher \ - --name rancher \ - --namespace cattle-system \ - --set hostname=rancher.my.org \ - --set ingress.tls.source=secret - --set privateCA=true -``` - -Now that Rancher is deployed, see [Adding TLS Secrets]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them. - -After adding the secrets, check if Rancher was rolled out successfully: - -``` -kubectl -n cattle-system rollout status deploy/rancher -Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available... -deployment "rancher" successfully rolled out -``` - -If you see the following error: `error: deployment "rancher" exceeded its progress deadline`, you can check the status of the deployment by running the following command: - -``` -kubectl -n cattle-system get deploy rancher -NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -rancher 3 3 3 3 3m -``` - -It should show the same count for `DESIRED` and `AVAILABLE`. - -### Advanced Configurations - -The Rancher chart configuration has many options for customizing the install to suit your specific environment. Here are some common advanced scenarios. - -* [HTTP Proxy]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/#http-proxy) -* [Private Docker Image Registry]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/#private-registry-and-air-gap-installs) -* [TLS Termination on an External Load Balancer]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/#external-tls-termination) - -See the [Chart Options]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/) for the full list of options. - -### Save your options - -Make sure you save the `--set` options you used. You will need to use the same options when you upgrade Rancher to new versions with Helm. - -### Finishing Up - -That's it you should have a functional Rancher server. Point a browser at the hostname you picked and you should be greeted by the colorful login page. - -Doesn't work? Take a look at the [Troubleshooting]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/troubleshooting/) Page diff --git a/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/_index.md b/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/_index.md deleted file mode 100644 index 261365488ac..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/_index.md +++ /dev/null @@ -1,245 +0,0 @@ ---- -title: Chart Options -weight: 276 ---- - -### Common Options - -| Option | Default Value | Description | -| --- | --- | --- | -| `hostname` | " " | `string` - the Fully Qualified Domain Name for your Rancher Server | -| `ingress.tls.source` | "rancher" | `string` - Where to get the cert for the ingress. - "rancher, letsEncrypt, secret" | -| `letsEncrypt.email` | " " | `string` - Your email address | -| `letsEncrypt.environment` | "production" | `string` - Valid options: "staging, production" | -| `privateCA` | false | `bool` - Set to true if your cert is signed by a private CA | - -
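To illustrate how these common options fit together, here is a sketch of a Let's Encrypt install that sets several of them at once (the chart repository, hostname, and email are placeholders):

```
helm install rancher-<CHART_REPO>/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set ingress.tls.source=letsEncrypt \
  --set letsEncrypt.email=me@example.org \
  --set letsEncrypt.environment=production
```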
- -### Advanced Options - -| Option | Default Value | Description | -| --- | --- | --- | -| `additionalTrustedCAs` | false | `bool` - See [Additional Trusted CAs](#additional-trusted-cas) | -| `addLocal` | "auto" | `string` - Have Rancher detect and import the "local" Rancher server cluster [Import "local Cluster](#import-local-cluster) | -| `antiAffinity` | "preferred" | `string` - AntiAffinity rule for Rancher pods - "preferred, required" | -| `auditLog.destination` | "sidecar" | `string` - Stream to sidecar container console or hostPath volume - "sidecar, hostPath" | -| `auditLog.hostPath` | "/var/log/rancher/audit" | `string` - log file destination on host (only applies when `auditLog.destination` is set to `hostPath`) | -| `auditLog.level` | 0 | `int` - set the [API Audit Log]({{}}/rancher/v2.x/en/installation/api-auditing) level. 0 is off. [0-3] | -| `auditLog.maxAge` | 1 | `int` - maximum number of days to retain old audit log files (only applies when `auditLog.destination` is set to `hostPath`) | -| `auditLog.maxBackups` | 1 | `int` - maximum number of audit log files to retain (only applies when `auditLog.destination` is set to `hostPath`) | -| `auditLog.maxSize` | 100 | `int` - maximum size in megabytes of the audit log file before it gets rotated (only applies when `auditLog.destination` is set to `hostPath`) | -| `busyboxImage` | "busybox" | `string` - Image location for busybox image used to collect audit logs _Note: Available as of v2.2.0_ | -| `debug` | false | `bool` - set debug flag on rancher server | -| `extraEnv` | [] | `list` - set additional environment variables for Rancher _Note: Available as of v2.2.0_ | -| `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials | -| `ingress.extraAnnotations` | {} | `map` - additional annotations to customize the ingress | -| `ingress.configurationSnippet` | "" | `string` - Add additional Nginx configuration. Can be used for proxy configuration. _Note: Available as of v2.0.15, v2.1.10 and v2.2.4_ | -| `proxy` | "" | `string` - HTTP[S] proxy server for Rancher | -| `noProxy` | "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16" | `string` - comma separated list of hostnames or ip address not to use the proxy | -| `resources` | {} | `map` - rancher pod resource requests & limits | -| `rancherImage` | "rancher/rancher" | `string` - rancher image source | -| `rancherImageTag` | same as chart version | `string` - rancher/rancher image tag | -| `tls` | "ingress" | `string` - See [External TLS Termination](#external-tls-termination) for details. - "ingress, external" | -| `systemDefaultRegistry` | "" | `string` - private registry to be used for all system Docker images, e.g., http://registry.example.com/ _Available as of v2.3.0_ | -| `useBundledSystemChart` | `false` | `bool` - select to use the system-charts packaged with Rancher server. This option is used for air gapped installations. _Available as of v2.3.0_ - -
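For example, a handful of these advanced options might be combined like this (values are illustrative only; see the sections below for what each one does):

```plain
--set antiAffinity=required \
--set auditLog.level=1 \
--set auditLog.destination=hostPath \
--set auditLog.hostPath=/var/log/rancher/audit \
--set auditLog.maxAge=7
```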
- -### API Audit Log - -Enabling the [API Audit Log]({{}}/rancher/v2.x/en/installation/api-auditing/). - -You can collect this log as you would any container log. Enable the [Logging service under Rancher Tools]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/) for the `System` Project on the Rancher server cluster. - -```plain ---set auditLog.level=1 -``` - -By default enabling Audit Logging will create a sidecar container in the Rancher pod. This container (`rancher-audit-log`) will stream the log to `stdout`. You can collect this log as you would any container log. When using the sidecar as the audit log destination, the `hostPath`, `maxAge`, `maxBackups`, and `maxSize` options do not apply. It's advised to use your OS or Docker daemon's log rotation features to control disk space use. Enable the [Logging service under Rancher Tools]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/) for the Rancher server cluster or System Project. - -Set the `auditLog.destination` to `hostPath` to forward logs to volume shared with the host system instead of streaming to a sidecar container. When setting the destination to `hostPath` you may want to adjust the other auditLog parameters for log rotation. - -### Setting Extra Environment Variables - -_Available as of v2.2.0_ - -You can set extra environment variables for Rancher server using `extraEnv`. This list uses the same `name` and `value` keys as the container manifest definitions. Remember to quote the values. - -```plain ---set 'extraEnv[0].name=CATTLE_TLS_MIN_VERSION' ---set 'extraEnv[0].value=1.0' -``` - -### TLS settings - -_Available as of v2.2.0_ - -To set a different TLS configuration, you can use the `CATTLE_TLS_MIN_VERSION` and `CATTLE_TLS_CIPHERS` environment variables. For example, to configure TLS 1.0 as minimum accepted TLS version: - -```plain ---set 'extraEnv[0].name=CATTLE_TLS_MIN_VERSION' ---set 'extraEnv[0].value=1.0' -``` - -See [TLS settings]({{}}/rancher/v2.x/en/admin-settings/tls-settings) for more information and options. - -### Import `local` Cluster - -By default Rancher server will detect and import the `local` cluster it's running on. User with access to the `local` cluster will essentially have "root" access to all the clusters managed by Rancher server. - -If this is a concern in your environment you can set this option to "false" on your initial install. - -> Note: This option is only effective on the initial Rancher install. See [Issue 16522](https://github.com/rancher/rancher/issues/16522) for more information. - -```plain ---set addLocal="false" -``` - -### Customizing your Ingress - -To customize or use a different ingress with Rancher server you can set your own Ingress annotations. - -Example on setting a custom certificate issuer: - -```plain ---set ingress.extraAnnotations.'certmanager\.k8s\.io/cluster-issuer'=ca-key-pair -``` - -_Available as of v2.0.15, v2.1.10 and v2.2.4_ - -Example on setting a static proxy header with `ingress.configurationSnippet`. This value is parsed like a template so variables can be used. - -```plain ---set ingress.configurationSnippet='more_set_input_headers X-Forwarded-Host {{ .Values.hostname }};' -``` - -### HTTP Proxy - -Rancher requires internet access for some functionality (helm charts). Use `proxy` to set your proxy server. - -Add your IP exceptions to the `noProxy` list. Make sure you add the Service cluster IP range (default: 10.43.0.1/16) and any worker cluster `controlplane` nodes. Rancher supports CIDR notation ranges in this list. 
-
-```plain
---set proxy="http://<username>:<password>@<host>:<port>/"
---set noProxy="127.0.0.0/8\,10.0.0.0/8\,172.16.0.0/12\,192.168.0.0/16"
-```
-
-### Additional Trusted CAs
-
-If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add additional trusted CAs to Rancher.
-
-```plain
---set additionalTrustedCAs=true
-```
-
-Once the Rancher deployment is created, copy your CA certs in PEM format into a file named `ca-additional.pem` and use `kubectl` to create the `tls-ca-additional` secret in the `cattle-system` namespace.
-
-```plain
-kubectl -n cattle-system create secret generic tls-ca-additional --from-file=ca-additional.pem=./ca-additional.pem
-```
-
-### Private Registry and Air Gap Installs
-
-For details on installing Rancher with a private registry, see:
-
-- [Air Gap: Docker Install]({{}}/rancher/v2.x/en/installation/air-gap-single-node/)
-- [Air Gap: Kubernetes Install]({{}}/rancher/v2.x/en/installation/air-gap-high-availability/)
-
-
-### External TLS Termination
-
-We recommend configuring your load balancer as a Layer 4 balancer, forwarding plain 80/tcp and 443/tcp to the Rancher Management cluster nodes. The Ingress Controller on the cluster will redirect HTTP traffic on port 80 to HTTPS on port 443.
-
-You may terminate SSL/TLS on an L7 load balancer external to the Rancher cluster, instead of at the ingress. Use the `--set tls=external` option and point your load balancer at HTTP port 80 on all of the Rancher cluster nodes. This will expose the Rancher interface on HTTP port 80. Be aware that traffic from clients that are allowed to connect directly to the Rancher cluster will not be encrypted. If you choose to do this, we recommend that you restrict direct access at the network level to just your load balancer.
-
-> **Note:** If you are using a Private CA signed certificate, add `--set privateCA=true` and see [Adding TLS Secrets - Using a Private CA Signed Certificate]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/tls-secrets/#using-a-private-ca-signed-certificate) to add the CA cert for Rancher.
-
-Your load balancer must support long-lived WebSocket connections and will need to insert proxy headers so Rancher can route links correctly.
-
-#### Configuring Ingress for External TLS when Using NGINX v0.25
-
-In NGINX v0.25, the behavior of NGINX has [changed](https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md#0220) regarding forwarding headers and external TLS termination. Therefore, if you are using an external TLS termination configuration with NGINX v0.25, you must edit the `cluster.yml` to enable the `use-forwarded-headers` option for ingress:
-
-```yaml
-ingress:
-  provider: nginx
-  options:
-    use-forwarded-headers: "true"
-```
-
-#### Required Headers
-
-* `Host`
-* `X-Forwarded-Proto`
-* `X-Forwarded-Port`
-* `X-Forwarded-For`
-
-#### Recommended Timeouts
-
-* Read Timeout: `1800 seconds`
-* Write Timeout: `1800 seconds`
-* Connect Timeout: `30 seconds`
-
-#### Health Checks
-
-Rancher will respond with `200` to health checks on the `/healthz` endpoint.
-
-
-#### Example NGINX config
-
-This NGINX configuration is tested on NGINX 1.14.
-
- >**Note:** This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - HTTP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/).
-
-* Replace `IP_NODE_1`, `IP_NODE_2` and `IP_NODE_3` with the IP addresses of the nodes in your cluster.
-* Replace both occurrences of `FQDN` with the DNS name for Rancher.
-* Replace `/certs/fullchain.pem` and `/certs/privkey.pem` with the locations of the server certificate and the server certificate key respectively.
-
-```
-worker_processes 4;
-worker_rlimit_nofile 40000;
-
-events {
-    worker_connections 8192;
-}
-
-http {
-    upstream rancher {
-        server IP_NODE_1:80;
-        server IP_NODE_2:80;
-        server IP_NODE_3:80;
-    }
-
-    map $http_upgrade $connection_upgrade {
-        default Upgrade;
-        ''      close;
-    }
-
-    server {
-        listen 443 ssl http2;
-        server_name FQDN;
-        ssl_certificate /certs/fullchain.pem;
-        ssl_certificate_key /certs/privkey.pem;
-
-        location / {
-            proxy_set_header Host $host;
-            proxy_set_header X-Forwarded-Proto $scheme;
-            proxy_set_header X-Forwarded-Port $server_port;
-            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-            proxy_pass http://rancher;
-            proxy_http_version 1.1;
-            proxy_set_header Upgrade $http_upgrade;
-            proxy_set_header Connection $connection_upgrade;
-            # This allows the execute shell window to remain open for up to 15 minutes. Without this parameter, the default is 1 minute and the window will automatically close.
-            proxy_read_timeout 900s;
-            proxy_buffering off;
-        }
-    }
-
-    server {
-        listen 80;
-        server_name FQDN;
-        return 301 https://$server_name$request_uri;
-    }
-}
-```
diff --git a/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/tls-secrets/_index.md b/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/tls-secrets/_index.md
deleted file mode 100644
index 65a9b8435fa..00000000000
--- a/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/tls-secrets/_index.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Adding Kubernetes TLS Secrets
-description: Read about how to populate the Kubernetes TLS secret for a Rancher installation
-weight: 276
----
-
-Kubernetes will create all the objects and services for Rancher, but it will not become available until we populate the `tls-rancher-ingress` secret in the `cattle-system` namespace with the certificate and key.
-
-Combine the server certificate followed by any intermediate certificate(s) needed into a file named `tls.crt`. Copy your certificate key into a file named `tls.key`.
-
-For example, [acme.sh](https://acme.sh) provides the server certificate and CA chain in a `fullchain.cer` file.
-Rename this `fullchain.cer` to `tls.crt`, and rename the certificate key file to `tls.key`.
-
-Use `kubectl` with the `tls` secret type to create the secrets.
-
-```
-kubectl -n cattle-system create secret tls tls-rancher-ingress \
-  --cert=tls.crt \
-  --key=tls.key
-```
-
-> **Note:** If you want to replace the certificate, you can delete the `tls-rancher-ingress` secret using `kubectl -n cattle-system delete secret tls-rancher-ingress` and add a new one using the command shown above. If you are using a private CA signed certificate, replacing the certificate is only possible if the new certificate is signed by the same CA as the certificate currently in use.
-
-### Using a Private CA Signed Certificate
-
-If you are using a private CA, Rancher requires a copy of the CA certificate, which is used by the Rancher Agent to validate the connection to the server.
-
-Copy the CA certificate into a file named `cacerts.pem` and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.
-
-```
-kubectl -n cattle-system create secret generic tls-ca \
-  --from-file=cacerts.pem=./cacerts.pem
-```
diff --git a/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/troubleshooting/_index.md b/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/troubleshooting/_index.md
deleted file mode 100644
index d5ef3d045f6..00000000000
--- a/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/troubleshooting/_index.md
+++ /dev/null
@@ -1,133 +0,0 @@
----
-title: Troubleshooting
-weight: 276
----
-
-### Where is everything
-
-Most of the troubleshooting will be done on objects in these three namespaces:
-
-* `cattle-system` - `rancher` deployment and pods.
-* `ingress-nginx` - Ingress controller pods and services.
-* `kube-system` - `tiller` and `cert-manager` pods.
-
-### "default backend - 404"
-
-A number of things can cause the ingress-controller not to forward traffic to your Rancher instance. Most of the time it's due to a bad SSL configuration.
-
-Things to check:
-
-* [Is Rancher Running](#is-rancher-running)
-* [Cert CN is "Kubernetes Ingress Controller Fake Certificate"](#cert-cn-is-kubernetes-ingress-controller-fake-certificate)
-
-### Is Rancher Running
-
-Use `kubectl` to check the `cattle-system` namespace and see if the Rancher pods are in a `Running` state.
-
-```
-kubectl -n cattle-system get pods
-
-NAME READY STATUS RESTARTS AGE
-pod/rancher-784d94f59b-vgqzh 1/1 Running 0 10m
-```
-
-If the state is not `Running`, run a `describe` on the pod and check the Events.
-
-```
-kubectl -n cattle-system describe pod
-
-...
-Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Scheduled 11m default-scheduler Successfully assigned rancher-784d94f59b-vgqzh to localhost
- Normal SuccessfulMountVolume 11m kubelet, localhost MountVolume.SetUp succeeded for volume "rancher-token-dj4mt"
- Normal Pulling 11m kubelet, localhost pulling image "rancher/rancher:v2.0.4"
- Normal Pulled 11m kubelet, localhost Successfully pulled image "rancher/rancher:v2.0.4"
- Normal Created 11m kubelet, localhost Created container
- Normal Started 11m kubelet, localhost Started container
-```
-
-### Checking the Rancher logs
-
-Use `kubectl` to list the pods.
-
-```
-kubectl -n cattle-system get pods
-
-NAME READY STATUS RESTARTS AGE
-pod/rancher-784d94f59b-vgqzh 1/1 Running 0 10m
-```
-
-Use `kubectl` and the pod name to list the logs from the pod.
-
-```
-kubectl -n cattle-system logs -f rancher-784d94f59b-vgqzh
-```
-
-### Cert CN is "Kubernetes Ingress Controller Fake Certificate"
-
-Use your browser to check the certificate details. If it says the Common Name is "Kubernetes Ingress Controller Fake Certificate", something may have gone wrong with reading or issuing your SSL cert.
-
-> **Note:** If you are using LetsEncrypt to issue certs, it can sometimes take a few minutes to issue the cert.
-
-#### cert-manager issued certs (Rancher Generated or LetsEncrypt)
-
-`cert-manager` has three parts:
-
-* `cert-manager` pod in the `kube-system` namespace.
-* `Issuer` object in the `cattle-system` namespace.
-* `Certificate` object in the `cattle-system` namespace.
-
-Work backwards, doing a `kubectl describe` on each object and checking its events, to track down what might be missing.
-
-For example, here there is a problem with the Issuer:
-
-```
-kubectl -n cattle-system describe certificate
-...
-
-Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Warning IssuerNotReady 18s (x23 over 19m) cert-manager Issuer rancher not ready
-```
-
-```
-kubectl -n cattle-system describe issuer
-...
-Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Warning ErrInitIssuer 19m (x12 over 19m) cert-manager Error initializing issuer: secret "tls-rancher" not found
- Warning ErrGetKeyPair 9m (x16 over 19m) cert-manager Error getting keypair for CA issuer: secret "tls-rancher" not found
-```
-
-#### Bring Your Own SSL Certs
-
-Your certs get applied directly to the Ingress object in the `cattle-system` namespace.
-
-Check the status of the Ingress object and see if it's ready.
-
-```
-kubectl -n cattle-system describe ingress
-```
-
-If it's ready and SSL is still not working, you may have a malformed cert or secret.
-
-Check the nginx-ingress-controller logs. Because the nginx-ingress-controller has multiple containers in its pod, you will need to specify the name of the container.
-
-```
-kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller
-...
-W0705 23:04:58.240571 7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found
-```
-
-### no matches for kind "Issuer"
-
-The [SSL configuration]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/#choose-your-ssl-configuration) option you have chosen requires [cert-manager]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/#optional-install-cert-manager) to be installed before installing Rancher, or else the following error is shown:
-
-```
-Error: validation failed: unable to recognize "": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
-```
-
-Install [cert-manager]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/#optional-install-cert-manager) and try installing Rancher again.
diff --git a/content/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/_index.md b/content/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/_index.md
deleted file mode 100644
index 10efe3341a3..00000000000
--- a/content/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/_index.md
+++ /dev/null
@@ -1,132 +0,0 @@
----
-title: "2. Install Kubernetes with RKE"
-weight: 190
----
-
-Use RKE to install Kubernetes with a high-availability etcd configuration.
-
->**Note:** For systems without direct internet access, see [Air Gap: Kubernetes install]({{}}/rancher/v2.x/en/installation/air-gap-high-availability/) for install details.
-
-### Create the `rancher-cluster.yml` File
-
-Using the sample below, create the `rancher-cluster.yml` file. Replace the IP addresses in the `nodes` list with the IP addresses or DNS names of the three nodes you created.
-
-> **Note:** If your node has public and internal addresses, it is recommended to set the `internal_address:` so Kubernetes will use it for intra-cluster communication. Some services like AWS EC2 require setting the `internal_address:` if you want to use self-referencing security groups or firewalls.
-
-
-```yaml
-nodes:
-  - address: 165.227.114.63
-    internal_address: 172.16.22.12
-    user: ubuntu
-    role: [controlplane,worker,etcd]
-  - address: 165.227.116.167
-    internal_address: 172.16.32.37
-    user: ubuntu
-    role: [controlplane,worker,etcd]
-  - address: 165.227.127.226
-    internal_address: 172.16.42.73
-    user: ubuntu
-    role: [controlplane,worker,etcd]
-
-services:
-  etcd:
-    snapshot: true
-    creation: 6h
-    retention: 24h
-```
-
-#### Common RKE Nodes Options
-
-| Option | Required | Description |
-| --- | --- | --- |
-| `address` | yes | The public DNS or IP address |
-| `user` | yes | A user that can run docker commands |
-| `role` | yes | List of Kubernetes roles assigned to the node |
-| `internal_address` | no | The private DNS or IP address for internal cluster traffic |
-| `ssh_key_path` | no | Path to SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`) |
-
-#### Advanced Configurations
-
-RKE has many configuration options for customizing the install to suit your specific environment.
-
-Please see the [RKE Documentation]({{}}/rke/latest/en/config-options/) for the full list of options and capabilities.
-
-For tuning your etcd cluster for larger Rancher installations, see the [etcd settings guide]({{}}/rancher/v2.x/en/installation/options/etcd/).
-
-### Run RKE
-
-```
-rke up --config ./rancher-cluster.yml
-```
-
-When finished, it should end with the line: `Finished building Kubernetes cluster successfully`.
-
-### Testing Your Cluster
-
-RKE should have created a file named `kube_config_rancher-cluster.yml`. This file contains the credentials for `kubectl` and `helm`.
-
-> **Note:** If you have used a different file name from `rancher-cluster.yml`, then the kube config file will be named `kube_config_<FILE_NAME>.yml`.
-
-You can copy this file to `$HOME/.kube/config` or, if you are working with multiple Kubernetes clusters, set the `KUBECONFIG` environment variable to the path of `kube_config_rancher-cluster.yml`.
-
-```
-export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
-```
-
-Test your connectivity with `kubectl` and see if all your nodes are in the `Ready` state.
-
-```
-kubectl get nodes
-
-NAME STATUS ROLES AGE VERSION
-165.227.114.63 Ready controlplane,etcd,worker 11m v1.13.5
-165.227.116.167 Ready controlplane,etcd,worker 11m v1.13.5
-165.227.127.226 Ready controlplane,etcd,worker 11m v1.13.5
-```
-
-### Check the Health of Your Cluster Pods
-
-Check that all the required pods and containers are healthy and ready before you continue:
-
-* Pods are in a `Running` or `Completed` state.
-* The `READY` column shows all the containers are running (i.e. `3/3`) for pods with `STATUS` `Running`.
-* Pods with `STATUS` `Completed` are run-once Jobs. For these pods, `READY` should be `0/1`.
-
-```
-kubectl get pods --all-namespaces
-
-NAMESPACE NAME READY STATUS RESTARTS AGE
-ingress-nginx nginx-ingress-controller-tnsn4 1/1 Running 0 30s
-ingress-nginx nginx-ingress-controller-tw2ht 1/1 Running 0 30s
-ingress-nginx nginx-ingress-controller-v874b 1/1 Running 0 30s
-kube-system canal-jp4hz 3/3 Running 0 30s
-kube-system canal-z2hg8 3/3 Running 0 30s
-kube-system canal-z6kpw 3/3 Running 0 30s
-kube-system kube-dns-7588d5b5f5-sf4vh 3/3 Running 0 30s
-kube-system kube-dns-autoscaler-5db9bbb766-jz2k6 1/1 Running 0 30s
-kube-system metrics-server-97bc649d5-4rl2q 1/1 Running 0 30s
-kube-system rke-ingress-controller-deploy-job-bhzgm 0/1 Completed 0 30s
-kube-system rke-kubedns-addon-deploy-job-gl7t4 0/1 Completed 0 30s
-kube-system rke-metrics-addon-deploy-job-7ljkc 0/1 Completed 0 30s
-kube-system rke-network-plugin-deploy-job-6pbgj 0/1 Completed 0 30s
-```
-
-### Save Your Files
-
-> **Important**
-> The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.
-
-Save a copy of the following files in a secure location:
-
-- `rancher-cluster.yml`: The RKE cluster configuration file.
-- `kube_config_rancher-cluster.yml`: The [Kubeconfig file]({{}}/rke/latest/en/kubeconfig/) for the cluster; this file contains credentials for full access to the cluster.
-- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file]({{}}/rke/latest/en/installation/#kubernetes-cluster-state); this file contains credentials for full access to the cluster.

_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._
-
-> **Note:** The "rancher-cluster" parts of the two latter file names depend on how you name the RKE cluster configuration file.
-
-### Issues or errors?
-
-See the [Troubleshooting]({{}}/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/troubleshooting/) page.
-
-### [Next: Initialize Helm (Install tiller)]({{}}/rancher/v2.x/en/installation/options/helm2/helm-init/)
diff --git a/content/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/troubleshooting/_index.md b/content/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/troubleshooting/_index.md
deleted file mode 100644
index 275fbb7c94b..00000000000
--- a/content/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/troubleshooting/_index.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-title: Troubleshooting
-weight: 276
----
-
-### canal Pods show READY 2/3
-
-The most common cause of this issue is that port 8472/UDP is not open between the nodes. Check your local firewall, network routing or security groups.
-
-Once the network issue is resolved, the `canal` pods should time out and restart to establish their connections.
-
-### nginx-ingress-controller Pods show RESTARTS
-
-The most common cause of this issue is that the `canal` pods have failed to establish the overlay network. See [canal Pods show READY `2/3`](#canal-pods-show-ready-2-3) for troubleshooting.
-
-### Failed to set up SSH tunneling for host [xxx.xxx.xxx.xxx]: Can't retrieve Docker Info
-
-#### Failed to dial to /var/run/docker.sock: ssh: rejected: administratively prohibited (open failed)
-
-* The user specified to connect with does not have permission to access the Docker socket. This can be checked by logging into the host and running the command `docker ps`:
-
-```
-$ ssh user@server
-user@server$ docker ps
-CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
-```
-
-See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) for how to set this up properly.
-
-* When using RedHat/CentOS as the operating system, you cannot use the user `root` to connect to the nodes because of [Bugzilla #1527565](https://bugzilla.redhat.com/show_bug.cgi?id=1527565). You will need to add a separate user and configure it to access the Docker socket. See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) for how to set this up properly.
-
-* The SSH server version is not version 6.7 or higher. This is needed for socket forwarding to work, which is used to connect to the Docker socket over SSH. This can be checked using `sshd -V` on the host you are connecting to, or using netcat:
-```
-$ nc xxx.xxx.xxx.xxx 22
-SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.10
-```
-
-#### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: Error configuring SSH: ssh: no key found
-
-* The key file specified as `ssh_key_path` cannot be accessed. Make sure that you specified the private key file (not the public key, `.pub`), and that the user that is running the `rke` command can access the private key file.
-
-#### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
-
-* The key file specified as `ssh_key_path` is not correct for accessing the node.
Double-check that you specified the correct `ssh_key_path` for the node and that you specified the correct user to connect with.
-
-#### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: Error configuring SSH: ssh: cannot decode encrypted private keys
-
-* If you want to use encrypted private keys, you should use `ssh-agent` to load your keys with your passphrase. If the `SSH_AUTH_SOCK` environment variable is found in the environment where the `rke` command is run, it will be used automatically to connect to the node.
-
-#### Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
-
-* The node is not reachable on the configured `address` and `port`.
diff --git a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/_index.md b/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/_index.md
deleted file mode 100644
index d540cb07167..00000000000
--- a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/_index.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: RKE Add-On Install
-weight: 276
----
-
-> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
->
->Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/options/helm2/#installation-outline).
->
->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
-
-
-* [Kubernetes installation with External Load Balancer (TCP/Layer 4)]({{}}/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-4-lb)
-* [Kubernetes installation with External Load Balancer (HTTPS/Layer 7)]({{}}/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-7-lb)
-* [HTTP Proxy Configuration for a Kubernetes installation]({{}}/rancher/v2.x/en/installation/options/helm2/rke-add-on/proxy/)
-* [Troubleshooting RKE Add-on Installs]({{}}/rancher/v2.x/en/installation/options/helm2/rke-add-on/troubleshooting/)
diff --git a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/api-auditing/_index.md b/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/api-auditing/_index.md
deleted file mode 100644
index a99f7966714..00000000000
--- a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/api-auditing/_index.md
+++ /dev/null
@@ -1,55 +0,0 @@
----
-title: Enable API Auditing
-weight: 300
-aliases:
-  - /rke/latest/en/config-options/add-ons/api-auditing/
----
-
->**Important: RKE add-on install is only supported up to Rancher v2.0.8**
->
->Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/options/helm2/#installation-outline).
->
->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
-
-If you're using RKE to install Rancher, you can use directives to enable API auditing for your Rancher install, so that you can know what happened, when it happened, who initiated it, and which cluster it affected.
API auditing records all requests and responses to and from the Rancher API, which includes use of the Rancher UI and any programmatic use of the Rancher API.
-
-## In-line Arguments
-
-Enable API auditing using RKE by adding arguments to your Rancher container.
-
-To enable API auditing:
-
-- Add API Auditing arguments (`args`) to your Rancher container.
-- Declare a `mountPath` in the `volumeMounts` directive of the container.
-- Declare a `path` in the `volumes` directive.
-
-For more information about each argument, its syntax, and how to view API Audit logs, see [Rancher v2.0 Documentation: API Auditing]({{}}/rancher/v2.x/en/installation/api-auditing).
-
-```yaml
-...
-containers:
-  - image: rancher/rancher:latest
-    imagePullPolicy: Always
-    name: cattle-server
-    args: ["--audit-log-path", "/var/log/auditlog/rancher-api-audit.log", "--audit-log-maxbackup", "5", "--audit-log-maxsize", "50", "--audit-level", "2"]
-    ports:
-      - containerPort: 80
-        protocol: TCP
-      - containerPort: 443
-        protocol: TCP
-    volumeMounts:
-      - mountPath: /etc/rancher/ssl
-        name: cattle-keys-volume
-        readOnly: true
-      - mountPath: /var/log/auditlog
-        name: audit-log-dir
-volumes:
-  - name: cattle-keys-volume
-    secret:
-      defaultMode: 420
-      secretName: cattle-keys-server
-  - name: audit-log-dir
-    hostPath:
-      path: /var/log/rancher/auditlog
-      type: Directory
-```
diff --git a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-4-lb/_index.md b/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-4-lb/_index.md
deleted file mode 100644
index 6b88e593545..00000000000
--- a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-4-lb/_index.md
+++ /dev/null
@@ -1,400 +0,0 @@
----
-title: Kubernetes Install with External Load Balancer (TCP/Layer 4)
-weight: 275
----
-
-> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
->
->Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/options/helm2/#installation-outline).
->
->If you are currently using the RKE add-on install method, see [Migrating from a High-availability Kubernetes install with an RKE add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the Helm chart.
-
-This procedure walks you through setting up a 3-node cluster using the Rancher Kubernetes Engine (RKE). The cluster's sole purpose is running pods for Rancher. The setup is based on:
-
-- Layer 4 load balancer (TCP)
-- [NGINX ingress controller with SSL termination (HTTPS)](https://kubernetes.github.io/ingress-nginx/)
-
-In a Kubernetes setup that uses a layer 4 load balancer, the load balancer accepts Rancher client connections over the TCP/UDP protocols (i.e., the transport level). The load balancer then forwards these connections to individual cluster nodes without reading the request itself. Because the load balancer cannot read the packets it's forwarding, the routing decisions it can make are limited.
-
-Kubernetes Rancher install with layer 4 load balancer, depicting SSL termination at ingress controllers
-![High-availability Kubernetes installation of Rancher]({{}}/img/rancher/ha/rancher2ha.svg)
-
-## Installation Outline
-
-Installation of Rancher in a high-availability configuration involves multiple procedures. Review this outline to learn about each procedure you need to complete.
-
-
-
-- [1. Provision Linux Hosts](#1-provision-linux-hosts)
-- [2. Configure Load Balancer](#2-configure-load-balancer)
-- [3. Configure DNS](#3-configure-dns)
-- [4. Install RKE](#4-install-rke)
-- [5. Download RKE Config File Template](#5-download-rke-config-file-template)
-- [6. Configure Nodes](#6-configure-nodes)
-- [7. Configure Certificates](#7-configure-certificates)
-- [8. Configure FQDN](#8-configure-fqdn)
-- [9. Configure Rancher version](#9-configure-rancher-version)
-- [10. Back Up Your RKE Config File](#10-back-up-your-rke-config-file)
-- [11. Run RKE](#11-run-rke)
-- [12. Back Up Auto-Generated Config File](#12-back-up-auto-generated-config-file)
-
-
-
-## 1. Provision Linux Hosts
-
-Provision three Linux hosts according to our [Requirements]({{}}/rancher/v2.x/en/installation/requirements).
-
-## 2. Configure Load Balancer
-
-We will be using NGINX as our Layer 4 load balancer (TCP). NGINX will forward all connections to one of your Rancher nodes. If you want to use Amazon NLB, you can skip this step and use the [Amazon NLB configuration]({{}}/rancher/v2.x/en/installation/k8s-install-server-install/nlb/) instead.
-
->**Note:**
-> In this configuration, the load balancer is positioned in front of your Linux hosts. The load balancer can be any host that you have available that's capable of running NGINX.
->
->One caveat: do not use one of your Rancher nodes as the load balancer.
-
-### A. Install NGINX
-
-Start by installing NGINX on your load balancer host. NGINX has packages available for all known operating systems. For help installing NGINX, refer to their [install documentation](https://www.nginx.com/resources/wiki/start/topics/tutorials/install/).
-
-The `stream` module is required, and it is present when using the official NGINX packages. Please refer to your OS documentation for how to install and enable the NGINX `stream` module on your operating system.
-
-### B. Create NGINX Configuration
-
-After installing NGINX, you need to update the NGINX config file, `nginx.conf`, with the IP addresses for your nodes.
-
-1. Copy and paste the code sample below into your favorite text editor. Save it as `nginx.conf`.
-
-2. From `nginx.conf`, replace `IP_NODE_1`, `IP_NODE_2`, and `IP_NODE_3` with the IPs of your [Linux hosts](#1-provision-linux-hosts).
-
-    >**Note:** This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - TCP and UDP Load Balancer](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/).
-
-    **Example NGINX config:**
-    ```
-    worker_processes 4;
-    worker_rlimit_nofile 40000;
-
-    events {
-        worker_connections 8192;
-    }
-
-    http {
-        server {
-            listen 80;
-            return 301 https://$host$request_uri;
-        }
-    }
-
-    stream {
-        upstream rancher_servers {
-            least_conn;
-            server IP_NODE_1:443 max_fails=3 fail_timeout=5s;
-            server IP_NODE_2:443 max_fails=3 fail_timeout=5s;
-            server IP_NODE_3:443 max_fails=3 fail_timeout=5s;
-        }
-        server {
-            listen 443;
-            proxy_pass rancher_servers;
-        }
-    }
-    ```
-
-3. Save `nginx.conf` to your load balancer at the following path: `/etc/nginx/nginx.conf`.
-
-4. Load the updates to your NGINX configuration by running the following command:
-
-    ```
-    # nginx -s reload
-    ```
-
-### Option - Run NGINX as Docker container
-
-Instead of installing NGINX as a package on the operating system, you can run it as a Docker container. Save the edited **Example NGINX config** as `/etc/nginx.conf` and run the following command to launch the NGINX container:
-
-```
-docker run -d --restart=unless-stopped \
-  -p 80:80 -p 443:443 \
-  -v /etc/nginx.conf:/etc/nginx/nginx.conf \
-  nginx:1.14
-```
-
-## 3. Configure DNS
-
-Choose a fully qualified domain name (FQDN) that you want to use to access Rancher (e.g., `rancher.yourdomain.com`).

-
-1. Log into your DNS server and create a `DNS A` record that points to the IP address of your [load balancer](#2-configure-load-balancer).
-
-2. Validate that the `DNS A` record is working correctly. Run the following command from any terminal, replacing `HOSTNAME.DOMAIN.COM` with your chosen FQDN:
-
-    `nslookup HOSTNAME.DOMAIN.COM`
-
-    **Step Result:** The terminal displays output similar to the following:
-
-    ```
-    $ nslookup rancher.yourdomain.com
-    Server:    YOUR_HOSTNAME_IP_ADDRESS
-    Address:   YOUR_HOSTNAME_IP_ADDRESS#53
-
-    Non-authoritative answer:
-    Name:      rancher.yourdomain.com
-    Address:   LOAD_BALANCER_IP_ADDRESS
-    ```
-
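-As an additional quick check, you can also query the record directly. This is only a sketch: it assumes the `dig` utility is installed on your workstation, and it reuses the example FQDN from above.
-
-```
-# Print just the A record(s) the resolver returns for the FQDN
-dig +short rancher.yourdomain.com
-```
-
-If this prints the IP address of your load balancer, the record is resolving correctly.
-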
-
-## 4. Install RKE
-
-RKE (Rancher Kubernetes Engine) is a fast, versatile Kubernetes installer that you can use to install Kubernetes on your Linux hosts. We will use RKE to set up our cluster and run Rancher.
-
-1. Follow the [RKE Install]({{}}/rke/latest/en/installation) instructions.
-
-2. Confirm that RKE is now executable by running the following command:
-
-    ```
-    rke --version
-    ```
-
-## 5. Download RKE Config File Template
-
-RKE uses a `.yml` config file to install and configure your Kubernetes cluster. There are two templates to choose from, depending on the SSL certificate you want to use.
-
-1. Download one of the following templates, depending on the SSL certificate you're using.
-
-    - [Template for self-signed certificate
`3-node-certificate.yml`]({{}}/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-certificate)
-    - [Template for certificate signed by recognized CA
`3-node-certificate-recognizedca.yml`]({{}}/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-certificate-recognizedca)
-
-    >**Advanced Config Options:**
-    >
-    >- Want records of all transactions with the Rancher API? Enable the [API Auditing]({{}}/rancher/v2.x/en/installation/api-auditing) feature by editing your RKE config file. For more information, see how to enable it in [your RKE config file]({{}}/rancher/v2.x/en/installation/options/helm2/rke-add-on/api-auditing/).
-    >- Want to know the other config options available for your RKE template? See the [RKE Documentation: Config Options]({{}}/rke/latest/en/config-options/).
-
-
-2. Rename the file to `rancher-cluster.yml`.
-
-## 6. Configure Nodes
-
-Once you have the `rancher-cluster.yml` config file template, edit the nodes section to point toward your Linux hosts.
-
-1. Open `rancher-cluster.yml` in your favorite text editor.
-
-1. Update the `nodes` section with the information of your [Linux hosts](#1-provision-linux-hosts).
-
-    For each node in your cluster, update the following placeholders: `IP_ADDRESS_X` and `USER`. The specified user should be able to access the Docker socket; you can test this by logging in as the specified user and running `docker ps`.
-
-    >**Note:**
-    > When using RHEL/CentOS, the SSH user can't be root due to https://bugzilla.redhat.com/show_bug.cgi?id=1527565. See [Operating System Requirements]({{}}/rke/latest/en/installation/os#redhat-enterprise-linux-rhel-centos) for RHEL/CentOS specific requirements.
-
-        nodes:
-          # The IP address or hostname of the node
-          - address: IP_ADDRESS_1
-            # User that can log in to the node and has access to the Docker socket (i.e. can execute `docker ps` on the node)
-            # When using RHEL/CentOS, this can't be root due to https://bugzilla.redhat.com/show_bug.cgi?id=1527565
-            user: USER
-            role: [controlplane,etcd,worker]
-            # Path to the SSH key that can be used to access the node with the specified user
-            ssh_key_path: ~/.ssh/id_rsa
-          - address: IP_ADDRESS_2
-            user: USER
-            role: [controlplane,etcd,worker]
-            ssh_key_path: ~/.ssh/id_rsa
-          - address: IP_ADDRESS_3
-            user: USER
-            role: [controlplane,etcd,worker]
-            ssh_key_path: ~/.ssh/id_rsa
-
-1. **Optional:** By default, `rancher-cluster.yml` is configured to take backup snapshots of your data. To disable these snapshots, change the `backup` directive setting to `false`, as depicted below.
-
-        services:
-          etcd:
-            backup: false
-
-
-## 7. Configure Certificates
-
-For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, like when you log in or interact with a cluster.
-
-Choose from the following options:
-
-{{% accordion id="option-a" label="Option A—Bring Your Own Certificate: Self-Signed" %}}
-
->**Prerequisites:**
->Create a self-signed certificate.
->
->- The certificate files must be in [PEM format](#pem).
->- The certificate files must be encoded in [base64](#base64).
->- In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Intermediate Certificates](#cert-order).
-
-1.
In `kind: Secret` with `name: cattle-keys-ingress`: - - * Replace `` with the base64 encoded string of the Certificate file (usually called `cert.pem` or `domain.crt`) - * Replace `` with the base64 encoded string of the Certificate Key file (usually called `key.pem` or `domain.key`) - - >**Note:** - > The base64 encoded string should be on the same line as `tls.crt` or `tls.key`, without any newline at the beginning, in between or at the end. - - **Step Result:** After replacing the values, the file should look like the example below (the base64 encoded strings should be different): - - ```yaml - --- - apiVersion: v1 - kind: Secret - metadata: - name: cattle-keys-ingress - namespace: cattle-system - type: Opaque - data: - tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1RENDQWN5Z0F3SUJBZ0lKQUlHc25NeG1LeGxLTUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIzUmxjM1F0WTJFd0hoY05NVGd3TlRBMk1qRXdOREE1V2hjTk1UZ3dOekExTWpFd05EQTVXakFXTVJRdwpFZ1lEVlFRRERBdG9ZUzV5Ym1Ob2NpNXViRENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DCmdnRUJBTFJlMXdzekZSb2Rib2pZV05DSHA3UkdJaUVIMENDZ1F2MmdMRXNkUUNKZlcrUFEvVjM0NnQ3bSs3TFEKZXJaV3ZZMWpuY2VuWU5JSGRBU0VnU0ducWExYnhUSU9FaE0zQXpib3B0WDhjSW1OSGZoQlZETGdiTEYzUk0xaQpPM1JLTGdIS2tYSTMxZndjbU9zWGUwaElYQnpUbmxnM20vUzlXL3NTc0l1dDVwNENDUWV3TWlpWFhuUElKb21lCmpkS3VjSHFnMTlzd0YvcGVUalZrcVpuMkJHazZRaWFpMU41bldRV0pjcThTenZxTTViZElDaWlwYU9hWWQ3RFEKYWRTejV5dlF0YkxQNW4wTXpnOU43S3pGcEpvUys5QWdkWDI5cmZqV2JSekp3RzM5R3dRemN6VWtLcnZEb05JaQo0UFJHc01yclFNVXFSYjRSajNQOEJodEMxWXNDQXdFQUFhTTVNRGN3Q1FZRFZSMFRCQUl3QURBTEJnTlZIUThFCkJBTUNCZUF3SFFZRFZSMGxCQll3RkFZSUt3WUJCUVVIQXdJR0NDc0dBUVVGQndNQk1BMEdDU3FHU0liM0RRRUIKQ3dVQUE0SUJBUUNKZm5PWlFLWkowTFliOGNWUW5Vdi9NZkRZVEJIQ0pZcGM4MmgzUGlXWElMQk1jWDhQRC93MgpoOUExNkE4NGNxODJuQXEvaFZYYy9JNG9yaFY5WW9jSEg5UlcvbGthTUQ2VEJVR0Q1U1k4S292MHpHQ1ROaDZ6Ci9wZTNqTC9uU0pYSjRtQm51czJheHFtWnIvM3hhaWpYZG9kMmd3eGVhTklvRjNLbHB2aGU3ZjRBNmpsQTM0MmkKVVlCZ09iN1F5KytRZWd4U1diSmdoSzg1MmUvUUhnU2FVSkN6NW1sNGc1WndnNnBTUXhySUhCNkcvREc4dElSYwprZDMxSk1qY25Fb1Rhc1Jyc1NwVmNGdXZyQXlXN2liakZyYzhienBNcE1obDVwYUZRcEZzMnIwaXpZekhwakFsCk5ZR2I2OHJHcjBwQkp3YU5DS2ErbCtLRTk4M3A3NDYwCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K - tls.key: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdEY3WEN6TVZHaDF1aU5oWTBJZW50RVlpSVFmUUlLQkMvYUFzU3gxQUlsOWI0OUQ5ClhmanEzdWI3c3RCNnRsYTlqV09keDZkZzBnZDBCSVNCSWFlcHJWdkZNZzRTRXpjRE51aW0xZnh3aVkwZCtFRlUKTXVCc3NYZEV6V0k3ZEVvdUFjcVJjamZWL0J5WTZ4ZDdTRWhjSE5PZVdEZWI5TDFiK3hLd2k2M21uZ0lKQjdBeQpLSmRlYzhnbWlaNk4wcTV3ZXFEWDJ6QVgrbDVPTldTcG1mWUVhVHBDSnFMVTNtZFpCWWx5cnhMTytvemx0MGdLCktLbG81cGgzc05CcDFMUG5LOUMxc3MvbWZRek9EMDNzck1Xa21oTDcwQ0IxZmIydCtOWnRITW5BYmYwYkJETnoKTlNRcXU4T2cwaUxnOUVhd3l1dEF4U3BGdmhHUGMvd0dHMExWaXdJREFRQUJBb0lCQUJKYUErOHp4MVhjNEw0egpwUFd5bDdHVDRTMFRLbTNuWUdtRnZudjJBZXg5WDFBU2wzVFVPckZyTnZpK2xYMnYzYUZoSFZDUEN4N1RlMDVxClhPa2JzZnZkZG5iZFQ2RjgyMnJleVByRXNINk9TUnBWSzBmeDVaMDQwVnRFUDJCWm04eTYyNG1QZk1vbDdya2MKcm9Kd09rOEVpUHZZekpsZUd0bTAwUm1sRysyL2c0aWJsOTVmQXpyc1MvcGUyS3ZoN2NBVEtIcVh6MjlpUmZpbApiTGhBamQwcEVSMjNYU0hHR1ZqRmF3amNJK1c2L2RtbDZURDhrSzFGaUtldmJKTlREeVNXQnpPbXRTYUp1K01JCm9iUnVWWG4yZVNoamVGM1BYcHZRMWRhNXdBa0dJQWxOWjRHTG5QU2ZwVmJyU0plU3RrTGNzdEJheVlJS3BWZVgKSVVTTHM0RUNnWUVBMmNnZUE2WHh0TXdFNU5QWlNWdGhzbXRiYi9YYmtsSTdrWHlsdk5zZjFPdXRYVzkybVJneQpHcEhUQ0VubDB0Z1p3T081T1FLNjdFT3JUdDBRWStxMDJzZndwcmgwNFZEVGZhcW5QNTBxa3BmZEJLQWpmanEyCjFoZDZMd2hLeDRxSm9aelp2VkowV0lvR1ZLcjhJSjJOWGRTUVlUanZUZHhGczRTamdqNFFiaEVDZ1lFQTFBWUUKSEo3eVlza2EvS2V2OVVYbmVrSTRvMm5aYjJ1UVZXazRXSHlaY2NRN3VMQVhGY3lJcW5SZnoxczVzN3RMTzJCagozTFZNUVBzazFNY25oTTl4WE4vQ3ZDTys5b2t0RnNaMGJqWFh6NEJ5V2lFNHJPS1lhVEFwcDVsWlpUT3ZVMWNyCm05R3NwMWJoVDVZb2RaZ3IwUHQyYzR4U2krUVlEWnNFb2lFdzNkc0NnWUVBcVJLYWNweWZKSXlMZEJjZ0JycGkKQTRFalVLMWZsSjR3enNjbGFKUDVoM1NjZUFCejQzRU1YT0kvSXAwMFJsY3N6em83N3cyMmpud09mOEJSM0RBMwp6ZTRSWDIydWw4b0hGdldvdUZOTTNOZjNaNExuYXpVc0F0UGhNS2hRWGMrcEFBWGthUDJkZzZ0TU5PazFxaUNHCndvU212a1BVVE84b1ViRTB1NFZ4ZmZFQ2dZQUpPdDNROVNadUlIMFpSSitIV095enlOQTRaUEkvUkhwN0RXS1QKajVFS2Y5VnR1OVMxY1RyOTJLVVhITXlOUTNrSjg2OUZPMnMvWk85OGg5THptQ2hDTjhkOWN6enI5SnJPNUFMTApqWEtBcVFIUlpLTFgrK0ZRcXZVVlE3cTlpaHQyMEZPb3E5OE5SZDMzSGYxUzZUWDNHZ3RWQ21YSml6dDAxQ3ZHCmR4VnVnd0tCZ0M2Mlp0b0RLb3JyT2hvdTBPelprK2YwQS9rNDJBOENiL29VMGpwSzZtdmxEWmNYdUF1QVZTVXIKNXJCZjRVYmdVYndqa1ZWSFR6LzdDb1BWSjUvVUxJWk1Db1RUNFprNTZXWDk4ZE93Q3VTVFpZYnlBbDZNS1BBZApTZEpuVVIraEpnSVFDVGJ4K1dzYnh2d0FkbWErWUhtaVlPRzZhSklXMXdSd1VGOURLUEhHCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg== - ``` - -2. In `kind: Secret` with `name: cattle-keys-server`, replace `` with the base64 encoded string of the CA Certificate file (usually called `ca.pem` or `ca.crt`). - - >**Note:** - > The base64 encoded string should be on the same line as `cacerts.pem`, without any newline at the beginning, in between or at the end. 
- - - **Step Result:** The file should look like the example below (the base64 encoded string should be different): - - ```yaml - --- - apiVersion: v1 - kind: Secret - metadata: - name: cattle-keys-server - namespace: cattle-system - type: Opaque - data: - cacerts.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNvRENDQVlnQ0NRRHVVWjZuMEZWeU16QU5CZ2txaGtpRzl3MEJBUXNGQURBU01SQXdEZ1lEVlFRRERBZDAKWlhOMExXTmhNQjRYRFRFNE1EVXdOakl4TURRd09Wb1hEVEU0TURjd05USXhNRFF3T1Zvd0VqRVFNQTRHQTFVRQpBd3dIZEdWemRDMWpZVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFNQmpBS3dQCndhRUhwQTdaRW1iWWczaTNYNlppVmtGZFJGckJlTmFYTHFPL2R0RUdmWktqYUF0Wm45R1VsckQxZUlUS3UzVHgKOWlGVlV4Mmo1Z0tyWmpwWitCUnFiZ1BNbk5hS1hocmRTdDRtUUN0VFFZdGRYMVFZS0pUbWF5NU45N3FoNTZtWQprMllKRkpOWVhHWlJabkdMUXJQNk04VHZramF0ZnZOdmJ0WmtkY2orYlY3aWhXanp2d2theHRUVjZlUGxuM2p5CnJUeXBBTDliYnlVcHlad3E2MWQvb0Q4VUtwZ2lZM1dOWmN1YnNvSjhxWlRsTnN6UjVadEFJV0tjSE5ZbE93d2oKaG41RE1tSFpwZ0ZGNW14TU52akxPRUc0S0ZRU3laYlV2QzlZRUhLZTUxbGVxa1lmQmtBZWpPY002TnlWQUh1dApuay9DMHpXcGdENkIwbkVDQXdFQUFUQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFHTCtaNkRzK2R4WTZsU2VBClZHSkMvdzE1bHJ2ZXdia1YxN3hvcmlyNEMxVURJSXB6YXdCdFJRSGdSWXVtblVqOGo4T0hFWUFDUEthR3BTVUsKRDVuVWdzV0pMUUV0TDA2eTh6M3A0MDBrSlZFZW9xZlVnYjQrK1JLRVJrWmowWXR3NEN0WHhwOVMzVkd4NmNOQQozZVlqRnRQd2hoYWVEQmdma1hXQWtISXFDcEsrN3RYem9pRGpXbi8walI2VDcrSGlaNEZjZ1AzYnd3K3NjUDIyCjlDQVZ1ZFg4TWpEQ1hTcll0Y0ZINllBanlCSTJjbDhoSkJqa2E3aERpVC9DaFlEZlFFVFZDM3crQjBDYjF1NWcKdE03Z2NGcUw4OVdhMnp5UzdNdXk5bEthUDBvTXl1Ty82Tm1wNjNsVnRHeEZKSFh4WTN6M0lycGxlbTNZQThpTwpmbmlYZXc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== - ``` - -{{% /accordion %}} - -{{% accordion id="option-b" label="Option B—Bring Your Own Certificate: Signed by Recognized CA" %}} ->**Note:** -> If you are using Self Signed Certificate, [click here](#option-a-bring-your-own-certificate-self-signed) to proceed. - -If you are using a Certificate Signed By A Recognized Certificate Authority, you will need to generate a base64 encoded string for the Certificate file and the Certificate Key file. Make sure that your certificate file includes all the [intermediate certificates](#cert-order) in the chain, the order of certificates in this case is first your own certificate, followed by the intermediates. Please refer to the documentation of your CSP (Certificate Service Provider) to see what intermediate certificate(s) need to be included. - -In the `kind: Secret` with `name: cattle-keys-ingress`: - -* Replace `` with the base64 encoded string of the Certificate file (usually called `cert.pem` or `domain.crt`) -* Replace `` with the base64 encoded string of the Certificate Key file (usually called `key.pem` or `domain.key`) - -After replacing the values, the file should look like the example below (the base64 encoded strings should be different): - ->**Note:** -> The base64 encoded string should be on the same line as `tls.crt` or `tls.key`, without any newline at the beginning, in between or at the end. 
- -```yaml ---- -apiVersion: v1 -kind: Secret -metadata: - name: cattle-keys-ingress - namespace: cattle-system -type: Opaque -data: - tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1RENDQWN5Z0F3SUJBZ0lKQUlHc25NeG1LeGxLTUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIzUmxjM1F0WTJFd0hoY05NVGd3TlRBMk1qRXdOREE1V2hjTk1UZ3dOekExTWpFd05EQTVXakFXTVJRdwpFZ1lEVlFRRERBdG9ZUzV5Ym1Ob2NpNXViRENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DCmdnRUJBTFJlMXdzekZSb2Rib2pZV05DSHA3UkdJaUVIMENDZ1F2MmdMRXNkUUNKZlcrUFEvVjM0NnQ3bSs3TFEKZXJaV3ZZMWpuY2VuWU5JSGRBU0VnU0ducWExYnhUSU9FaE0zQXpib3B0WDhjSW1OSGZoQlZETGdiTEYzUk0xaQpPM1JLTGdIS2tYSTMxZndjbU9zWGUwaElYQnpUbmxnM20vUzlXL3NTc0l1dDVwNENDUWV3TWlpWFhuUElKb21lCmpkS3VjSHFnMTlzd0YvcGVUalZrcVpuMkJHazZRaWFpMU41bldRV0pjcThTenZxTTViZElDaWlwYU9hWWQ3RFEKYWRTejV5dlF0YkxQNW4wTXpnOU43S3pGcEpvUys5QWdkWDI5cmZqV2JSekp3RzM5R3dRemN6VWtLcnZEb05JaQo0UFJHc01yclFNVXFSYjRSajNQOEJodEMxWXNDQXdFQUFhTTVNRGN3Q1FZRFZSMFRCQUl3QURBTEJnTlZIUThFCkJBTUNCZUF3SFFZRFZSMGxCQll3RkFZSUt3WUJCUVVIQXdJR0NDc0dBUVVGQndNQk1BMEdDU3FHU0liM0RRRUIKQ3dVQUE0SUJBUUNKZm5PWlFLWkowTFliOGNWUW5Vdi9NZkRZVEJIQ0pZcGM4MmgzUGlXWElMQk1jWDhQRC93MgpoOUExNkE4NGNxODJuQXEvaFZYYy9JNG9yaFY5WW9jSEg5UlcvbGthTUQ2VEJVR0Q1U1k4S292MHpHQ1ROaDZ6Ci9wZTNqTC9uU0pYSjRtQm51czJheHFtWnIvM3hhaWpYZG9kMmd3eGVhTklvRjNLbHB2aGU3ZjRBNmpsQTM0MmkKVVlCZ09iN1F5KytRZWd4U1diSmdoSzg1MmUvUUhnU2FVSkN6NW1sNGc1WndnNnBTUXhySUhCNkcvREc4dElSYwprZDMxSk1qY25Fb1Rhc1Jyc1NwVmNGdXZyQXlXN2liakZyYzhienBNcE1obDVwYUZRcEZzMnIwaXpZekhwakFsCk5ZR2I2OHJHcjBwQkp3YU5DS2ErbCtLRTk4M3A3NDYwCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K - tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdEY3WEN6TVZHaDF1aU5oWTBJZW50RVlpSVFmUUlLQkMvYUFzU3gxQUlsOWI0OUQ5ClhmanEzdWI3c3RCNnRsYTlqV09keDZkZzBnZDBCSVNCSWFlcHJWdkZNZzRTRXpjRE51aW0xZnh3aVkwZCtFRlUKTXVCc3NYZEV6V0k3ZEVvdUFjcVJjamZWL0J5WTZ4ZDdTRWhjSE5PZVdEZWI5TDFiK3hLd2k2M21uZ0lKQjdBeQpLSmRlYzhnbWlaNk4wcTV3ZXFEWDJ6QVgrbDVPTldTcG1mWUVhVHBDSnFMVTNtZFpCWWx5cnhMTytvemx0MGdLCktLbG81cGgzc05CcDFMUG5LOUMxc3MvbWZRek9EMDNzck1Xa21oTDcwQ0IxZmIydCtOWnRITW5BYmYwYkJETnoKTlNRcXU4T2cwaUxnOUVhd3l1dEF4U3BGdmhHUGMvd0dHMExWaXdJREFRQUJBb0lCQUJKYUErOHp4MVhjNEw0egpwUFd5bDdHVDRTMFRLbTNuWUdtRnZudjJBZXg5WDFBU2wzVFVPckZyTnZpK2xYMnYzYUZoSFZDUEN4N1RlMDVxClhPa2JzZnZkZG5iZFQ2RjgyMnJleVByRXNINk9TUnBWSzBmeDVaMDQwVnRFUDJCWm04eTYyNG1QZk1vbDdya2MKcm9Kd09rOEVpUHZZekpsZUd0bTAwUm1sRysyL2c0aWJsOTVmQXpyc1MvcGUyS3ZoN2NBVEtIcVh6MjlpUmZpbApiTGhBamQwcEVSMjNYU0hHR1ZqRmF3amNJK1c2L2RtbDZURDhrSzFGaUtldmJKTlREeVNXQnpPbXRTYUp1K01JCm9iUnVWWG4yZVNoamVGM1BYcHZRMWRhNXdBa0dJQWxOWjRHTG5QU2ZwVmJyU0plU3RrTGNzdEJheVlJS3BWZVgKSVVTTHM0RUNnWUVBMmNnZUE2WHh0TXdFNU5QWlNWdGhzbXRiYi9YYmtsSTdrWHlsdk5zZjFPdXRYVzkybVJneQpHcEhUQ0VubDB0Z1p3T081T1FLNjdFT3JUdDBRWStxMDJzZndwcmgwNFZEVGZhcW5QNTBxa3BmZEJLQWpmanEyCjFoZDZMd2hLeDRxSm9aelp2VkowV0lvR1ZLcjhJSjJOWGRTUVlUanZUZHhGczRTamdqNFFiaEVDZ1lFQTFBWUUKSEo3eVlza2EvS2V2OVVYbmVrSTRvMm5aYjJ1UVZXazRXSHlaY2NRN3VMQVhGY3lJcW5SZnoxczVzN3RMTzJCagozTFZNUVBzazFNY25oTTl4WE4vQ3ZDTys5b2t0RnNaMGJqWFh6NEJ5V2lFNHJPS1lhVEFwcDVsWlpUT3ZVMWNyCm05R3NwMWJoVDVZb2RaZ3IwUHQyYzR4U2krUVlEWnNFb2lFdzNkc0NnWUVBcVJLYWNweWZKSXlMZEJjZ0JycGkKQTRFalVLMWZsSjR3enNjbGFKUDVoM1NjZUFCejQzRU1YT0kvSXAwMFJsY3N6em83N3cyMmpud09mOEJSM0RBMwp6ZTRSWDIydWw4b0hGdldvdUZOTTNOZjNaNExuYXpVc0F0UGhNS2hRWGMrcEFBWGthUDJkZzZ0TU5PazFxaUNHCndvU212a1BVVE84b1ViRTB1NFZ4ZmZFQ2dZQUpPdDNROVNadUlIMFpSSitIV095enlOQTRaUEkvUkhwN0RXS1QKajVFS2Y5VnR1OVMxY1RyOTJLVVhITXlOUTNrSjg2OUZPMnMvWk85OGg5THptQ2hDTjhkOWN6enI5SnJPNUFMTApqWEtBcVFIUlpLTFgrK0ZRcXZVVlE3cTlpaHQyMEZPb3E5OE5SZDMzSGYxUzZUWDNHZ3RWQ21YSml6dDAxQ3ZHCmR4VnVnd0tCZ0M2Mlp0b0RLb3JyT2hvdTB
PelprK2YwQS9rNDJBOENiL29VMGpwSzZtdmxEWmNYdUF1QVZTVXIKNXJCZjRVYmdVYndqa1ZWSFR6LzdDb1BWSjUvVUxJWk1Db1RUNFprNTZXWDk4ZE93Q3VTVFpZYnlBbDZNS1BBZApTZEpuVVIraEpnSVFDVGJ4K1dzYnh2d0FkbWErWUhtaVlPRzZhSklXMXdSd1VGOURLUEhHCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
-```
-
-{{% /accordion %}}
-
-
-
-## 8. Configure FQDN
-
-There are two references to `<FQDN>` in the config file (one in this step and one in the next). Both need to be replaced with the FQDN chosen in [Configure DNS](#3-configure-dns).
-
-In the `kind: Ingress` with `name: cattle-ingress-http`:
-
-* Replace `<FQDN>` with the FQDN chosen in [Configure DNS](#3-configure-dns).
-
-After replacing `<FQDN>` with the FQDN chosen in [Configure DNS](#3-configure-dns), the file should look like the example below (`rancher.yourdomain.com` is the FQDN used in this example):
-
-```yaml
- ---
- apiVersion: extensions/v1beta1
- kind: Ingress
- metadata:
-   namespace: cattle-system
-   name: cattle-ingress-http
-   annotations:
-     nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
-     nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"   # Max time in seconds for ws to remain shell window open
-     nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"   # Max time in seconds for ws to remain shell window open
- spec:
-   rules:
-   - host: rancher.yourdomain.com
-     http:
-       paths:
-       - backend:
-           serviceName: cattle-service
-           servicePort: 80
-   tls:
-   - secretName: cattle-keys-ingress
-     hosts:
-     - rancher.yourdomain.com
-```
-
-Save the `.yml` file and close it.
-
-## 9. Configure Rancher version
-
-The last reference that needs to be replaced is `<RANCHER_VERSION>`. This needs to be replaced with a Rancher version which is marked as stable. The latest stable release of Rancher can be found in the [GitHub README](https://github.com/rancher/rancher/blob/master/README.md). Make sure the version is an actual version number, and not a named tag like `stable` or `latest`. The example below shows the version configured to `v2.0.6`.
-
-```
-   spec:
-     serviceAccountName: cattle-admin
-     containers:
-     - image: rancher/rancher:v2.0.6
-       imagePullPolicy: Always
-```
-
-## 10. Back Up Your RKE Config File
-
-After you close your `.yml` file, back it up to a secure location. You can use this file again when it's time to upgrade Rancher.
-
-## 11. Run RKE
-
-With all configuration in place, use RKE to launch Rancher. You can complete this action by running the `rke up` command and using the `--config` parameter to point toward your config file.
-
-1. From your workstation, make sure `rancher-cluster.yml` and the downloaded `rke` binary are in the same directory.
-
-2. Open a Terminal instance. Change to the directory that contains your config file and `rke`.
-
-3. Enter the `rke up` command listed below.
-
-```
-rke up --config rancher-cluster.yml
-```
-
-**Step Result:** The output should be similar to the snippet below:
-
-```
-INFO[0000] Building Kubernetes cluster
-INFO[0000] [dialer] Setup tunnel for host [1.1.1.1]
-INFO[0000] [network] Deploying port listener containers
-INFO[0000] [network] Pulling image [alpine:latest] on host [1.1.1.1]
-...
-INFO[0101] Finished building Kubernetes cluster successfully
-```
-
-## 12. Back Up Auto-Generated Config File
-
-During installation, RKE automatically generates a config file named `kube_config_rancher-cluster.yml` in the same directory as the RKE binary. Copy this file and back it up to a safe location. You'll use this file later when upgrading Rancher Server.
-
-## What's Next?
-
-You have a couple of options:
-
-- Create a backup of your Rancher Server in case of a disaster scenario: [High Availability Back Up and Restoration]({{}}/rancher/v2.x/en/installation/backups-and-restoration/ha-backup-and-restoration).
-- Create a Kubernetes cluster: [Provisioning Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-provisioning/).
-
-
-## FAQ and Troubleshooting
-
-{{< ssl_faq_ha >}}
diff --git a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-4-lb/nlb/_index.md b/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-4-lb/nlb/_index.md
deleted file mode 100644
index 1e6bdcffe44..00000000000
--- a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-4-lb/nlb/_index.md
+++ /dev/null
@@ -1,181 +0,0 @@
----
-title: Amazon NLB Configuration
-weight: 277
-aliases:
-- /rancher/v2.x/en/installation/ha-server-install/nlb/
----
-
-> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
->
->Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/options/helm2/#installation-outline).
->
->If you are currently using the RKE add-on install method, see [Migrating from a High-availability Kubernetes install with an RKE add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
-
-## Objectives
-
-Configuring an Amazon NLB is a multistage process. We've broken it down into multiple tasks so that it's easy to follow.
-
-1. [Create Target Groups](#create-target-groups)
-
-   Begin by creating two target groups for the **TCP** protocol, one for TCP port 443 and one for TCP port 80 (which redirects to TCP port 443). You'll add your Linux nodes to these groups.
-
-2. [Register Targets](#register-targets)
-
-   Add your Linux nodes to the target groups.
-
-3. [Create Your NLB](#create-your-nlb)
-
-   Use Amazon's Wizard to create a Network Load Balancer. As part of this process, you'll add the target groups you created in **1. Create Target Groups**.
-
-
-## Create Target Groups
-
-Your first NLB configuration step is to create two target groups. Technically, only port 443 is needed to access Rancher, but it's convenient to add a listener for port 80, which will be redirected to port 443 automatically. The NGINX controller on the nodes will make sure that port 80 gets redirected to port 443.
-
-Log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) to get started; make sure to select the **Region** where your EC2 instances (Linux nodes) are created.
-
-The Target Groups configuration resides in the **Load Balancing** section of the **EC2** service. Select **Services** and choose **EC2**, find the section **Load Balancing** and open **Target Groups**.
-
-{{< img "/img/rancher/ha/nlb/ec2-loadbalancing.png" "EC2 Load Balancing section">}}
-
-Click **Create target group** to create the first target group, for TCP port 443.
-
-### Target Group (TCP port 443)
-
-Configure the first target group according to the table below. Screenshots of the configuration are shown just below the table.
-
-Option | Setting
----------------------------------------|------------------------------------
-Target Group Name | `rancher-tcp-443`
-Protocol | `TCP`
-Port | `443`
-Target type | `instance`
-VPC | Choose your VPC
-Protocol (Health Check) | `HTTP`
-Path (Health Check) | `/healthz`
-Port (Advanced health check) | `override`,`80`
-Healthy threshold (Advanced health) | `3`
-Unhealthy threshold (Advanced) | `3`
-Timeout (Advanced) | `6 seconds`
-Interval (Advanced) | `10 seconds`
-Success codes | `200-399`
-
-**Screenshot Target group TCP port 443 settings**
-{{< img "/img/rancher/ha/nlb/create-targetgroup-443.png" "Target group 443">}}
-
-**Screenshot Target group TCP port 443 Advanced settings**
-{{< img "/img/rancher/ha/nlb/create-targetgroup-443-advanced.png" "Target group 443 Advanced">}}
-
-
-Click **Create target group** to create the second target group, for TCP port 80.
-
-### Target Group (TCP port 80)
-
-Configure the second target group according to the table below. Screenshots of the configuration are shown just below the table.
-
-Option | Setting
---------------------------------------|------------------------------------
-Target Group Name | `rancher-tcp-80`
-Protocol | `TCP`
-Port | `80`
-Target type | `instance`
-VPC | Choose your VPC
(Health Check) | `HTTP` -Path
(Health Check) | `/healthz` -Port (Advanced health check) | `traffic port` -Healthy threshold (Advanced health) | `3` -Unhealthy threshold (Advanced) | `3` -Timeout (Advanced) | `6 seconds` -Interval (Advanced) | `10 second` -Success codes | `200-399` - -
-**Screenshot Target group TCP port 80 settings**
-{{< img "/img/rancher/ha/nlb/create-targetgroup-80.png" "Target group 80">}} - -
-**Screenshot Target group TCP port 80 Advanced settings**
-{{< img "/img/rancher/ha/nlb/create-targetgroup-80-advanced.png" "Target group 80 Advanced">}} - -
- -## Register Targets - -Next, add your Linux nodes to both target groups. - -Select the target group named **rancher-tcp-443**, click the tab **Targets** and choose **Edit**. - -{{< img "/img/rancher/ha/nlb/edit-targetgroup-443.png" "Edit target group 443">}} - -Select the instances (Linux nodes) you want to add, and click **Add to registered**. - -
-**Screenshot Add targets to target group TCP port 443**
- -{{< img "/img/rancher/ha/nlb/add-targets-targetgroup-443.png" "Add targets to target group 443">}} - -
-**Screenshot Added targets to target group TCP port 443**
- -{{< img "/img/rancher/ha/nlb/added-targets-targetgroup-443.png" "Added targets to target group 443">}} - -When the instances are added, click **Save** on the bottom right of the screen. - -Repeat those steps, replacing **rancher-tcp-443** with **rancher-tcp-80**. The same instances need to be added as targets to this target group. - -## Create Your NLB - -Use Amazon's Wizard to create a Network Load Balancer. As part of this process, you'll add the target groups you created in [Create Target Groups](#create-target-groups). - -1. From your web browser, navigate to the [Amazon EC2 Console](https://console.aws.amazon.com/ec2/). - -2. From the navigation pane, choose **LOAD BALANCING** > **Load Balancers**. - -3. Click **Create Load Balancer**. - -4. Choose **Network Load Balancer** and click **Create**. - -5. Complete the **Step 1: Configure Load Balancer** form. - - **Basic Configuration** - - - Name: `rancher` - - Scheme: `internet-facing` - - **Listeners** - - Add the **Load Balancer Protocols** and **Load Balancer Ports** below. - - `TCP`: `443` - - - **Availability Zones** - - - Select your **VPC** and **Availability Zones**. - -6. Complete the **Step 2: Configure Routing** form. - - - From the **Target Group** drop-down, choose **Existing target group**. - - - From the **Name** drop-down, choose `rancher-tcp-443`. - - - Open **Advanced health check settings**, and configure **Interval** to `10 seconds`. - -7. Complete **Step 3: Register Targets**. Since you registered your targets earlier, all you have to do is click **Next: Review**. - -8. Complete **Step 4: Review**. Look over the load balancer details and click **Create** when you're satisfied. - -9. After AWS creates the NLB, click **Close**. - -## Add listener to NLB for TCP port 80 - -1. Select your newly created NLB and select the **Listeners** tab. - -2. Click **Add listener**. - -3. Use `TCP`:`80` as **Protocol** : **Port**. - -4. Click **Add action** and choose **Forward to...** - -5. From the **Forward to** drop-down, choose `rancher-tcp-80`. - -6. Click **Save** in the top right of the screen. diff --git a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-7-lb/_index.md b/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-7-lb/_index.md deleted file mode 100644 index f5485b14c82..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-7-lb/_index.md +++ /dev/null @@ -1,288 +0,0 @@ ---- -title: Kubernetes Install with External Load Balancer (HTTPS/Layer 7) -weight: 276 -aliases: -- /rancher/v2.x/en/installation/ha-server-install-external-lb/ ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/options/helm2/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -This procedure walks you through setting up a 3-node cluster using the Rancher Kubernetes Engine (RKE). The cluster's sole purpose is running pods for Rancher.
The setup is based on: - -- Layer 7 load balancer with SSL termination (HTTPS) -- [NGINX Ingress controller (HTTP)](https://kubernetes.github.io/ingress-nginx/) - -In a Kubernetes setup that uses a layer 7 load balancer, the load balancer accepts Rancher client connections over the HTTP protocol (i.e., the application level). This application-level access allows the load balancer to read client requests and then redirect them to cluster nodes using logic that optimally distributes load. - -Kubernetes Rancher install with layer 7 load balancer, depicting SSL termination at load balancer -![Rancher HA]({{}}/img/rancher/ha/rancher2ha-l7.svg) - -## Installation Outline - -Installation of Rancher in a high-availability configuration involves multiple procedures. Review this outline to learn about each procedure you need to complete. - - - -- [1. Provision Linux Hosts](#1-provision-linux-hosts) -- [2. Configure Load Balancer](#2-configure-load-balancer) -- [3. Configure DNS](#3-configure-dns) -- [4. Install RKE](#4-install-rke) -- [5. Download RKE Config File Template](#5-download-rke-config-file-template) -- [6. Configure Nodes](#6-configure-nodes) -- [7. Configure Certificates](#7-configure-certificates) -- [8. Configure FQDN](#8-configure-fqdn) -- [9. Configure Rancher version](#9-configure-rancher-version) -- [10. Back Up Your RKE Config File](#10-back-up-your-rke-config-file) -- [11. Run RKE](#11-run-rke) -- [12. Back Up Auto-Generated Config File](#12-back-up-auto-generated-config-file) - - - -## 1. Provision Linux Hosts - -Provision three Linux hosts according to our [Requirements]({{}}/rancher/v2.x/en/installation/requirements). - -## 2. Configure Load Balancer - -When using a load balancer in front of Rancher, there's no need for the container to redirect port communication from port 80 or port 443. By passing the header `X-Forwarded-Proto: https`, this redirect is disabled. This is the expected configuration when terminating SSL externally. - -The load balancer has to be configured to support the following: - -* **WebSocket** connections -* **SPDY** / **HTTP/2** protocols -* Passing / setting the following headers: - -| Header | Value | Description | -|---------------------|----------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `Host` | FQDN used to reach Rancher. | To identify the server requested by the client. | -| `X-Forwarded-Proto` | `https` | To identify the protocol that a client used to connect to the load balancer.

**Note:** If this header is present, `rancher/rancher` does not redirect HTTP to HTTPS. | -| `X-Forwarded-Port` | Port used to reach Rancher. | To identify the port that the client used to connect to the load balancer. | -| `X-Forwarded-For` | IP of the client connection. | To identify the originating IP address of a client. | - -Health checks can be executed on the `/healthz` endpoint of the node; this will return HTTP 200. - -We have example configurations for the following load balancers: - -* [Amazon ALB configuration](alb/) -* [NGINX configuration](nginx/) - -## 3. Configure DNS - -Choose a fully qualified domain name (FQDN) that you want to use to access Rancher (e.g., `rancher.yourdomain.com`).

- -1. Log into your DNS server and create a `DNS A` record that points to the IP address of your [load balancer](#2-configure-load-balancer). - -2. Validate that the `DNS A` record is working correctly. Run the following command from any terminal, replacing `HOSTNAME.DOMAIN.COM` with your chosen FQDN: - - `nslookup HOSTNAME.DOMAIN.COM` - - **Step Result:** Terminal displays output similar to the following: - - ``` - $ nslookup rancher.yourdomain.com - Server: YOUR_HOSTNAME_IP_ADDRESS - Address: YOUR_HOSTNAME_IP_ADDRESS#53 - - Non-authoritative answer: - Name: rancher.yourdomain.com - Address: HOSTNAME.DOMAIN.COM - ``` -
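If `dig` is available on your workstation, a shorter check of the same record looks like this (using the example FQDN from above):

```
dig +short rancher.yourdomain.com
```

The output should be the IP address of your load balancer.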
- -## 4. Install RKE - -RKE (Rancher Kubernetes Engine) is a fast, versatile Kubernetes installer that you can use to install Kubernetes on your Linux hosts. We will use RKE to set up our cluster and run Rancher. - -1. Follow the [RKE Install]({{}}/rke/latest/en/installation) instructions. - -2. Confirm that RKE is now executable by running the following command: - - ``` - rke --version - ``` - -## 5. Download RKE Config File Template - -RKE uses a YAML config file to install and configure your Kubernetes cluster. There are 2 templates to choose from, depending on the SSL certificate you want to use. - -1. Download one of the following templates, depending on the SSL certificate you're using. - - - [Template for self-signed certificate
`3-node-externalssl-certificate.yml`]({{}}/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-externalssl-certificate) - - [Template for certificate signed by recognized CA
`3-node-externalssl-recognizedca.yml`]({{}}/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-externalssl-recognizedca) - - >**Advanced Config Options:** - > - >- Want records of all transactions with the Rancher API? Enable the [API Auditing]({{}}/rancher/v2.x/en/installation/api-auditing) feature by editing your RKE config file. For more information, see how to enable it in [your RKE config file]({{}}/rancher/v2.x/en/installation/options/helm2/rke-add-on/api-auditing/). - >- Want to know the other config options available for your RKE template? See the [RKE Documentation: Config Options]({{}}/rke/latest/en/config-options/). - - -2. Rename the file to `rancher-cluster.yml`. - -## 6. Configure Nodes - -Once you have the `rancher-cluster.yml` config file template, edit the nodes section to point toward your Linux hosts. - -1. Open `rancher-cluster.yml` in your favorite text editor. - -1. Update the `nodes` section with the information of your [Linux hosts](#1-provision-linux-hosts). - - For each node in your cluster, update the following placeholders: `IP_ADDRESS_X` and `USER`. The specified user should be able to access the Docker socket; you can test this by logging in with the specified user and running `docker ps`. - - >**Note:** - > - >When using RHEL/CentOS, the SSH user can't be root due to https://bugzilla.redhat.com/show_bug.cgi?id=1527565. See [Operating System Requirements]({{}}/rke/latest/en/installation/os#redhat-enterprise-linux-rhel-centos) for RHEL/CentOS specific requirements. - - nodes: - # The IP address or hostname of the node - - address: IP_ADDRESS_1 - # User that can log in to the node and has access to the Docker socket (i.e. can execute `docker ps` on the node) - # When using RHEL/CentOS, this can't be root due to https://bugzilla.redhat.com/show_bug.cgi?id=1527565 - user: USER - role: [controlplane,etcd,worker] - # Path to the SSH key that can be used to access the node with the specified user - ssh_key_path: ~/.ssh/id_rsa - - address: IP_ADDRESS_2 - user: USER - role: [controlplane,etcd,worker] - ssh_key_path: ~/.ssh/id_rsa - - address: IP_ADDRESS_3 - user: USER - role: [controlplane,etcd,worker] - ssh_key_path: ~/.ssh/id_rsa - -1. **Optional:** By default, `rancher-cluster.yml` is configured to take backup snapshots of your data. To disable these snapshots, change the `backup` directive setting to `false`, as depicted below. - - services: - etcd: - backup: false - -## 7. Configure Certificates - -For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, like when you log in or interact with a cluster. - -Choose from the following options: - -{{% accordion id="option-a" label="Option A—Bring Your Own Certificate: Self-Signed" %}} ->**Prerequisites:** ->Create a self-signed certificate. -> ->- The certificate files must be in [PEM format](#pem). ->- The certificate files must be encoded in [base64](#base64). ->- In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [SSL FAQ / Troubleshooting](#cert-order). - -In `kind: Secret` with `name: cattle-keys-server`, replace `` with the base64 encoded string of the CA Certificate file (usually called `ca.pem` or `ca.crt`). - ->**Note:** The base64 encoded string should be on the same line as `cacerts.pem`, without any newline at the beginning, in between or at the end.
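One way to produce that single-line base64 string is shown below. This is a sketch assuming your CA certificate is in a file named `ca.pem`; GNU `base64` wraps its output by default, so `-w0` is needed on Linux.

```
# Linux (GNU coreutils): -w0 disables line wrapping
base64 -w0 ca.pem

# macOS (BSD base64 does not wrap by default)
base64 < ca.pem
```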
- -After replacing the values, the file should look like the example below (the base64 encoded strings should be different): - - --- - apiVersion: v1 - kind: Secret - metadata: - name: cattle-keys-server - namespace: cattle-system - type: Opaque - data: - cacerts.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNvRENDQVlnQ0NRRHVVWjZuMEZWeU16QU5CZ2txaGtpRzl3MEJBUXNGQURBU01SQXdEZ1lEVlFRRERBZDAKWlhOMExXTmhNQjRYRFRFNE1EVXdOakl4TURRd09Wb1hEVEU0TURjd05USXhNRFF3T1Zvd0VqRVFNQTRHQTFVRQpBd3dIZEdWemRDMWpZVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFNQmpBS3dQCndhRUhwQTdaRW1iWWczaTNYNlppVmtGZFJGckJlTmFYTHFPL2R0RUdmWktqYUF0Wm45R1VsckQxZUlUS3UzVHgKOWlGVlV4Mmo1Z0tyWmpwWitCUnFiZ1BNbk5hS1hocmRTdDRtUUN0VFFZdGRYMVFZS0pUbWF5NU45N3FoNTZtWQprMllKRkpOWVhHWlJabkdMUXJQNk04VHZramF0ZnZOdmJ0WmtkY2orYlY3aWhXanp2d2theHRUVjZlUGxuM2p5CnJUeXBBTDliYnlVcHlad3E2MWQvb0Q4VUtwZ2lZM1dOWmN1YnNvSjhxWlRsTnN6UjVadEFJV0tjSE5ZbE93d2oKaG41RE1tSFpwZ0ZGNW14TU52akxPRUc0S0ZRU3laYlV2QzlZRUhLZTUxbGVxa1lmQmtBZWpPY002TnlWQUh1dApuay9DMHpXcGdENkIwbkVDQXdFQUFUQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFHTCtaNkRzK2R4WTZsU2VBClZHSkMvdzE1bHJ2ZXdia1YxN3hvcmlyNEMxVURJSXB6YXdCdFJRSGdSWXVtblVqOGo4T0hFWUFDUEthR3BTVUsKRDVuVWdzV0pMUUV0TDA2eTh6M3A0MDBrSlZFZW9xZlVnYjQrK1JLRVJrWmowWXR3NEN0WHhwOVMzVkd4NmNOQQozZVlqRnRQd2hoYWVEQmdma1hXQWtISXFDcEsrN3RYem9pRGpXbi8walI2VDcrSGlaNEZjZ1AzYnd3K3NjUDIyCjlDQVZ1ZFg4TWpEQ1hTcll0Y0ZINllBanlCSTJjbDhoSkJqa2E3aERpVC9DaFlEZlFFVFZDM3crQjBDYjF1NWcKdE03Z2NGcUw4OVdhMnp5UzdNdXk5bEthUDBvTXl1Ty82Tm1wNjNsVnRHeEZKSFh4WTN6M0lycGxlbTNZQThpTwpmbmlYZXc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== - -{{% /accordion %}} -{{% accordion id="option-b" label="Option B—Bring Your Own Certificate: Signed by Recognized CA" %}} -If you are using a Certificate Signed By A Recognized Certificate Authority, you don't need to perform any step in this part. -{{% /accordion %}} - -## 8. Configure FQDN - -There is one reference to `` in the RKE config file. Replace this reference with the FQDN you chose in [3. Configure DNS](#3-configure-dns). - -1. Open `rancher-cluster.yml`. - -2. In the `kind: Ingress` with `name: cattle-ingress-http:` - - Replace `` with the FQDN chosen in [3. Configure DNS](#3-configure-dns). - - **Step Result:** After replacing the values, the file should look like the example below (the base64 encoded strings should be different): - - ``` - apiVersion: extensions/v1beta1 - kind: Ingress - metadata: - namespace: cattle-system - name: cattle-ingress-http - annotations: - nginx.ingress.kubernetes.io/proxy-connect-timeout: "30" - nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for ws to remain shell window open - nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for ws to remain shell window open - spec: - rules: - - host: rancher.yourdomain.com - http: - paths: - - backend: - serviceName: cattle-service - servicePort: 80 - ``` - - -3. Save the file and close it. - -## 9. Configure Rancher version - -The last reference that needs to be replaced is ``. This needs to be replaced with a Rancher version which is marked as stable. The latest stable release of Rancher can be found in the [GitHub README](https://github.com/rancher/rancher/blob/master/README.md). Make sure the version is an actual version number, and not a named tag like `stable` or `latest`. The example below shows the version configured to `v2.0.6`. - -``` - spec: - serviceAccountName: cattle-admin - containers: - - image: rancher/rancher:v2.0.6 - imagePullPolicy: Always -``` - -## 10. 
Back Up Your RKE Config File - -After you close your RKE config file, `rancher-cluster.yml`, back it up to a secure location. You can use this file again when it's time to upgrade Rancher. - -## 11. Run RKE - -With all configuration in place, use RKE to launch Rancher. You can complete this action by running the `rke up` command and using the `--config` parameter to point toward your config file. - -1. From your workstation, make sure `rancher-cluster.yml` and the downloaded `rke` binary are in the same directory. - -2. Open a Terminal instance. Change to the directory that contains your config file and `rke`. - -3. Enter the `rke up` command listed below. - - ``` - rke up --config rancher-cluster.yml - ``` - - **Step Result:** The output should be similar to the snippet below: - - ``` - INFO[0000] Building Kubernetes cluster - INFO[0000] [dialer] Setup tunnel for host [1.1.1.1] - INFO[0000] [network] Deploying port listener containers - INFO[0000] [network] Pulling image [alpine:latest] on host [1.1.1.1] - ... - INFO[0101] Finished building Kubernetes cluster successfully - ``` - -## 12. Back Up Auto-Generated Config File - -During installation, RKE automatically generates a config file named `kube_config_rancher-cluster.yml` in the same directory as the `rancher-cluster.yml` file. Copy this file and back it up to a safe location. You'll use this file later when upgrading Rancher Server. - -## What's Next? - -- **Recommended:** Review [Creating Backups—High Availability Back Up and Restoration]({{}}/rancher/v2.x/en/backups/backups/ha-backups/) to learn how to back up your Rancher Server in case of a disaster scenario. -- Create a Kubernetes cluster: [Creating a Cluster]({{}}/rancher/v2.x/en/tasks/clusters/creating-a-cluster/). -
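Before moving on, you can confirm the cluster responds by pointing `kubectl` at the kubeconfig you just backed up; these are the same commands used later in the troubleshooting pages:

```
kubectl --kubeconfig kube_config_rancher-cluster.yml get nodes
kubectl --kubeconfig kube_config_rancher-cluster.yml get pods --all-namespaces
```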
- -## FAQ and Troubleshooting - -{{< ssl_faq_ha >}} diff --git a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-7-lb/alb/_index.md b/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-7-lb/alb/_index.md deleted file mode 100644 index 3741167921f..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-7-lb/alb/_index.md +++ /dev/null @@ -1,103 +0,0 @@ ---- -title: Amazon ALB Configuration -weight: 277 -aliases: -- /rancher/v2.x/en/installation/ha-server-install-external-lb/alb/ ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/options/helm2/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -## Objectives - -Configuring an Amazon ALB is a multistage process. We've broken it down into multiple tasks so that it's easy to follow. - -1. [Create Target Group](#create-target-group) - - Begin by creating one target group for the HTTP protocol. You'll add your Linux nodes to this group. - -2. [Register Targets](#register-targets) - - Add your Linux nodes to the target group. - -3. [Create Your ALB](#create-your-alb) - - Use Amazon's Wizard to create an Application Load Balancer. As part of this process, you'll add the target group you created in **1. Create Target Group**. - - -## Create Target Group - -Your first ALB configuration step is to create one target group for HTTP. - -Log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) to get started. - -The document below will guide you through this process. Use the data in the table below to complete the procedure. - -[Amazon Documentation: Create a Target Group](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-target-group.html) - -### Target Group (HTTP) - -Option | Setting -----------------------------|------------------------------------ -Target Group Name | `rancher-http-80` -Protocol | `HTTP` -Port | `80` -Target type | `instance` -VPC | Choose your VPC -Protocol
(Health Check) | `HTTP` -Path
(Health Check) | `/healthz` - -## Register Targets - -Next, add your Linux nodes to your target group. - -[Amazon Documentation: Register Targets with Your Target Group](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-register-targets.html) - -### Create Your ALB - -Use Amazon's Wizard to create an Application Load Balancer. As part of this process, you'll add the target group you created in [Create Target Group](#create-target-group). - -1. From your web browser, navigate to the [Amazon EC2 Console](https://console.aws.amazon.com/ec2/). - -2. From the navigation pane, choose **LOAD BALANCING** > **Load Balancers**. - -3. Click **Create Load Balancer**. - -4. Choose **Application Load Balancer**. - -5. Complete the **Step 1: Configure Load Balancer** form. - - **Basic Configuration** - - - Name: `rancher-http` - - Scheme: `internet-facing` - - IP address type: `ipv4` - - **Listeners** - - Add the **Load Balancer Protocols** and **Load Balancer Ports** below. - - `HTTP`: `80` - - `HTTPS`: `443` - - - **Availability Zones** - - - Select your **VPC** and **Availability Zones**. - -6. Complete the **Step 2: Configure Security Settings** form. - - Configure the certificate you want to use for SSL termination. - -7. Complete the **Step 3: Configure Security Groups** form. - -8. Complete the **Step 4: Configure Routing** form. - - - From the **Target Group** drop-down, choose **Existing target group**. - - - Add target group `rancher-http-80`. - -9. Complete **Step 5: Register Targets**. Since you registered your targets earlier, all you have to do is click **Next: Review**. - -10. Complete **Step 6: Review**. Look over the load balancer details and click **Create** when you're satisfied. - -11. After AWS creates the ALB, click **Close**. diff --git a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-7-lb/nginx/_index.md b/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-7-lb/nginx/_index.md deleted file mode 100644 index 00ed78da136..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-7-lb/nginx/_index.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -title: NGINX Configuration -weight: 277 -aliases: -- /rancher/v2.x/en/installation/ha-server-install-external-lb/nginx/ ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/options/helm2/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -## Install NGINX - -Start by installing NGINX on your load balancer host. NGINX has packages available for all known operating systems. - -For help installing NGINX, refer to their [install documentation](https://www.nginx.com/resources/wiki/start/topics/tutorials/install/). - -## Create NGINX Configuration - -See [Example NGINX config]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/#example-nginx-config).
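The linked example is the authoritative reference. For orientation only, below is a minimal sketch of a configuration that terminates SSL and sets the headers required by the layer 7 install page; the node IPs, certificate paths, and FQDN are placeholders you would replace with your own values.

```
worker_processes 4;

events {
    worker_connections 8192;
}

http {
    upstream rancher {
        server IP_NODE_1:80;
        server IP_NODE_2:80;
        server IP_NODE_3:80;
    }

    server {
        listen 443 ssl;
        server_name rancher.yourdomain.com;
        ssl_certificate /certs/fullchain.pem;
        ssl_certificate_key /certs/privkey.pem;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # WebSocket support
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $http_connection;
            proxy_pass http://rancher;
            proxy_read_timeout 900s;
        }
    }

    server {
        listen 80;
        return 301 https://$host$request_uri;
    }
}
```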
- -## Run NGINX - -* Reload or restart NGINX - - ```` - # Reload NGINX - nginx -s reload - - # Restart NGINX - # Depending on your Linux distribution - service nginx restart - systemctl restart nginx - ```` - -## Browse to Rancher UI - -You should now be able to browse to `https://FQDN`. diff --git a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/proxy/_index.md b/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/proxy/_index.md deleted file mode 100644 index 5e7eb1f4a80..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/proxy/_index.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -title: HTTP Proxy Configuration -weight: 277 ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/options/helm2/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -If you operate Rancher behind a proxy and you want to access services through the proxy (such as retrieving catalogs), you must provide Rancher information about your proxy. As Rancher is written in Go, it uses the common proxy environment variables as shown below. - -Make sure `NO_PROXY` contains the network addresses, network address ranges and domains that should be excluded from using the proxy. - -Environment variable | Purpose ---------------------------|--------- -HTTP_PROXY | Proxy address to use when initiating HTTP connection(s) -HTTPS_PROXY | Proxy address to use when initiating HTTPS connection(s) -NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s) - -> **Note** NO_PROXY must be in uppercase to use network range (CIDR) notation. - -## Kubernetes installation - -When using a Kubernetes installation, the environment variables need to be added to the RKE Config File template. - -* [Kubernetes Installation with External Load Balancer (TCP/Layer 4) RKE Config File Template]({{}}/rancher/v2.x/en/installation/k8s-install-server-install/#5-download-rke-config-file-template) -* [Kubernetes Installation with External Load Balancer (HTTPS/Layer 7) RKE Config File Template]({{}}/rancher/v2.x/en/installation/k8s-install-server-install-external-lb/#5-download-rke-config-file-template) - -The environment variables should be defined in the `Deployment` inside the RKE Config File Template. You only have to add the part starting with `env:` up to (but not including) `ports:`. Make sure the indentation is identical to the preceding `name:`. Required values for `NO_PROXY` are: - -* `localhost` -* `127.0.0.1` -* `0.0.0.0` -* Configured `service_cluster_ip_range` (default: `10.43.0.0/16`) - -The example below is based on a proxy server accessible at `http://192.168.10.1:3128`, and excluding usage of the proxy when accessing network range `192.168.10.0/24`, the configured `service_cluster_ip_range` (`10.43.0.0/16`) and every hostname under the domain `example.com`. If you have changed the `service_cluster_ip_range`, you have to update the value below accordingly. - -```yaml -...
--- - kind: Deployment - apiVersion: extensions/v1beta1 - metadata: - namespace: cattle-system - name: cattle - spec: - replicas: 1 - template: - metadata: - labels: - app: cattle - spec: - serviceAccountName: cattle-admin - containers: - - image: rancher/rancher:latest - imagePullPolicy: Always - name: cattle-server - env: - - name: HTTP_PROXY - value: "http://192.168.10.1:3128" - - name: HTTPS_PROXY - value: "http://192.168.10.1:3128" - - name: NO_PROXY - value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,192.168.10.0/24,example.com" - ports: -... -``` diff --git a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/troubleshooting/404-default-backend/_index.md b/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/troubleshooting/404-default-backend/_index.md deleted file mode 100644 index 4571ade2775..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/troubleshooting/404-default-backend/_index.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -title: 404 - default backend -weight: 30 -aliases: -- /rancher/v2.x/en/installation/troubleshooting-ha/404-default-backend/ ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/options/helm2/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -To debug issues around this error, you will need to download the command-line tool `kubectl`. See [Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for how to download `kubectl` for your platform. - -When you have made changes to `rancher-cluster.yml`, you will have to run `rke remove --config rancher-cluster.yml` to clean the nodes, so that previous configuration errors do not conflict with the new configuration. - -### Possible causes - -The nginx ingress controller is not able to serve the configured host in `rancher-cluster.yml`. This should be the FQDN you configured to access Rancher. You can check if it is properly configured by viewing the ingress that is created by running the following command: - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml get ingress -n cattle-system -o wide -``` - -Check if the `HOSTS` column is displaying the FQDN you configured in the template, and that the used nodes are listed in the `ADDRESS` column. If that is configured correctly, we can check the logging of the nginx ingress controller. - -The logging of the nginx ingress controller will show why it cannot serve the requested host. To view the logs, you can run the following command: - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l app=ingress-nginx -n ingress-nginx -``` - -Errors - -* `x509: certificate is valid for fqdn, not your_configured_fqdn` - -The used certificates do not contain the correct hostname. Generate new certificates that contain the chosen FQDN to access Rancher and redeploy. - -* `Port 80 is already in use. Please check the flag --http-port` - -There is a process on the node occupying port 80; this port is needed for the nginx ingress controller to route requests to Rancher. You can find the process by running the command: `netstat -plant | grep \:80`. - -Stop/kill the process and redeploy.
- -* `unexpected error creating pem file: no valid PEM formatted block found` - -The base64 encoded string configured in the template is not valid. Please check if you can decode the configured string using `base64 -D STRING`, this should return the same output as the content of the file you used to generate the string. If this is correct, please check if the base64 encoded string is placed directly after the key, without any newlines before, in between or after. (For example: `tls.crt: LS01..`) diff --git a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/troubleshooting/_index.md b/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/troubleshooting/_index.md deleted file mode 100644 index 45201a0dc19..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/troubleshooting/_index.md +++ /dev/null @@ -1,32 +0,0 @@ ---- -title: Troubleshooting HA RKE Add-On Install -weight: 370 -aliases: -- /rancher/v2.x/en/installation/troubleshooting-ha/ ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/options/helm2/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -This section contains common errors seen when setting up a Kubernetes installation. - -Choose from the following options: - -- [Generic troubleshooting](generic-troubleshooting/) - - In this section, you can find generic ways to debug your Kubernetes cluster. - -- [Failed to set up SSH tunneling for host]({{}}/rke/latest/en/troubleshooting/ssh-connectivity-errors/) - - In this section, you can find errors related to SSH tunneling when you run the `rke` command to setup your nodes. - -- [Failed to get job complete status](job-complete-status/) - - In this section, you can find errors related to deploying addons. - -- [404 - default backend](404-default-backend/) - - In this section, you can find errors related to the `404 - default backend` page that is shown when trying to access Rancher. diff --git a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/troubleshooting/generic-troubleshooting/_index.md b/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/troubleshooting/generic-troubleshooting/_index.md deleted file mode 100644 index bffba0352bd..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/troubleshooting/generic-troubleshooting/_index.md +++ /dev/null @@ -1,161 +0,0 @@ ---- -title: Generic troubleshooting -weight: 5 -aliases: -- /rancher/v2.x/en/installation/troubleshooting-ha/generic-troubleshooting/ ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/options/helm2/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. 
- -Below are steps that you can follow to determine what is wrong in your cluster. - -### Double check if all the required ports are opened in your (host) firewall - -Double check if all the [required ports]({{}}/rancher/v2.x/en/cluster-provisioning/node-requirements/#networking-requirements/) are opened in your (host) firewall. - -### All nodes should be present and in **Ready** state - -To check, run the command: - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml get nodes -``` - -If a node is not shown in this output or a node is not in **Ready** state, you can check the logging of the `kubelet` container. Login to the node and run `docker logs kubelet`. - -### All pods/jobs should be in **Running**/**Completed** state - -To check, run the command: - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml get pods --all-namespaces -``` - -If a pod is not in **Running** state, you can dig into the root cause by running: - -#### Describe pod - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml describe pod POD_NAME -n NAMESPACE -``` - -#### Pod container logs - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml logs POD_NAME -n NAMESPACE -``` - -If a job is not in **Completed** state, you can dig into the root cause by running: - -#### Describe job - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml describe job JOB_NAME -n NAMESPACE -``` - -#### Logs from the containers of pods of the job - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l job-name=JOB_NAME -n NAMESPACE -``` - -### Check ingress - -Ingress should have the correct `HOSTS` (showing the configured FQDN) and `ADDRESS` (address(es) it will be routed to). - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml get ingress --all-namespaces -``` - -### List all Kubernetes cluster events - -Kubernetes cluster events are stored, and can be retrieved by running: - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml get events --all-namespaces -``` - -### Check Rancher container logging - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l app=cattle -n cattle-system -``` - -### Check NGINX ingress controller logging - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l app=ingress-nginx -n ingress-nginx -``` - -### Check if overlay network is functioning correctly - -The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod. - -To test the overlay network, you can launch the following `DaemonSet` definition. This will run an `alpine` container on every host, which we will use to run a `ping` test between containers on all hosts. - -1. 
Save the following file as `ds-alpine.yml` - - ``` - apiVersion: apps/v1 - kind: DaemonSet - metadata: - name: alpine - spec: - selector: - matchLabels: - name: alpine - template: - metadata: - labels: - name: alpine - spec: - tolerations: - - effect: NoExecute - key: "node-role.kubernetes.io/etcd" - value: "true" - - effect: NoSchedule - key: "node-role.kubernetes.io/controlplane" - value: "true" - containers: - - image: alpine - imagePullPolicy: Always - name: alpine - command: ["sh", "-c", "tail -f /dev/null"] - terminationMessagePath: /dev/termination-log - ``` - -2. Launch it using `kubectl --kubeconfig kube_config_rancher-cluster.yml create -f ds-alpine.yml` -3. Wait until `kubectl --kubeconfig kube_config_rancher-cluster.yml rollout status ds/alpine -w` returns: `daemon set "alpine" successfully rolled out`. -4. Run the following command to let each container on every host ping each other (it's a single line command). - - ``` - echo "=> Start"; kubectl --kubeconfig kube_config_rancher-cluster.yml get pods -l name=alpine -o jsonpath='{range .items[*]}{@.metadata.name}{" "}{@.spec.nodeName}{"\n"}{end}' | while read spod shost; do kubectl --kubeconfig kube_config_rancher-cluster.yml get pods -l name=alpine -o jsonpath='{range .items[*]}{@.status.podIP}{" "}{@.spec.nodeName}{"\n"}{end}' | while read tip thost; do kubectl --kubeconfig kube_config_rancher-cluster.yml --request-timeout='10s' exec $spod -- /bin/sh -c "ping -c2 $tip > /dev/null 2>&1"; RC=$?; if [ $RC -ne 0 ]; then echo $shost cannot reach $thost; fi; done; done; echo "=> End" - ``` - -5. When this command has finished running, the output indicating everything is correct is: - - ``` - => Start - => End - ``` - -If you see error in the output, that means that the [required ports]({{}}/rancher/v2.x/en/cluster-provisioning/node-requirements/#networking-requirements/) for overlay networking are not opened between the hosts indicated. - -Example error output of a situation where NODE1 had the UDP ports blocked. - -``` -=> Start -command terminated with exit code 1 -NODE2 cannot reach NODE1 -command terminated with exit code 1 -NODE3 cannot reach NODE1 -command terminated with exit code 1 -NODE1 cannot reach NODE2 -command terminated with exit code 1 -NODE1 cannot reach NODE3 -=> End -``` diff --git a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/troubleshooting/job-complete-status/_index.md b/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/troubleshooting/job-complete-status/_index.md deleted file mode 100644 index ab746496d68..00000000000 --- a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/troubleshooting/job-complete-status/_index.md +++ /dev/null @@ -1,62 +0,0 @@ ---- -title: Failed to get job complete status -weight: 20 -aliases: -- /rancher/v2.x/en/installation/troubleshooting-ha/job-complete-status/ ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/options/helm2/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -To debug issues around this error, you will need to download the command-line tool `kubectl`. 
See [Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for how to download `kubectl` for your platform. - -When you have made changes to `rancher-cluster.yml`, you will have to run `rke remove --config rancher-cluster.yml` to clean the nodes, so that previous configuration errors do not conflict with the new configuration. - -### Failed to deploy addon execute job [rke-user-includes-addons]: Failed to get job complete status - -Something is wrong in the addons definitions; you can run the following command to get the root cause in the logging of the job: - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l job-name=rke-user-addon-deploy-job -n kube-system -``` - -#### error: error converting YAML to JSON: yaml: line 9: - -The structure of the addons definition in `rancher-cluster.yml` is wrong. In the different resources specified in the addons section, there is an error in the structure of the YAML. The pointer `yaml line 9` refers to the line number of the addon that is causing issues. - -Things to check:
- Is each base64 encoded certificate string placed directly after its key, for example: `tls.crt: LS01...`? There should be no newline or space before, in between, or after it.
- Is the YAML properly formatted? Each indentation level should be 2 spaces, as shown in the template files.
- Verify the integrity of your certificate by running `cat MyCertificate | base64 -d` on Linux, or `cat MyCertificate | base64 -D` on Mac OS. If anything is wrong with the certificate, the command output will tell you.
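A quick way to verify the first and third points together is a round-trip check. This is only a sketch: `MyCertificate` is the source file named in the list above, and `BASE64_CRT` is assumed to hold the string you pasted into the template.

```
# decode the configured string and compare it with the original file
echo "$BASE64_CRT" | base64 -d > /tmp/decoded.pem   # use `base64 -D` on Mac OS
diff /tmp/decoded.pem MyCertificate && echo "OK: string matches MyCertificate"
```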
- -#### Error from server (BadRequest): error when creating "/etc/config/rke-user-addon.yaml": Secret in version "v1" cannot be handled as a Secret - -One of the base64 encoded certificate strings is invalid. The log message will try to show you what part of the string is not recognized as valid base64. - -Things to check:
- Check if the base64 string is valid by running one of the commands below:

```
# MacOS
echo BASE64_CRT | base64 -D
# Linux
echo BASE64_CRT | base64 -d
# Windows
certutil -decode FILENAME.base64 FILENAME.verify
```
- -#### The Ingress "cattle-ingress-http" is invalid: spec.rules[0].host: Invalid value: "IP": must be a DNS name, not an IP address - -The host value can only contain a host name, as it is needed by the ingress controller to match the hostname and pass to the correct backend. diff --git a/content/rancher/v2.x/en/installation/options/local-system-charts/_index.md b/content/rancher/v2.x/en/installation/options/local-system-charts/_index.md deleted file mode 100644 index 82def8c7c92..00000000000 --- a/content/rancher/v2.x/en/installation/options/local-system-charts/_index.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: Setting up Local System Charts for Air Gapped Installations -weight: 1120 -aliases: - - /rancher/v2.x/en/installation/air-gap-single-node/config-rancher-system-charts/_index.md - - /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-system-charts/_index.md ---- - -The [System Charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. - -In an air gapped installation of Rancher, you will need to configure Rancher to use a local copy of the system charts. This section describes how to use local system charts using a CLI flag in Rancher v2.3.0, and using a Git mirror for Rancher versions prior to v2.3.0. - -# Using Local System Charts in Rancher v2.3.0 - -In Rancher v2.3.0, a local copy of `system-charts` has been packaged into the `rancher/rancher` container. To be able to use these features in an air gap install, you will need to run the Rancher install command with an extra environment variable, `CATTLE_SYSTEM_CATALOG=bundled`, which tells Rancher to use the local copy of the charts instead of attempting to fetch them from GitHub. - -Example commands for a Rancher installation with a bundled `system-charts` are included in the [air gap Docker installation]({{}}/rancher/v2.x/en/installation/air-gap-single-node/install-rancher) instructions and the [air gap Kubernetes installation]({{}}/rancher/v2.x/en/installation/air-gap-high-availability/install-rancher/#c-install-rancher) instructions. - -# Setting Up System Charts for Rancher Prior to v2.3.0 - -### A. Prepare System Charts - -The [System Charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. To be able to use these features in an air gap install, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach and configure Rancher to use that repository. - -Refer to the release notes in the `system-charts` repository to see which branch corresponds to your version of Rancher. - -### B. Configure System Charts - -Rancher needs to be configured to use your Git mirror of the `system-charts` repository. You can configure the system charts repository either from the Rancher UI or from Rancher's API view. - -{{% tabs %}} -{{% tab "Rancher UI" %}} - -In the catalog management page in the Rancher UI, follow these steps: - -1. Go to the **Global** view. - -1. Click **Tools > Catalogs.** - -1. The system chart is displayed under the name `system-library`. To edit the configuration of the system chart, click **⋮ > Edit.** - -1. In the **Catalog URL** field, enter the location of the Git mirror of the `system-charts` repository. - -1. 
Click **Save.** - -**Result:** Rancher is configured to download all the required catalog items from your `system-charts` repository. - -{{% /tab %}} -{{% tab "Rancher API" %}} - -1. Log into Rancher. - -1. Open `https:///v3/catalogs/system-library` in your browser. - - {{< img "/img/rancher/airgap/system-charts-setting.png" "Open">}} - -1. Click **Edit** on the upper right corner and update the value for **url** to the location of the Git mirror of the `system-charts` repository. - - {{< img "/img/rancher/airgap/system-charts-update.png" "Update">}} - -1. Click **Show Request** - -1. Click **Send Request** - -**Result:** Rancher is configured to download all the required catalog items from your `system-charts` repository. - -{{% /tab %}} -{{% /tabs %}} diff --git a/content/rancher/v2.x/en/installation/options/rke-add-on/_index.md b/content/rancher/v2.x/en/installation/options/rke-add-on/_index.md deleted file mode 100644 index 4904cb0edf6..00000000000 --- a/content/rancher/v2.x/en/installation/options/rke-add-on/_index.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: RKE Add-On Install -weight: 276 ---- - -> **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> -> Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). -> -> If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -- [Kubernetes Installation with External Load Balancer (TCP/Layer 4)]({{}}/rancher/v2.x/en/installation/options/rke-add-on/layer-4-lb) -- [Kubernetes Installation with External Load Balancer (HTTPS/Layer 7)]({{}}/rancher/v2.x/en/installation/options/rke-add-on/layer-7-lb) -- [HTTP Proxy Configuration for a Kubernetes installation]({{}}/rancher/v2.x/en/installation/options/rke-add-on/proxy/) -- [Troubleshooting RKE Add-on Installs]({{}}/rancher/v2.x/en/installation/options/rke-add-on/troubleshooting/) diff --git a/content/rancher/v2.x/en/installation/options/rke-add-on/api-auditing/_index.md b/content/rancher/v2.x/en/installation/options/rke-add-on/api-auditing/_index.md deleted file mode 100644 index 914f283c582..00000000000 --- a/content/rancher/v2.x/en/installation/options/rke-add-on/api-auditing/_index.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: Enable API Auditing -weight: 300 -aliases: - - /rke/latest/en/config-options/add-ons/api-auditing/ ---- - ->**Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -If you're using RKE to install Rancher, you can use directives to enable API Auditing for your Rancher install. You can know what happened, when it happened, who initiated it, and what cluster it affected. 
API auditing records all requests and responses to and from the Rancher API, which includes use of the Rancher UI and any other use of the Rancher API through programmatic use. - -## In-line Arguments - -Enable API Auditing using RKE by adding arguments to your Rancher container. - -To enable API auditing: - -- Add API Auditing arguments (`args`) to your Rancher container. -- Declare a `mountPath` in the `volumeMounts` directive of the container. -- Declare a `path` in the `volumes` directive. - -For more information about each argument, its syntax, and how to view API Audit logs, see [Rancher v2.0 Documentation: API Auditing]({{}}/rancher/v2.x/en/installation/api-auditing). - -```yaml -... -containers: - - image: rancher/rancher:latest - imagePullPolicy: Always - name: cattle-server - args: ["--audit-log-path", "/var/log/auditlog/rancher-api-audit.log", "--audit-log-maxbackup", "5", "--audit-log-maxsize", "50", "--audit-level", "2"] - ports: - - containerPort: 80 - protocol: TCP - - containerPort: 443 - protocol: TCP - volumeMounts: - - mountPath: /etc/rancher/ssl - name: cattle-keys-volume - readOnly: true - - mountPath: /var/log/auditlog - name: audit-log-dir - volumes: - - name: cattle-keys-volume - secret: - defaultMode: 420 - secretName: cattle-keys-server - - name: audit-log-dir - hostPath: - path: /var/log/rancher/auditlog - type: Directory -``` diff --git a/content/rancher/v2.x/en/installation/options/rke-add-on/layer-4-lb/_index.md b/content/rancher/v2.x/en/installation/options/rke-add-on/layer-4-lb/_index.md deleted file mode 100644 index f40f56075be..00000000000 --- a/content/rancher/v2.x/en/installation/options/rke-add-on/layer-4-lb/_index.md +++ /dev/null @@ -1,402 +0,0 @@ ---- -title: Kubernetes Install with External Load Balancer (TCP/Layer 4) -weight: 275 -aliases: -- /rancher/v2.x/en/installation/ha/rke-add-on/layer-4-lb ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -This procedure walks you through setting up a 3-node cluster using the Rancher Kubernetes Engine (RKE). The cluster's sole purpose is running pods for Rancher. The setup is based on: - -- Layer 4 load balancer (TCP) -- [NGINX ingress controller with SSL termination (HTTPS)](https://kubernetes.github.io/ingress-nginx/) - -In an HA setup that uses a layer 4 load balancer, the load balancer accepts Rancher client connections over the TCP/UDP protocols (i.e., the transport level). The load balancer then forwards these connections to individual cluster nodes without reading the request itself. Because the load balancer cannot read the packets it's forwarding, the routing decisions it can make are limited. - -Rancher installed on a Kubernetes cluster with layer 4 load balancer, depicting SSL termination at ingress controllers -![Rancher HA]({{}}/img/rancher/ha/rancher2ha.svg) - -## Installation Outline - -Installation of Rancher in a high-availability configuration involves multiple procedures. Review this outline to learn about each procedure you need to complete. - - - -- [1. 
Provision Linux Hosts](#1-provision-linux-hosts) -- [2. Configure Load Balancer](#2-configure-load-balancer) -- [3. Configure DNS](#3-configure-dns) -- [4. Install RKE](#4-install-rke) -- [5. Download RKE Config File Template](#5-download-rke-config-file-template) -- [6. Configure Nodes](#6-configure-nodes) -- [7. Configure Certificates](#7-configure-certificates) -- [8. Configure FQDN](#8-configure-fqdn) -- [9. Configure Rancher version](#9-configure-rancher-version) -- [10. Back Up Your RKE Config File](#10-back-up-your-rke-config-file) -- [11. Run RKE](#11-run-rke) -- [12. Back Up Auto-Generated Config File](#12-back-up-auto-generated-config-file) - - - -
- -## 1. Provision Linux Hosts - -Provision three Linux hosts according to our [Requirements]({{}}/rancher/v2.x/en/installation/requirements). - -## 2. Configure Load Balancer - -We will be using NGINX as our Layer 4 Load Balancer (TCP). NGINX will forward all connections to one of your Rancher nodes. If you want to use Amazon NLB, you can skip this step and use [Amazon NLB configuration]({{}}/rancher/v2.x/en/installation/options/rke-add-on/layer-4-lb/nlb/) - ->**Note:** -> In this configuration, the load balancer is positioned in front of your Linux hosts. The load balancer can be any host that you have available that's capable of running NGINX. -> ->One caveat: do not use one of your Rancher nodes as the load balancer. - -### A. Install NGINX - -Start by installing NGINX on your load balancer host. NGINX has packages available for all known operating systems. For help installing NGINX, refer to their [install documentation](https://www.nginx.com/resources/wiki/start/topics/tutorials/install/). - -The `stream` module is required, which is present when using the official NGINX packages. Please refer to your OS documentation how to install and enable the NGINX `stream` module on your operating system. - -### B. Create NGINX Configuration - -After installing NGINX, you need to update the NGINX config file, `nginx.conf`, with the IP addresses for your nodes. - -1. Copy and paste the code sample below into your favorite text editor. Save it as `nginx.conf`. - -2. From `nginx.conf`, replace `IP_NODE_1`, `IP_NODE_2`, and `IP_NODE_3` with the IPs of your [Linux hosts](#1-provision-linux-hosts). - - >**Note:** This Nginx configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - TCP and UDP Load Balancer](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/). - - **Example NGINX config:** - ``` - worker_processes 4; - worker_rlimit_nofile 40000; - - events { - worker_connections 8192; - } - - http { - server { - listen 80; - return 301 https://$host$request_uri; - } - } - - stream { - upstream rancher_servers { - least_conn; - server IP_NODE_1:443 max_fails=3 fail_timeout=5s; - server IP_NODE_2:443 max_fails=3 fail_timeout=5s; - server IP_NODE_3:443 max_fails=3 fail_timeout=5s; - } - server { - listen 443; - proxy_pass rancher_servers; - } - } - ``` - -3. Save `nginx.conf` to your load balancer at the following path: `/etc/nginx/nginx.conf`. - -4. Load the updates to your NGINX configuration by running the following command: - - ``` - # nginx -s reload - ``` - -### Option - Run NGINX as Docker container - -Instead of installing NGINX as a package on the operating system, you can rather run it as a Docker container. Save the edited **Example NGINX config** as `/etc/nginx.conf` and run the following command to launch the NGINX container: - -``` -docker run -d --restart=unless-stopped \ - -p 80:80 -p 443:443 \ - -v /etc/nginx.conf:/etc/nginx/nginx.conf \ - nginx:1.14 -``` - -## 3. Configure DNS - -Choose a fully qualified domain name (FQDN) that you want to use to access Rancher (e.g., `rancher.yourdomain.com`).

- -1. Log into your DNS server and create a `DNS A` record that points to the IP address of your [load balancer](#2-configure-load-balancer). - -2. Validate that the `DNS A` record is working correctly. Run the following command from any terminal, replacing `HOSTNAME.DOMAIN.COM` with your chosen FQDN: - - `nslookup HOSTNAME.DOMAIN.COM` - - **Step Result:** Terminal displays output similar to the following: - - ``` - $ nslookup rancher.yourdomain.com - Server: YOUR_HOSTNAME_IP_ADDRESS - Address: YOUR_HOSTNAME_IP_ADDRESS#53 - - Non-authoritative answer: - Name: rancher.yourdomain.com - Address: HOSTNAME.DOMAIN.COM - ``` -
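Once the record resolves, you can also verify that the load balancer from the previous step accepts TCP connections on port 443. This is a sketch assuming `nc` (netcat) is installed on your workstation:

```
nc -vz rancher.yourdomain.com 443
```

A successful connection only proves that NGINX is listening; Rancher itself is not installed yet at this point.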
- -## 4. Install RKE - -RKE (Rancher Kubernetes Engine) is a fast, versatile Kubernetes installer that you can use to install Kubernetes on your Linux hosts. We will use RKE to set up our cluster and run Rancher. - -1. Follow the [RKE Install]({{}}/rke/latest/en/installation) instructions. - -2. Confirm that RKE is now executable by running the following command: - - ``` - rke --version - ``` - -## 5. Download RKE Config File Template - -RKE uses a `.yml` config file to install and configure your Kubernetes cluster. There are 2 templates to choose from, depending on the SSL certificate you want to use. - -1. Download one of the following templates, depending on the SSL certificate you're using. - - - [Template for self-signed certificate
`3-node-certificate.yml`]({{}}/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-certificate) - - [Template for certificate signed by recognized CA
`3-node-certificate-recognizedca.yml`]({{}}/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-certificate-recognizedca) - - >**Advanced Config Options:** - > - >- Want records of all transactions with the Rancher API? Enable the [API Auditing]({{}}/rancher/v2.x/en/installation/api-auditing) feature by editing your RKE config file. For more information, see how to enable it in [your RKE config file]({{}}/rancher/v2.x/en/installation/k8s-install/rke-add-on/api-auditing/). - >- Want to know the other config options available for your RKE template? See the [RKE Documentation: Config Options]({{}}/rke/latest/en/config-options/). - - -2. Rename the file to `rancher-cluster.yml`. - -## 6. Configure Nodes - -Once you have the `rancher-cluster.yml` config file template, edit the nodes section to point toward your Linux hosts. - -1. Open `rancher-cluster.yml` in your favorite text editor. - -1. Update the `nodes` section with the information of your [Linux hosts](#1-provision-linux-hosts). - - For each node in your cluster, update the following placeholders: `IP_ADDRESS_X` and `USER`. The specified user should be able to access the Docker socket; you can test this by logging in as the specified user and running `docker ps`. - - >**Note:** - > When using RHEL/CentOS, the SSH user can't be root due to https://bugzilla.redhat.com/show_bug.cgi?id=1527565. See [Operating System Requirements]({{}}/rke/latest/en/installation/os#redhat-enterprise-linux-rhel-centos) for RHEL/CentOS specific requirements. - - nodes: - # The IP address or hostname of the node - - address: IP_ADDRESS_1 - # User that can log in to the node and has access to the Docker socket (i.e. can execute `docker ps` on the node) - # When using RHEL/CentOS, this can't be root due to https://bugzilla.redhat.com/show_bug.cgi?id=1527565 - user: USER - role: [controlplane,etcd,worker] - # Path to the SSH key that can be used to access the node with the specified user - ssh_key_path: ~/.ssh/id_rsa - - address: IP_ADDRESS_2 - user: USER - role: [controlplane,etcd,worker] - ssh_key_path: ~/.ssh/id_rsa - - address: IP_ADDRESS_3 - user: USER - role: [controlplane,etcd,worker] - ssh_key_path: ~/.ssh/id_rsa - -1. **Optional:** By default, `rancher-cluster.yml` is configured to take backup snapshots of your data. To disable these snapshots, change the `backup` directive setting to `false`, as depicted below. - - services: - etcd: - backup: false - - -## 7. Configure Certificates - -For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, like when you log in or interact with a cluster. - -Choose from the following options: - -{{% accordion id="option-a" label="Option A—Bring Your Own Certificate: Self-Signed" %}} - ->**Prerequisites:** ->Create a self-signed certificate. -> ->- The certificate files must be in [PEM format](#pem). ->- The certificate files must be encoded in [base64](#base64). ->- In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Intermediate Certificates](#cert-order). - -1. 
In `kind: Secret` with `name: cattle-keys-ingress`: - - * Replace `` with the base64 encoded string of the Certificate file (usually called `cert.pem` or `domain.crt`) - * Replace `` with the base64 encoded string of the Certificate Key file (usually called `key.pem` or `domain.key`) - - >**Note:** - > The base64 encoded string should be on the same line as `tls.crt` or `tls.key`, without any newline at the beginning, in between or at the end. - - **Step Result:** After replacing the values, the file should look like the example below (the base64 encoded strings should be different): - - ```yaml - --- - apiVersion: v1 - kind: Secret - metadata: - name: cattle-keys-ingress - namespace: cattle-system - type: Opaque - data: - tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1RENDQWN5Z0F3SUJBZ0lKQUlHc25NeG1LeGxLTUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIzUmxjM1F0WTJFd0hoY05NVGd3TlRBMk1qRXdOREE1V2hjTk1UZ3dOekExTWpFd05EQTVXakFXTVJRdwpFZ1lEVlFRRERBdG9ZUzV5Ym1Ob2NpNXViRENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DCmdnRUJBTFJlMXdzekZSb2Rib2pZV05DSHA3UkdJaUVIMENDZ1F2MmdMRXNkUUNKZlcrUFEvVjM0NnQ3bSs3TFEKZXJaV3ZZMWpuY2VuWU5JSGRBU0VnU0ducWExYnhUSU9FaE0zQXpib3B0WDhjSW1OSGZoQlZETGdiTEYzUk0xaQpPM1JLTGdIS2tYSTMxZndjbU9zWGUwaElYQnpUbmxnM20vUzlXL3NTc0l1dDVwNENDUWV3TWlpWFhuUElKb21lCmpkS3VjSHFnMTlzd0YvcGVUalZrcVpuMkJHazZRaWFpMU41bldRV0pjcThTenZxTTViZElDaWlwYU9hWWQ3RFEKYWRTejV5dlF0YkxQNW4wTXpnOU43S3pGcEpvUys5QWdkWDI5cmZqV2JSekp3RzM5R3dRemN6VWtLcnZEb05JaQo0UFJHc01yclFNVXFSYjRSajNQOEJodEMxWXNDQXdFQUFhTTVNRGN3Q1FZRFZSMFRCQUl3QURBTEJnTlZIUThFCkJBTUNCZUF3SFFZRFZSMGxCQll3RkFZSUt3WUJCUVVIQXdJR0NDc0dBUVVGQndNQk1BMEdDU3FHU0liM0RRRUIKQ3dVQUE0SUJBUUNKZm5PWlFLWkowTFliOGNWUW5Vdi9NZkRZVEJIQ0pZcGM4MmgzUGlXWElMQk1jWDhQRC93MgpoOUExNkE4NGNxODJuQXEvaFZYYy9JNG9yaFY5WW9jSEg5UlcvbGthTUQ2VEJVR0Q1U1k4S292MHpHQ1ROaDZ6Ci9wZTNqTC9uU0pYSjRtQm51czJheHFtWnIvM3hhaWpYZG9kMmd3eGVhTklvRjNLbHB2aGU3ZjRBNmpsQTM0MmkKVVlCZ09iN1F5KytRZWd4U1diSmdoSzg1MmUvUUhnU2FVSkN6NW1sNGc1WndnNnBTUXhySUhCNkcvREc4dElSYwprZDMxSk1qY25Fb1Rhc1Jyc1NwVmNGdXZyQXlXN2liakZyYzhienBNcE1obDVwYUZRcEZzMnIwaXpZekhwakFsCk5ZR2I2OHJHcjBwQkp3YU5DS2ErbCtLRTk4M3A3NDYwCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K - tls.key: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdEY3WEN6TVZHaDF1aU5oWTBJZW50RVlpSVFmUUlLQkMvYUFzU3gxQUlsOWI0OUQ5ClhmanEzdWI3c3RCNnRsYTlqV09keDZkZzBnZDBCSVNCSWFlcHJWdkZNZzRTRXpjRE51aW0xZnh3aVkwZCtFRlUKTXVCc3NYZEV6V0k3ZEVvdUFjcVJjamZWL0J5WTZ4ZDdTRWhjSE5PZVdEZWI5TDFiK3hLd2k2M21uZ0lKQjdBeQpLSmRlYzhnbWlaNk4wcTV3ZXFEWDJ6QVgrbDVPTldTcG1mWUVhVHBDSnFMVTNtZFpCWWx5cnhMTytvemx0MGdLCktLbG81cGgzc05CcDFMUG5LOUMxc3MvbWZRek9EMDNzck1Xa21oTDcwQ0IxZmIydCtOWnRITW5BYmYwYkJETnoKTlNRcXU4T2cwaUxnOUVhd3l1dEF4U3BGdmhHUGMvd0dHMExWaXdJREFRQUJBb0lCQUJKYUErOHp4MVhjNEw0egpwUFd5bDdHVDRTMFRLbTNuWUdtRnZudjJBZXg5WDFBU2wzVFVPckZyTnZpK2xYMnYzYUZoSFZDUEN4N1RlMDVxClhPa2JzZnZkZG5iZFQ2RjgyMnJleVByRXNINk9TUnBWSzBmeDVaMDQwVnRFUDJCWm04eTYyNG1QZk1vbDdya2MKcm9Kd09rOEVpUHZZekpsZUd0bTAwUm1sRysyL2c0aWJsOTVmQXpyc1MvcGUyS3ZoN2NBVEtIcVh6MjlpUmZpbApiTGhBamQwcEVSMjNYU0hHR1ZqRmF3amNJK1c2L2RtbDZURDhrSzFGaUtldmJKTlREeVNXQnpPbXRTYUp1K01JCm9iUnVWWG4yZVNoamVGM1BYcHZRMWRhNXdBa0dJQWxOWjRHTG5QU2ZwVmJyU0plU3RrTGNzdEJheVlJS3BWZVgKSVVTTHM0RUNnWUVBMmNnZUE2WHh0TXdFNU5QWlNWdGhzbXRiYi9YYmtsSTdrWHlsdk5zZjFPdXRYVzkybVJneQpHcEhUQ0VubDB0Z1p3T081T1FLNjdFT3JUdDBRWStxMDJzZndwcmgwNFZEVGZhcW5QNTBxa3BmZEJLQWpmanEyCjFoZDZMd2hLeDRxSm9aelp2VkowV0lvR1ZLcjhJSjJOWGRTUVlUanZUZHhGczRTamdqNFFiaEVDZ1lFQTFBWUUKSEo3eVlza2EvS2V2OVVYbmVrSTRvMm5aYjJ1UVZXazRXSHlaY2NRN3VMQVhGY3lJcW5SZnoxczVzN3RMTzJCagozTFZNUVBzazFNY25oTTl4WE4vQ3ZDTys5b2t0RnNaMGJqWFh6NEJ5V2lFNHJPS1lhVEFwcDVsWlpUT3ZVMWNyCm05R3NwMWJoVDVZb2RaZ3IwUHQyYzR4U2krUVlEWnNFb2lFdzNkc0NnWUVBcVJLYWNweWZKSXlMZEJjZ0JycGkKQTRFalVLMWZsSjR3enNjbGFKUDVoM1NjZUFCejQzRU1YT0kvSXAwMFJsY3N6em83N3cyMmpud09mOEJSM0RBMwp6ZTRSWDIydWw4b0hGdldvdUZOTTNOZjNaNExuYXpVc0F0UGhNS2hRWGMrcEFBWGthUDJkZzZ0TU5PazFxaUNHCndvU212a1BVVE84b1ViRTB1NFZ4ZmZFQ2dZQUpPdDNROVNadUlIMFpSSitIV095enlOQTRaUEkvUkhwN0RXS1QKajVFS2Y5VnR1OVMxY1RyOTJLVVhITXlOUTNrSjg2OUZPMnMvWk85OGg5THptQ2hDTjhkOWN6enI5SnJPNUFMTApqWEtBcVFIUlpLTFgrK0ZRcXZVVlE3cTlpaHQyMEZPb3E5OE5SZDMzSGYxUzZUWDNHZ3RWQ21YSml6dDAxQ3ZHCmR4VnVnd0tCZ0M2Mlp0b0RLb3JyT2hvdTBPelprK2YwQS9rNDJBOENiL29VMGpwSzZtdmxEWmNYdUF1QVZTVXIKNXJCZjRVYmdVYndqa1ZWSFR6LzdDb1BWSjUvVUxJWk1Db1RUNFprNTZXWDk4ZE93Q3VTVFpZYnlBbDZNS1BBZApTZEpuVVIraEpnSVFDVGJ4K1dzYnh2d0FkbWErWUhtaVlPRzZhSklXMXdSd1VGOURLUEhHCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg== - ``` - -2. In `kind: Secret` with `name: cattle-keys-server`, replace `` with the base64 encoded string of the CA Certificate file (usually called `ca.pem` or `ca.crt`). - - >**Note:** - > The base64 encoded string should be on the same line as `cacerts.pem`, without any newline at the beginning, in between or at the end. 
- - - **Step Result:** The file should look like the example below (the base64 encoded string should be different): - - ```yaml - --- - apiVersion: v1 - kind: Secret - metadata: - name: cattle-keys-server - namespace: cattle-system - type: Opaque - data: - cacerts.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNvRENDQVlnQ0NRRHVVWjZuMEZWeU16QU5CZ2txaGtpRzl3MEJBUXNGQURBU01SQXdEZ1lEVlFRRERBZDAKWlhOMExXTmhNQjRYRFRFNE1EVXdOakl4TURRd09Wb1hEVEU0TURjd05USXhNRFF3T1Zvd0VqRVFNQTRHQTFVRQpBd3dIZEdWemRDMWpZVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFNQmpBS3dQCndhRUhwQTdaRW1iWWczaTNYNlppVmtGZFJGckJlTmFYTHFPL2R0RUdmWktqYUF0Wm45R1VsckQxZUlUS3UzVHgKOWlGVlV4Mmo1Z0tyWmpwWitCUnFiZ1BNbk5hS1hocmRTdDRtUUN0VFFZdGRYMVFZS0pUbWF5NU45N3FoNTZtWQprMllKRkpOWVhHWlJabkdMUXJQNk04VHZramF0ZnZOdmJ0WmtkY2orYlY3aWhXanp2d2theHRUVjZlUGxuM2p5CnJUeXBBTDliYnlVcHlad3E2MWQvb0Q4VUtwZ2lZM1dOWmN1YnNvSjhxWlRsTnN6UjVadEFJV0tjSE5ZbE93d2oKaG41RE1tSFpwZ0ZGNW14TU52akxPRUc0S0ZRU3laYlV2QzlZRUhLZTUxbGVxa1lmQmtBZWpPY002TnlWQUh1dApuay9DMHpXcGdENkIwbkVDQXdFQUFUQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFHTCtaNkRzK2R4WTZsU2VBClZHSkMvdzE1bHJ2ZXdia1YxN3hvcmlyNEMxVURJSXB6YXdCdFJRSGdSWXVtblVqOGo4T0hFWUFDUEthR3BTVUsKRDVuVWdzV0pMUUV0TDA2eTh6M3A0MDBrSlZFZW9xZlVnYjQrK1JLRVJrWmowWXR3NEN0WHhwOVMzVkd4NmNOQQozZVlqRnRQd2hoYWVEQmdma1hXQWtISXFDcEsrN3RYem9pRGpXbi8walI2VDcrSGlaNEZjZ1AzYnd3K3NjUDIyCjlDQVZ1ZFg4TWpEQ1hTcll0Y0ZINllBanlCSTJjbDhoSkJqa2E3aERpVC9DaFlEZlFFVFZDM3crQjBDYjF1NWcKdE03Z2NGcUw4OVdhMnp5UzdNdXk5bEthUDBvTXl1Ty82Tm1wNjNsVnRHeEZKSFh4WTN6M0lycGxlbTNZQThpTwpmbmlYZXc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== - ``` - -{{% /accordion %}} - -{{% accordion id="option-b" label="Option B—Bring Your Own Certificate: Signed by Recognized CA" %}} ->**Note:** -> If you are using a self-signed certificate, [click here](#option-a-bring-your-own-certificate-self-signed) to proceed. - -If you are using a certificate signed by a recognized certificate authority, you will need to generate a base64 encoded string for the Certificate file and the Certificate Key file. Make sure that your certificate file includes all the [intermediate certificates](#cert-order) in the chain; in this case, the order of certificates is your own certificate first, followed by the intermediates. Please refer to the documentation of your CSP (Certificate Service Provider) to see what intermediate certificate(s) need to be included. - -In the `kind: Secret` with `name: cattle-keys-ingress`: - -* Replace `` with the base64 encoded string of the Certificate file (usually called `cert.pem` or `domain.crt`) -* Replace `` with the base64 encoded string of the Certificate Key file (usually called `key.pem` or `domain.key`) - -After replacing the values, the file should look like the example below (the base64 encoded strings should be different): - ->**Note:** -> The base64 encoded string should be on the same line as `tls.crt` or `tls.key`, without any newline at the beginning, in between or at the end. 
- -```yaml ---- -apiVersion: v1 -kind: Secret -metadata: - name: cattle-keys-ingress - namespace: cattle-system -type: Opaque -data: - tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1RENDQWN5Z0F3SUJBZ0lKQUlHc25NeG1LeGxLTUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIzUmxjM1F0WTJFd0hoY05NVGd3TlRBMk1qRXdOREE1V2hjTk1UZ3dOekExTWpFd05EQTVXakFXTVJRdwpFZ1lEVlFRRERBdG9ZUzV5Ym1Ob2NpNXViRENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DCmdnRUJBTFJlMXdzekZSb2Rib2pZV05DSHA3UkdJaUVIMENDZ1F2MmdMRXNkUUNKZlcrUFEvVjM0NnQ3bSs3TFEKZXJaV3ZZMWpuY2VuWU5JSGRBU0VnU0ducWExYnhUSU9FaE0zQXpib3B0WDhjSW1OSGZoQlZETGdiTEYzUk0xaQpPM1JLTGdIS2tYSTMxZndjbU9zWGUwaElYQnpUbmxnM20vUzlXL3NTc0l1dDVwNENDUWV3TWlpWFhuUElKb21lCmpkS3VjSHFnMTlzd0YvcGVUalZrcVpuMkJHazZRaWFpMU41bldRV0pjcThTenZxTTViZElDaWlwYU9hWWQ3RFEKYWRTejV5dlF0YkxQNW4wTXpnOU43S3pGcEpvUys5QWdkWDI5cmZqV2JSekp3RzM5R3dRemN6VWtLcnZEb05JaQo0UFJHc01yclFNVXFSYjRSajNQOEJodEMxWXNDQXdFQUFhTTVNRGN3Q1FZRFZSMFRCQUl3QURBTEJnTlZIUThFCkJBTUNCZUF3SFFZRFZSMGxCQll3RkFZSUt3WUJCUVVIQXdJR0NDc0dBUVVGQndNQk1BMEdDU3FHU0liM0RRRUIKQ3dVQUE0SUJBUUNKZm5PWlFLWkowTFliOGNWUW5Vdi9NZkRZVEJIQ0pZcGM4MmgzUGlXWElMQk1jWDhQRC93MgpoOUExNkE4NGNxODJuQXEvaFZYYy9JNG9yaFY5WW9jSEg5UlcvbGthTUQ2VEJVR0Q1U1k4S292MHpHQ1ROaDZ6Ci9wZTNqTC9uU0pYSjRtQm51czJheHFtWnIvM3hhaWpYZG9kMmd3eGVhTklvRjNLbHB2aGU3ZjRBNmpsQTM0MmkKVVlCZ09iN1F5KytRZWd4U1diSmdoSzg1MmUvUUhnU2FVSkN6NW1sNGc1WndnNnBTUXhySUhCNkcvREc4dElSYwprZDMxSk1qY25Fb1Rhc1Jyc1NwVmNGdXZyQXlXN2liakZyYzhienBNcE1obDVwYUZRcEZzMnIwaXpZekhwakFsCk5ZR2I2OHJHcjBwQkp3YU5DS2ErbCtLRTk4M3A3NDYwCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K - tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdEY3WEN6TVZHaDF1aU5oWTBJZW50RVlpSVFmUUlLQkMvYUFzU3gxQUlsOWI0OUQ5ClhmanEzdWI3c3RCNnRsYTlqV09keDZkZzBnZDBCSVNCSWFlcHJWdkZNZzRTRXpjRE51aW0xZnh3aVkwZCtFRlUKTXVCc3NYZEV6V0k3ZEVvdUFjcVJjamZWL0J5WTZ4ZDdTRWhjSE5PZVdEZWI5TDFiK3hLd2k2M21uZ0lKQjdBeQpLSmRlYzhnbWlaNk4wcTV3ZXFEWDJ6QVgrbDVPTldTcG1mWUVhVHBDSnFMVTNtZFpCWWx5cnhMTytvemx0MGdLCktLbG81cGgzc05CcDFMUG5LOUMxc3MvbWZRek9EMDNzck1Xa21oTDcwQ0IxZmIydCtOWnRITW5BYmYwYkJETnoKTlNRcXU4T2cwaUxnOUVhd3l1dEF4U3BGdmhHUGMvd0dHMExWaXdJREFRQUJBb0lCQUJKYUErOHp4MVhjNEw0egpwUFd5bDdHVDRTMFRLbTNuWUdtRnZudjJBZXg5WDFBU2wzVFVPckZyTnZpK2xYMnYzYUZoSFZDUEN4N1RlMDVxClhPa2JzZnZkZG5iZFQ2RjgyMnJleVByRXNINk9TUnBWSzBmeDVaMDQwVnRFUDJCWm04eTYyNG1QZk1vbDdya2MKcm9Kd09rOEVpUHZZekpsZUd0bTAwUm1sRysyL2c0aWJsOTVmQXpyc1MvcGUyS3ZoN2NBVEtIcVh6MjlpUmZpbApiTGhBamQwcEVSMjNYU0hHR1ZqRmF3amNJK1c2L2RtbDZURDhrSzFGaUtldmJKTlREeVNXQnpPbXRTYUp1K01JCm9iUnVWWG4yZVNoamVGM1BYcHZRMWRhNXdBa0dJQWxOWjRHTG5QU2ZwVmJyU0plU3RrTGNzdEJheVlJS3BWZVgKSVVTTHM0RUNnWUVBMmNnZUE2WHh0TXdFNU5QWlNWdGhzbXRiYi9YYmtsSTdrWHlsdk5zZjFPdXRYVzkybVJneQpHcEhUQ0VubDB0Z1p3T081T1FLNjdFT3JUdDBRWStxMDJzZndwcmgwNFZEVGZhcW5QNTBxa3BmZEJLQWpmanEyCjFoZDZMd2hLeDRxSm9aelp2VkowV0lvR1ZLcjhJSjJOWGRTUVlUanZUZHhGczRTamdqNFFiaEVDZ1lFQTFBWUUKSEo3eVlza2EvS2V2OVVYbmVrSTRvMm5aYjJ1UVZXazRXSHlaY2NRN3VMQVhGY3lJcW5SZnoxczVzN3RMTzJCagozTFZNUVBzazFNY25oTTl4WE4vQ3ZDTys5b2t0RnNaMGJqWFh6NEJ5V2lFNHJPS1lhVEFwcDVsWlpUT3ZVMWNyCm05R3NwMWJoVDVZb2RaZ3IwUHQyYzR4U2krUVlEWnNFb2lFdzNkc0NnWUVBcVJLYWNweWZKSXlMZEJjZ0JycGkKQTRFalVLMWZsSjR3enNjbGFKUDVoM1NjZUFCejQzRU1YT0kvSXAwMFJsY3N6em83N3cyMmpud09mOEJSM0RBMwp6ZTRSWDIydWw4b0hGdldvdUZOTTNOZjNaNExuYXpVc0F0UGhNS2hRWGMrcEFBWGthUDJkZzZ0TU5PazFxaUNHCndvU212a1BVVE84b1ViRTB1NFZ4ZmZFQ2dZQUpPdDNROVNadUlIMFpSSitIV095enlOQTRaUEkvUkhwN0RXS1QKajVFS2Y5VnR1OVMxY1RyOTJLVVhITXlOUTNrSjg2OUZPMnMvWk85OGg5THptQ2hDTjhkOWN6enI5SnJPNUFMTApqWEtBcVFIUlpLTFgrK0ZRcXZVVlE3cTlpaHQyMEZPb3E5OE5SZDMzSGYxUzZUWDNHZ3RWQ21YSml6dDAxQ3ZHCmR4VnVnd0tCZ0M2Mlp0b0RLb3JyT2hvdTB
PelprK2YwQS9rNDJBOENiL29VMGpwSzZtdmxEWmNYdUF1QVZTVXIKNXJCZjRVYmdVYndqa1ZWSFR6LzdDb1BWSjUvVUxJWk1Db1RUNFprNTZXWDk4ZE93Q3VTVFpZYnlBbDZNS1BBZApTZEpuVVIraEpnSVFDVGJ4K1dzYnh2d0FkbWErWUhtaVlPRzZhSklXMXdSd1VGOURLUEhHCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg== -``` - -{{% /accordion %}} - - - -## 8. Configure FQDN - -There are two references to `` in the config file (one in this step and one in the next). Both need to be replaced with the FQDN chosen in [Configure DNS](#3-configure-dns). - -In the `kind: Ingress` with `name: cattle-ingress-http`: - -* Replace `` with the FQDN chosen in [Configure DNS](#3-configure-dns). - -After replacing `` with the FQDN chosen in [Configure DNS](#3-configure-dns), the file should look like the example below (`rancher.yourdomain.com` is the FQDN used in this example): - -```yaml - --- - apiVersion: extensions/v1beta1 - kind: Ingress - metadata: - namespace: cattle-system - name: cattle-ingress-http - annotations: - nginx.ingress.kubernetes.io/proxy-connect-timeout: "30" - nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for websocket connections to keep the shell window open - nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for websocket connections to keep the shell window open - spec: - rules: - - host: rancher.yourdomain.com - http: - paths: - - backend: - serviceName: cattle-service - servicePort: 80 - tls: - - secretName: cattle-keys-ingress - hosts: - - rancher.yourdomain.com -``` - -Save the `.yml` file and close it. - -## 9. Configure Rancher version - -The last reference that needs to be replaced is ``. This needs to be replaced with a Rancher version that is marked as stable. The latest stable release of Rancher can be found in the [GitHub README](https://github.com/rancher/rancher/blob/master/README.md). Make sure the version is an actual version number, and not a named tag like `stable` or `latest`. The example below shows the version configured to `v2.0.6`. - -``` - spec: - serviceAccountName: cattle-admin - containers: - - image: rancher/rancher:v2.0.6 - imagePullPolicy: Always -``` - -## 10. Back Up Your RKE Config File - -After you close your `.yml` file, back it up to a secure location. You can use this file again when it's time to upgrade Rancher. - -## 11. Run RKE - -With all configuration in place, use RKE to launch Rancher. You can complete this action by running the `rke up` command and using the `--config` parameter to point toward your config file. - -1. From your workstation, make sure `rancher-cluster.yml` and the downloaded `rke` binary are in the same directory. - -2. Open a Terminal instance. Change to the directory that contains your config file and `rke`. - -3. Enter the `rke up` command listed below. - -``` -rke up --config rancher-cluster.yml -``` - -**Step Result:** The output should be similar to the snippet below: - -``` -INFO[0000] Building Kubernetes cluster -INFO[0000] [dialer] Setup tunnel for host [1.1.1.1] -INFO[0000] [network] Deploying port listener containers -INFO[0000] [network] Pulling image [alpine:latest] on host [1.1.1.1] -... -INFO[0101] Finished building Kubernetes cluster successfully -``` - -## 12. Back Up Auto-Generated Config File - -During installation, RKE automatically generates a config file named `kube_config_rancher-cluster.yml` in the same directory as the RKE binary. Copy this file and back it up to a safe location. You'll use this file later when upgrading Rancher Server. - -## What's Next?
- -You have a couple of options: - -- Create a backup of your Rancher Server in case of a disaster scenario: [High Availability Back Up and Restoration]({{}}/rancher/v2.x/en/installation/backups-and-restoration/ha-backup-and-restoration). -- Create a Kubernetes cluster: [Provisioning Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-provisioning/). - -
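If you also want an on-demand backup before making further changes, RKE can take one directly. The following is a hypothetical sketch, assuming an RKE version that provides the `etcd snapshot-save` subcommand and the default snapshot location of `/opt/rke/etcd-snapshots` on the nodes; the snapshot name is an arbitrary placeholder.

```
# Take a one-off etcd snapshot of the cluster defined in rancher-cluster.yml.
rke etcd snapshot-save --config rancher-cluster.yml --name rancher-snapshot
```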
- -## FAQ and Troubleshooting - -{{< ssl_faq_ha >}} diff --git a/content/rancher/v2.x/en/installation/options/rke-add-on/layer-4-lb/nlb/_index.md b/content/rancher/v2.x/en/installation/options/rke-add-on/layer-4-lb/nlb/_index.md deleted file mode 100644 index fc41dcad175..00000000000 --- a/content/rancher/v2.x/en/installation/options/rke-add-on/layer-4-lb/nlb/_index.md +++ /dev/null @@ -1,182 +0,0 @@ ---- -title: Amazon NLB Configuration -weight: 277 -aliases: -- /rancher/v2.x/en/installation/ha-server-install/nlb/ -- /rancher/v2.x/en/installation/ha/rke-add-on/layer-4-lb/nlb ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -## Objectives - -Configuring an Amazon NLB is a multistage process. We've broken it down into multiple tasks so that it's easy to follow. - -1. [Create Target Groups](#create-target-groups) - - Begin by creating two target groups for the **TCP** protocol: one for TCP port 443 and one for TCP port 80 (which redirects to TCP port 443). You'll add your Linux nodes to these groups. - -2. [Register Targets](#register-targets) - - Add your Linux nodes to the target groups. - -3. [Create Your NLB](#create-your-nlb) - - Use Amazon's Wizard to create a Network Load Balancer. As part of this process, you'll add the target groups you created in **1. Create Target Groups**. - - -## Create Target Groups - -Your first NLB configuration step is to create two target groups. Technically, only port 443 is needed to access Rancher, but it's convenient to add a listener for port 80, which will be redirected to port 443 automatically. The NGINX controller on the nodes will make sure that port 80 gets redirected to port 443. - -Log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) to get started, and make sure to select the **Region** where your EC2 instances (Linux nodes) are created. - -The Target Groups configuration resides in the **Load Balancing** section of the **EC2** service. Select **Services** and choose **EC2**, find the section **Load Balancing** and open **Target Groups**. - -{{< img "/img/rancher/ha/nlb/ec2-loadbalancing.png" "EC2 Load Balancing section">}} - -Click **Create target group** to create the first target group, for TCP port 443. - -### Target Group (TCP port 443) - -Configure the first target group according to the table below. Screenshots of the configuration are shown just below the table. - -Option | Setting ---------------------------------------|------------------------------------ -Target Group Name | `rancher-tcp-443` -Protocol | `TCP` -Port | `443` -Target type | `instance` -VPC | Choose your VPC -Protocol (Health Check) | `HTTP` -Path (Health Check) | `/healthz` -Port (Advanced health check) | `override`,`80` -Healthy threshold (Advanced health) | `3` -Unhealthy threshold (Advanced) | `3` -Timeout (Advanced) | `6 seconds` -Interval (Advanced) | `10 seconds` -Success codes | `200-399` - -
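If you prefer scripting this instead of clicking through the console, the table above maps roughly onto a single AWS CLI call. This is a non-authoritative sketch, assuming the `aws` CLI is configured for your account and region; `VPC_ID` is a placeholder, and some of the advanced console values (timeout, success codes) are fixed or defaulted by AWS for NLB target groups, so they are omitted here.

```
# Create the TCP 443 target group with an HTTP health check on /healthz,
# overriding the health check port to 80 as in the table above.
aws elbv2 create-target-group \
  --name rancher-tcp-443 \
  --protocol TCP \
  --port 443 \
  --vpc-id VPC_ID \
  --target-type instance \
  --health-check-protocol HTTP \
  --health-check-path /healthz \
  --health-check-port 80 \
  --health-check-interval-seconds 10
```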
-**Screenshot Target group TCP port 443 settings**
-{{< img "/img/rancher/ha/nlb/create-targetgroup-443.png" "Target group 443">}} - -
-**Screenshot Target group TCP port 443 Advanced settings**
-{{< img "/img/rancher/ha/nlb/create-targetgroup-443-advanced.png" "Target group 443 Advanced">}} - -
- -Click **Create target group** to create the second target group, for TCP port 80. - -### Target Group (TCP port 80) - -Configure the second target group according to the table below. Screenshots of the configuration are shown just below the table. - -Option | Setting ---------------------------------------|------------------------------------ -Target Group Name | `rancher-tcp-80` -Protocol | `TCP` -Port | `80` -Target type | `instance` -VPC | Choose your VPC -Protocol (Health Check) | `HTTP` -Path (Health Check) | `/healthz` -Port (Advanced health check) | `traffic port` -Healthy threshold (Advanced health) | `3` -Unhealthy threshold (Advanced) | `3` -Timeout (Advanced) | `6 seconds` -Interval (Advanced) | `10 seconds` -Success codes | `200-399` - -
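The same CLI sketch, adapted for the port 80 target group (again a hedged example assuming the `aws` CLI with a placeholder `VPC_ID`); the literal value `traffic-port` keeps the health check on the listener port itself.

```
aws elbv2 create-target-group \
  --name rancher-tcp-80 \
  --protocol TCP \
  --port 80 \
  --vpc-id VPC_ID \
  --target-type instance \
  --health-check-protocol HTTP \
  --health-check-path /healthz \
  --health-check-port traffic-port \
  --health-check-interval-seconds 10
```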
-**Screenshot Target group TCP port 80 settings**
-{{< img "/img/rancher/ha/nlb/create-targetgroup-80.png" "Target group 80">}} - -
-**Screenshot Target group TCP port 80 Advanced settings**
-{{< img "/img/rancher/ha/nlb/create-targetgroup-80-advanced.png" "Target group 80 Advanced">}} - -
- -## Register Targets - -Next, add your Linux nodes to both target groups. - -Select the target group named **rancher-tcp-443**, click the tab **Targets** and choose **Edit**. - -{{< img "/img/rancher/ha/nlb/edit-targetgroup-443.png" "Edit target group 443">}} - -Select the instances (Linux nodes) you want to add, and click **Add to registered**. - -
-**Screenshot Add targets to target group TCP port 443**
- -{{< img "/img/rancher/ha/nlb/add-targets-targetgroup-443.png" "Add targets to target group 443">}} - -
-**Screenshot Added targets to target group TCP port 443**
- -{{< img "/img/rancher/ha/nlb/added-targets-targetgroup-443.png" "Added targets to target group 443">}} - -When the instances are added, click **Save** on the bottom right of the screen. - -Repeat those steps, replacing **rancher-tcp-443** with **rancher-tcp-80**. The same instances need to be added as targets to this target group. - -## Create Your NLB - -Use Amazon's Wizard to create a Network Load Balancer. As part of this process, you'll add the target groups you created in [Create Target Groups](#create-target-groups). - -1. From your web browser, navigate to the [Amazon EC2 Console](https://console.aws.amazon.com/ec2/). - -2. From the navigation pane, choose **LOAD BALANCING** > **Load Balancers**. - -3. Click **Create Load Balancer**. - -4. Choose **Network Load Balancer** and click **Create**. - -5. Complete the **Step 1: Configure Load Balancer** form. - - **Basic Configuration** - - - Name: `rancher` - - Scheme: `internet-facing` - - **Listeners** - - Add the **Load Balancer Protocols** and **Load Balancer Ports** below. - - `TCP`: `443` - - - **Availability Zones** - - - Select Your **VPC** and **Availability Zones**. - -6. Complete the **Step 2: Configure Routing** form. - - - From the **Target Group** drop-down, choose **Existing target group**. - - - From the **Name** drop-down, choose `rancher-tcp-443`. - - - Open **Advanced health check settings**, and configure **Interval** to `10 seconds`. - -7. Complete **Step 3: Register Targets**. Since you registered your targets earlier, all you have to do is click **Next: Review**. - -8. Complete **Step 4: Review**. Look over the load balancer details and click **Create** when you're satisfied. - -9. After AWS creates the NLB, click **Close**. - -## Add listener to NLB for TCP port 80 - -1. Select your newly created NLB and select the **Listeners** tab. - -2. Click **Add listener**. - -3. Use `TCP`:`80` as **Protocol** : **Port**. - -4. Click **Add action** and choose **Forward to...** - -5. From the **Forward to** drop-down, choose `rancher-tcp-80`. - -6. Click **Save** in the top right of the screen. diff --git a/content/rancher/v2.x/en/installation/options/rke-add-on/layer-7-lb/_index.md b/content/rancher/v2.x/en/installation/options/rke-add-on/layer-7-lb/_index.md deleted file mode 100644 index a41889022ff..00000000000 --- a/content/rancher/v2.x/en/installation/options/rke-add-on/layer-7-lb/_index.md +++ /dev/null @@ -1,288 +0,0 @@ ---- -title: Kubernetes Install with External Load Balancer (HTTPS/Layer 7) -weight: 276 -aliases: -- /rancher/v2.x/en/installation/ha/rke-add-on/layer-7-lb ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -This procedure walks you through setting up a 3-node cluster using the Rancher Kubernetes Engine (RKE). The cluster's sole purpose is running pods for Rancher.
The setup is based on: - -- Layer 7 load balancer with SSL termination (HTTPS) -- [NGINX Ingress controller (HTTP)](https://kubernetes.github.io/ingress-nginx/) - -In an HA setup that uses a layer 7 load balancer, the load balancer accepts Rancher client connections over the HTTP protocol (i.e., the application level). This application-level access allows the load balancer to read client requests and then redirect them to cluster nodes using logic that optimally distributes load. - -Rancher installed on a Kubernetes cluster with layer 7 load balancer, depicting SSL termination at load balancer -![Rancher HA]({{}}/img/rancher/ha/rancher2ha-l7.svg) - -## Installation Outline - -Installation of Rancher in a high-availability configuration involves multiple procedures. Review this outline to learn about each procedure you need to complete. - - - -- [1. Provision Linux Hosts](#1-provision-linux-hosts) -- [2. Configure Load Balancer](#2-configure-load-balancer) -- [3. Configure DNS](#3-configure-dns) -- [4. Install RKE](#4-install-rke) -- [5. Download RKE Config File Template](#5-download-rke-config-file-template) -- [6. Configure Nodes](#6-configure-nodes) -- [7. Configure Certificates](#7-configure-certificates) -- [8. Configure FQDN](#8-configure-fqdn) -- [9. Configure Rancher version](#9-configure-rancher-version) -- [10. Back Up Your RKE Config File](#10-back-up-your-rke-config-file) -- [11. Run RKE](#11-run-rke) -- [12. Back Up Auto-Generated Config File](#12-back-up-auto-generated-config-file) - - - -## 1. Provision Linux Hosts - -Provision three Linux hosts according to our [Requirements]({{}}/rancher/v2.x/en/installation/requirements). - -## 2. Configure Load Balancer - -When using a load balancer in front of Rancher, there's no need for the container to redirect port communication from port 80 or port 443. By passing the header `X-Forwarded-Proto: https`, this redirect is disabled. This is the expected configuration when terminating SSL externally. - -The load balancer has to be configured to support the following: - -* **WebSocket** connections -* **SPDY** / **HTTP/2** protocols -* Passing / setting the following headers: - -| Header | Value | Description | |---------------------|----------------------------------------|:--------------------------------------------------------------------------| -| `Host` | FQDN used to reach Rancher. | To identify the server requested by the client. | -| `X-Forwarded-Proto` | `https` | To identify the protocol that a client used to connect to the load balancer. **Note:** If this header is present, `rancher/rancher` does not redirect HTTP to HTTPS. | -| `X-Forwarded-Port` | Port used to reach Rancher. | To identify the port that the client used to connect to the load balancer. | -| `X-Forwarded-For` | IP of the client connection. | To identify the originating IP address of a client. | - -Health checks can be executed on the `/healthz` endpoint of the node; this will return HTTP 200. - -We have example configurations for the following load balancers: - -* [Amazon ALB configuration](alb/) -* [NGINX configuration](nginx/) - -## 3. Configure DNS - -Choose a fully qualified domain name (FQDN) that you want to use to access Rancher (e.g., `rancher.yourdomain.com`).

- -1. Log into your DNS server and create a `DNS A` record that points to the IP address of your [load balancer](#2-configure-load-balancer). - -2. Validate that the `DNS A` record is working correctly. Run the following command from any terminal, replacing `HOSTNAME.DOMAIN.COM` with your chosen FQDN: - - `nslookup HOSTNAME.DOMAIN.COM` - - **Step Result:** Terminal displays output similar to the following: - - ``` - $ nslookup rancher.yourdomain.com - Server: YOUR_HOSTNAME_IP_ADDRESS - Address: YOUR_HOSTNAME_IP_ADDRESS#53 - - Non-authoritative answer: - Name: rancher.yourdomain.com - Address: LOAD_BALANCER_IP_ADDRESS - ``` - -
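Once Rancher is running (after step 11), you can also spot-check the header behavior described in [2. Configure Load Balancer](#2-configure-load-balancer). This is an illustrative sketch only: `IP_NODE_1` stands in for one of your Linux hosts, and the requests go straight to a node to mimic what the load balancer sends.

```
# Without X-Forwarded-Proto, rancher/rancher should redirect HTTP to HTTPS (301).
curl -s -o /dev/null -w '%{http_code}\n' \
  -H 'Host: rancher.yourdomain.com' \
  http://IP_NODE_1/

# With X-Forwarded-Proto: https, the redirect is disabled and the request
# should be answered directly (200).
curl -s -o /dev/null -w '%{http_code}\n' \
  -H 'Host: rancher.yourdomain.com' \
  -H 'X-Forwarded-Proto: https' \
  http://IP_NODE_1/
```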
- -## 4. Install RKE - -RKE (Rancher Kubernetes Engine) is a fast, versatile Kubernetes installer that you can use to install Kubernetes on your Linux hosts. We will use RKE to set up our cluster and run Rancher. - -1. Follow the [RKE Install]({{}}/rke/latest/en/installation) instructions. - -2. Confirm that RKE is now executable by running the following command: - - ``` - rke --version - ``` - -## 5. Download RKE Config File Template - -RKE uses a YAML config file to install and configure your Kubernetes cluster. There are two templates to choose from, depending on the SSL certificate you want to use. - -1. Download one of the following templates, depending on the SSL certificate you're using. - - - [Template for self-signed certificate
`3-node-externalssl-certificate.yml`]({{}}/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-externalssl-certificate) - - [Template for certificate signed by recognized CA
`3-node-externalssl-recognizedca.yml`]({{}}/rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-externalssl-recognizedca) - - >**Advanced Config Options:** - > - >- Want records of all transactions with the Rancher API? Enable the [API Auditing]({{}}/rancher/v2.x/en/installation/api-auditing) feature by editing your RKE config file. For more information, see how to enable it in [your RKE config file]({{}}/rancher/v2.x/en/installation/k8s-install/rke-add-on/api-auditing/). - >- Want to know the other config options available for your RKE template? See the [RKE Documentation: Config Options]({{}}/rke/latest/en/config-options/). - - -2. Rename the file to `rancher-cluster.yml`. - -## 6. Configure Nodes - -Once you have the `rancher-cluster.yml` config file template, edit the nodes section to point toward your Linux hosts. - -1. Open `rancher-cluster.yml` in your favorite text editor. - -1. Update the `nodes` section with the information of your [Linux hosts](#1-provision-linux-hosts). - - For each node in your cluster, update the following placeholders: `IP_ADDRESS_X` and `USER`. The specified user should be able to access the Docker socket; you can test this by logging in as the specified user and running `docker ps`. - - >**Note:** - > - >When using RHEL/CentOS, the SSH user can't be root due to https://bugzilla.redhat.com/show_bug.cgi?id=1527565. See [Operating System Requirements]({{}}/rke/latest/en/installation/os#redhat-enterprise-linux-rhel-centos) for RHEL/CentOS specific requirements. - - nodes: - # The IP address or hostname of the node - - address: IP_ADDRESS_1 - # User that can log in to the node and has access to the Docker socket (i.e. can execute `docker ps` on the node) - # When using RHEL/CentOS, this can't be root due to https://bugzilla.redhat.com/show_bug.cgi?id=1527565 - user: USER - role: [controlplane,etcd,worker] - # Path to the SSH key that can be used to access the node with the specified user - ssh_key_path: ~/.ssh/id_rsa - - address: IP_ADDRESS_2 - user: USER - role: [controlplane,etcd,worker] - ssh_key_path: ~/.ssh/id_rsa - - address: IP_ADDRESS_3 - user: USER - role: [controlplane,etcd,worker] - ssh_key_path: ~/.ssh/id_rsa - -1. **Optional:** By default, `rancher-cluster.yml` is configured to take backup snapshots of your data. To disable these snapshots, change the `backup` directive setting to `false`, as depicted below. - - services: - etcd: - backup: false - -## 7. Configure Certificates - -For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, like when you log in or interact with a cluster. - -Choose from the following options: - -{{% accordion id="option-a" label="Option A—Bring Your Own Certificate: Self-Signed" %}} ->**Prerequisites:** ->Create a self-signed certificate. -> ->- The certificate files must be in [PEM format](#pem). ->- The certificate files must be encoded in [base64](#base64). ->- In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [SSL FAQ / Troubleshooting](#cert-order). - -In `kind: Secret` with `name: cattle-keys-server`, replace `` with the base64 encoded string of the CA Certificate file (usually called `ca.pem` or `ca.crt`). - ->**Note:** The base64 encoded string should be on the same line as `cacerts.pem`, without any newline at the beginning, in between or at the end. 
- -After replacing the values, the file should look like the example below (the base64 encoded string should be different): - - --- - apiVersion: v1 - kind: Secret - metadata: - name: cattle-keys-server - namespace: cattle-system - type: Opaque - data: - cacerts.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNvRENDQVlnQ0NRRHVVWjZuMEZWeU16QU5CZ2txaGtpRzl3MEJBUXNGQURBU01SQXdEZ1lEVlFRRERBZDAKWlhOMExXTmhNQjRYRFRFNE1EVXdOakl4TURRd09Wb1hEVEU0TURjd05USXhNRFF3T1Zvd0VqRVFNQTRHQTFVRQpBd3dIZEdWemRDMWpZVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFNQmpBS3dQCndhRUhwQTdaRW1iWWczaTNYNlppVmtGZFJGckJlTmFYTHFPL2R0RUdmWktqYUF0Wm45R1VsckQxZUlUS3UzVHgKOWlGVlV4Mmo1Z0tyWmpwWitCUnFiZ1BNbk5hS1hocmRTdDRtUUN0VFFZdGRYMVFZS0pUbWF5NU45N3FoNTZtWQprMllKRkpOWVhHWlJabkdMUXJQNk04VHZramF0ZnZOdmJ0WmtkY2orYlY3aWhXanp2d2theHRUVjZlUGxuM2p5CnJUeXBBTDliYnlVcHlad3E2MWQvb0Q4VUtwZ2lZM1dOWmN1YnNvSjhxWlRsTnN6UjVadEFJV0tjSE5ZbE93d2oKaG41RE1tSFpwZ0ZGNW14TU52akxPRUc0S0ZRU3laYlV2QzlZRUhLZTUxbGVxa1lmQmtBZWpPY002TnlWQUh1dApuay9DMHpXcGdENkIwbkVDQXdFQUFUQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFHTCtaNkRzK2R4WTZsU2VBClZHSkMvdzE1bHJ2ZXdia1YxN3hvcmlyNEMxVURJSXB6YXdCdFJRSGdSWXVtblVqOGo4T0hFWUFDUEthR3BTVUsKRDVuVWdzV0pMUUV0TDA2eTh6M3A0MDBrSlZFZW9xZlVnYjQrK1JLRVJrWmowWXR3NEN0WHhwOVMzVkd4NmNOQQozZVlqRnRQd2hoYWVEQmdma1hXQWtISXFDcEsrN3RYem9pRGpXbi8walI2VDcrSGlaNEZjZ1AzYnd3K3NjUDIyCjlDQVZ1ZFg4TWpEQ1hTcll0Y0ZINllBanlCSTJjbDhoSkJqa2E3aERpVC9DaFlEZlFFVFZDM3crQjBDYjF1NWcKdE03Z2NGcUw4OVdhMnp5UzdNdXk5bEthUDBvTXl1Ty82Tm1wNjNsVnRHeEZKSFh4WTN6M0lycGxlbTNZQThpTwpmbmlYZXc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== - -{{% /accordion %}} -{{% accordion id="option-b" label="Option B—Bring Your Own Certificate: Signed by Recognized CA" %}} -If you are using a certificate signed by a recognized certificate authority, you don't need to perform any steps in this part. -{{% /accordion %}} - -## 8. Configure FQDN - -There is one reference to `` in the RKE config file. Replace this reference with the FQDN you chose in [3. Configure DNS](#3-configure-dns). - -1. Open `rancher-cluster.yml`. - -2. In the `kind: Ingress` with `name: cattle-ingress-http`: - - Replace `` with the FQDN chosen in [3. Configure DNS](#3-configure-dns). - - **Step Result:** After replacing the values, the file should look like the example below (`rancher.yourdomain.com` is the FQDN used in this example): - - ``` - apiVersion: extensions/v1beta1 - kind: Ingress - metadata: - namespace: cattle-system - name: cattle-ingress-http - annotations: - nginx.ingress.kubernetes.io/proxy-connect-timeout: "30" - nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for websocket connections to keep the shell window open - nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for websocket connections to keep the shell window open - spec: - rules: - - host: rancher.yourdomain.com - http: - paths: - - backend: - serviceName: cattle-service - servicePort: 80 - ``` - - -3. Save the file and close it. - -## 9. Configure Rancher version - -The last reference that needs to be replaced is ``. This needs to be replaced with a Rancher version that is marked as stable. The latest stable release of Rancher can be found in the [GitHub README](https://github.com/rancher/rancher/blob/master/README.md). Make sure the version is an actual version number, and not a named tag like `stable` or `latest`. The example below shows the version configured to `v2.0.6`. - -``` - spec: - serviceAccountName: cattle-admin - containers: - - image: rancher/rancher:v2.0.6 - imagePullPolicy: Always -``` - -## 10. 
Back Up Your RKE Config File - -After you close your RKE config file, `rancher-cluster.yml`, back it up to a secure location. You can use this file again when it's time to upgrade Rancher. - -## 11. Run RKE - -With all configuration in place, use RKE to launch Rancher. You can complete this action by running the `rke up` command and using the `--config` parameter to point toward your config file. - -1. From your workstation, make sure `rancher-cluster.yml` and the downloaded `rke` binary are in the same directory. - -2. Open a Terminal instance. Change to the directory that contains your config file and `rke`. - -3. Enter the `rke up` command listed below. - - ``` - rke up --config rancher-cluster.yml - ``` - - **Step Result:** The output should be similar to the snippet below: - - ``` - INFO[0000] Building Kubernetes cluster - INFO[0000] [dialer] Setup tunnel for host [1.1.1.1] - INFO[0000] [network] Deploying port listener containers - INFO[0000] [network] Pulling image [alpine:latest] on host [1.1.1.1] - ... - INFO[0101] Finished building Kubernetes cluster successfully - ``` - -## 12. Back Up Auto-Generated Config File - -During installation, RKE automatically generates a config file named `kube_config_rancher-cluster.yml` in the same directory as the `rancher-cluster.yml` file. Copy this file and back it up to a safe location. You'll use this file later when upgrading Rancher Server. - -## What's Next? - -- **Recommended:** Review [Creating Backups—High Availability Back Up and Restoration]({{}}/rancher/v2.x/en/backups/backups/ha-backups/) to learn how to back up your Rancher Server in case of a disaster scenario. -- Create a Kubernetes cluster: [Creating a Cluster]({{}}/rancher/v2.x/en/tasks/clusters/creating-a-cluster/). - -
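As a quick, non-authoritative check that the cluster from step 11 is healthy before installing anything on it, you can point `kubectl` at the generated kubeconfig (filenames as used on this page):

```
# Use the kubeconfig that RKE generated next to rancher-cluster.yml.
export KUBECONFIG=$PWD/kube_config_rancher-cluster.yml

# All three nodes should report Ready, and system pods should be Running.
kubectl get nodes
kubectl get pods --all-namespaces
```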
- -## FAQ and Troubleshooting - -{{< ssl_faq_ha >}} diff --git a/content/rancher/v2.x/en/installation/options/rke-add-on/layer-7-lb/alb/_index.md b/content/rancher/v2.x/en/installation/options/rke-add-on/layer-7-lb/alb/_index.md deleted file mode 100644 index 760f25be970..00000000000 --- a/content/rancher/v2.x/en/installation/options/rke-add-on/layer-7-lb/alb/_index.md +++ /dev/null @@ -1,104 +0,0 @@ ---- -title: Amazon ALB Configuration -weight: 277 -aliases: -- /rancher/v2.x/en/installation/ha-server-install-external-lb/alb/ -- /rancher/v2.x/en/installation/ha/rke-add-on/layer-7-lb/alb ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -## Objectives - -Configuring an Amazon ALB is a multistage process. We've broken it down into multiple tasks so that it's easy to follow. - -1. [Create Target Group](#create-target-group) - - Begin by creating one target group for the HTTP protocol. You'll add your Linux nodes to this group. - -2. [Register Targets](#register-targets) - - Add your Linux nodes to the target group. - -3. [Create Your ALB](#create-your-alb) - - Use Amazon's Wizard to create an Application Load Balancer. As part of this process, you'll add the target group you created in **1. Create Target Group**. - - -## Create Target Group - -Your first ALB configuration step is to create one target group for HTTP. - -Log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) to get started. - -The document below will guide you through this process. Use the data in the tables below to complete the procedure. - -[Amazon Documentation: Create a Target Group](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-target-group.html) - -### Target Group (HTTP) - -Option | Setting ----------------------------|------------------------------------ -Target Group Name | `rancher-http-80` -Protocol | `HTTP` -Port | `80` -Target type | `instance` -VPC | Choose your VPC -Protocol (Health Check) | `HTTP` -Path (Health Check) | `/healthz` - -## Register Targets - -Next, add your Linux nodes to your target group. - -[Amazon Documentation: Register Targets with Your Target Group](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-register-targets.html) - -## Create Your ALB - -Use Amazon's Wizard to create an Application Load Balancer. As part of this process, you'll add the target group you created in [Create Target Group](#create-target-group). - -1. From your web browser, navigate to the [Amazon EC2 Console](https://console.aws.amazon.com/ec2/). - -2. From the navigation pane, choose **LOAD BALANCING** > **Load Balancers**. - -3. Click **Create Load Balancer**. - -4. Choose **Application Load Balancer**. - -5. Complete the **Step 1: Configure Load Balancer** form. - - **Basic Configuration** - - - Name: `rancher-http` - - Scheme: `internet-facing` - - IP address type: `ipv4` - - **Listeners** - - Add the **Load Balancer Protocols** and **Load Balancer Ports** below. - - `HTTP`: `80` - - `HTTPS`: `443` - - - **Availability Zones** - - - Select Your **VPC** and **Availability Zones**. - -6. Complete the **Step 2: Configure Security Settings** form. - - Configure the certificate you want to use for SSL termination. - -7. Complete the **Step 3: Configure Security Groups** form. - -8. Complete the **Step 4: Configure Routing** form. - - - From the **Target Group** drop-down, choose **Existing target group**. - - - Add target group `rancher-http-80`. - -9. Complete **Step 5: Register Targets**. Since you registered your targets earlier, all you have to do is click **Next: Review**. - -10. Complete **Step 6: Review**. Look over the load balancer details and click **Create** when you're satisfied. - -11. After AWS creates the ALB, click **Close**. diff --git a/content/rancher/v2.x/en/installation/options/rke-add-on/layer-7-lb/nginx/_index.md b/content/rancher/v2.x/en/installation/options/rke-add-on/layer-7-lb/nginx/_index.md deleted file mode 100644 index c2b9f8fe1ad..00000000000 --- a/content/rancher/v2.x/en/installation/options/rke-add-on/layer-7-lb/nginx/_index.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: NGINX Configuration -weight: 277 -aliases: -- /rancher/v2.x/en/installation/ha-server-install-external-lb/nginx/ -- /rancher/v2.x/en/installation/ha/rke-add-on/layer-7-lb/nginx ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -## Install NGINX - -Start by installing NGINX on your load balancer host. NGINX has packages available for all known operating systems. - -For help installing NGINX, refer to their [install documentation](https://www.nginx.com/resources/wiki/start/topics/tutorials/install/). - -## Create NGINX Configuration - -See [Example NGINX config]({{}}/rancher/v2.x/en/installation/options/chart-options/#example-nginx-config). 
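Before reloading, it can be worth validating the edited configuration; a syntax error would otherwise take the load balancer down with it. A minimal sketch:

```
# Test the configuration that NGINX would load on the next reload/restart.
nginx -t
```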
- -## Run NGINX - -* Reload or restart NGINX - - ```` - # Reload NGINX - nginx -s reload - - # Restart NGINX - # Depending on your Linux distribution - service nginx restart - systemctl restart nginx - ```` - -## Browse to Rancher UI - -You should now be able to browse to `https://FQDN`. diff --git a/content/rancher/v2.x/en/installation/options/rke-add-on/proxy/_index.md b/content/rancher/v2.x/en/installation/options/rke-add-on/proxy/_index.md deleted file mode 100644 index 4345e9cb121..00000000000 --- a/content/rancher/v2.x/en/installation/options/rke-add-on/proxy/_index.md +++ /dev/null @@ -1,71 +0,0 @@ ---- -title: HTTP Proxy Configuration -weight: 277 -aliases: - - /rancher/v2.x/en/installation/ha/rke-add-on/proxy ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -If you operate Rancher behind a proxy and you want to access services through the proxy (such as retrieving catalogs), you must provide Rancher with information about your proxy. As Rancher is written in Go, it uses the common proxy environment variables as shown below. - -Make sure `NO_PROXY` contains the network addresses, network address ranges and domains that should be excluded from using the proxy. - -Environment variable | Purpose ---------------------------|--------- -HTTP_PROXY | Proxy address to use when initiating HTTP connection(s) -HTTPS_PROXY | Proxy address to use when initiating HTTPS connection(s) -NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s) - -> **Note** NO_PROXY must be in uppercase to use network range (CIDR) notation. - -## Installing Rancher on a Kubernetes Cluster - -When using the Kubernetes installation, the environment variables need to be added to the RKE Config File template. - -* [Kubernetes Installation with External Load Balancer (TCP/Layer 4) RKE Config File Template]({{}}/rancher/v2.x/en/installation/ha-server-install/#5-download-rke-config-file-template) -* [Kubernetes Installation with External Load Balancer (HTTPS/Layer 7) RKE Config File Template]({{}}/rancher/v2.x/en/installation/ha-server-install-external-lb/#5-download-rke-config-file-template) - -The environment variables should be defined in the `Deployment` inside the RKE Config File Template. You only have to add the part starting with `env:` up to (but not including) `ports:`. Make sure the indentation is identical to the preceding `name:`. Required values for `NO_PROXY` are: - -* `localhost` -* `127.0.0.1` -* `0.0.0.0` -* Configured `service_cluster_ip_range` (default: `10.43.0.0/16`) - -The example below is based on a proxy server accessible at `http://192.168.10.1:3128`, and excluding usage of the proxy when accessing network range `192.168.10.0/24`, the configured `service_cluster_ip_range` (`10.43.0.0/16`) and every hostname under the domain `example.com`. If you have changed the `service_cluster_ip_range`, you have to update the value below accordingly. - -```yaml -... 
--- - kind: Deployment - apiVersion: extensions/v1beta1 - metadata: - namespace: cattle-system - name: cattle - spec: - replicas: 1 - template: - metadata: - labels: - app: cattle - spec: - serviceAccountName: cattle-admin - containers: - - image: rancher/rancher:latest - imagePullPolicy: Always - name: cattle-server - env: - - name: HTTP_PROXY - value: "http://192.168.10.1:3128" - - name: HTTPS_PROXY - value: "http://192.168.10.1:3128" - - name: NO_PROXY - value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,192.168.10.0/24,example.com" - ports: -... -``` diff --git a/content/rancher/v2.x/en/installation/options/rke-add-on/troubleshooting/404-default-backend/_index.md b/content/rancher/v2.x/en/installation/options/rke-add-on/troubleshooting/404-default-backend/_index.md deleted file mode 100644 index 0b036c0df28..00000000000 --- a/content/rancher/v2.x/en/installation/options/rke-add-on/troubleshooting/404-default-backend/_index.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -title: 404 - default backend -weight: 30 -aliases: -- /rancher/v2.x/en/installation/troubleshooting-ha/404-default-backend/ ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -To debug issues around this error, you will need to download the command-line tool `kubectl`. See [Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for how to download `kubectl` for your platform. - -When you have made changes to `rancher-cluster.yml`, you will have to run `rke remove --config rancher-cluster.yml` to clean the nodes, so that leftover state cannot conflict with your new configuration. - -### Possible causes - -The nginx ingress controller is not able to serve the configured host in `rancher-cluster.yml`. This should be the FQDN you configured to access Rancher. You can check if it is properly configured by viewing the ingress that is created by running the following command: - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml get ingress -n cattle-system -o wide -``` - -Check if the `HOSTS` column is displaying the FQDN you configured in the template, and that the used nodes are listed in the `ADDRESS` column. If that is configured correctly, we can check the logging of the nginx ingress controller. - -The logging of the nginx ingress controller will show why it cannot serve the requested host. To view the logs, you can run the following command: - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l app=ingress-nginx -n ingress-nginx -``` - -### Errors - -* `x509: certificate is valid for fqdn, not your_configured_fqdn` - -The used certificates do not contain the correct hostname. Generate new certificates that contain the chosen FQDN to access Rancher and redeploy. - -* `Port 80 is already in use. Please check the flag --http-port` - -There is a process occupying port 80 on the node; this port is needed for the nginx ingress controller to route requests to Rancher. You can find the process by running the command: `netstat -plant | grep \:80`. - -Stop/kill the process and redeploy. 
- -* `unexpected error creating pem file: no valid PEM formatted block found` - -The base64 encoded string configured in the template is not valid. Please check if you can decode the configured string using `base64 -D STRING`; this should return the same output as the content of the file you used to generate the string. If this is correct, please check if the base64 encoded string is placed directly after the key, without any newlines before, in between or after. (For example: `tls.crt: LS01..`) diff --git a/content/rancher/v2.x/en/installation/options/rke-add-on/troubleshooting/_index.md b/content/rancher/v2.x/en/installation/options/rke-add-on/troubleshooting/_index.md deleted file mode 100644 index e9362246aec..00000000000 --- a/content/rancher/v2.x/en/installation/options/rke-add-on/troubleshooting/_index.md +++ /dev/null @@ -1,32 +0,0 @@ ---- -title: Troubleshooting HA RKE Add-On Install -weight: 370 -aliases: -- /rancher/v2.x/en/installation/troubleshooting-ha/ ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -This section contains common errors seen when setting up a Kubernetes installation. - -Choose from the following options: - -- [Generic troubleshooting](generic-troubleshooting/) - - In this section, you can find generic ways to debug your Kubernetes cluster. - -- [Failed to set up SSH tunneling for host]({{}}/rke/latest/en/troubleshooting/ssh-connectivity-errors/) - - In this section, you can find errors related to SSH tunneling when you run the `rke` command to set up your nodes. - -- [Failed to get job complete status](job-complete-status/) - - In this section, you can find errors related to deploying addons. - -- [404 - default backend](404-default-backend/) - - In this section, you can find errors related to the `404 - default backend` page that is shown when trying to access Rancher. diff --git a/content/rancher/v2.x/en/installation/options/rke-add-on/troubleshooting/generic-troubleshooting/_index.md b/content/rancher/v2.x/en/installation/options/rke-add-on/troubleshooting/generic-troubleshooting/_index.md deleted file mode 100644 index df8d4381f4a..00000000000 --- a/content/rancher/v2.x/en/installation/options/rke-add-on/troubleshooting/generic-troubleshooting/_index.md +++ /dev/null @@ -1,161 +0,0 @@ ---- -title: Generic troubleshooting -weight: 5 -aliases: -- /rancher/v2.x/en/installation/troubleshooting-ha/generic-troubleshooting/ ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -Below are steps that you can follow to determine what is wrong in your cluster. 
- -### Double check if all the required ports are opened in your (host) firewall - -Double check if all the [required ports]({{}}/rancher/v2.x/en/cluster-provisioning/node-requirements/#networking-requirements/) are opened in your (host) firewall. - -### All nodes should be present and in **Ready** state - -To check, run the command: - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml get nodes -``` - -If a node is not shown in this output or a node is not in **Ready** state, you can check the logging of the `kubelet` container. Login to the node and run `docker logs kubelet`. - -### All pods/jobs should be in **Running**/**Completed** state - -To check, run the command: - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml get pods --all-namespaces -``` - -If a pod is not in **Running** state, you can dig into the root cause by running: - -#### Describe pod - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml describe pod POD_NAME -n NAMESPACE -``` - -#### Pod container logs - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml logs POD_NAME -n NAMESPACE -``` - -If a job is not in **Completed** state, you can dig into the root cause by running: - -#### Describe job - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml describe job JOB_NAME -n NAMESPACE -``` - -#### Logs from the containers of pods of the job - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l job-name=JOB_NAME -n NAMESPACE -``` - -### Check ingress - -Ingress should have the correct `HOSTS` (showing the configured FQDN) and `ADDRESS` (address(es) it will be routed to). - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml get ingress --all-namespaces -``` - -### List all Kubernetes cluster events - -Kubernetes cluster events are stored, and can be retrieved by running: - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml get events --all-namespaces -``` - -### Check Rancher container logging - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l app=cattle -n cattle-system -``` - -### Check NGINX ingress controller logging - -``` -kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l app=ingress-nginx -n ingress-nginx -``` - -### Check if overlay network is functioning correctly - -The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod. - -To test the overlay network, you can launch the following `DaemonSet` definition. This will run an `alpine` container on every host, which we will use to run a `ping` test between containers on all hosts. - -1. Save the following file as `ds-alpine.yml` - - ``` - apiVersion: apps/v1 - kind: DaemonSet - metadata: - name: alpine - spec: - selector: - matchLabels: - name: alpine - template: - metadata: - labels: - name: alpine - spec: - tolerations: - - effect: NoExecute - key: "node-role.kubernetes.io/etcd" - value: "true" - - effect: NoSchedule - key: "node-role.kubernetes.io/controlplane" - value: "true" - containers: - - image: alpine - imagePullPolicy: Always - name: alpine - command: ["sh", "-c", "tail -f /dev/null"] - terminationMessagePath: /dev/termination-log - ``` - -2. 
Launch it using `kubectl --kubeconfig kube_config_rancher-cluster.yml create -f ds-alpine.yml` -3. Wait until `kubectl --kubeconfig kube_config_rancher-cluster.yml rollout status ds/alpine -w` returns: `daemon set "alpine" successfully rolled out`. -4. Run the following command to let each container on every host ping each other (it's a single line command). - - ``` - echo "=> Start"; kubectl --kubeconfig kube_config_rancher-cluster.yml get pods -l name=alpine -o jsonpath='{range .items[*]}{@.metadata.name}{" "}{@.spec.nodeName}{"\n"}{end}' | while read spod shost; do kubectl --kubeconfig kube_config_rancher-cluster.yml get pods -l name=alpine -o jsonpath='{range .items[*]}{@.status.podIP}{" "}{@.spec.nodeName}{"\n"}{end}' | while read tip thost; do kubectl --kubeconfig kube_config_rancher-cluster.yml --request-timeout='10s' exec $spod -- /bin/sh -c "ping -c2 $tip > /dev/null 2>&1"; RC=$?; if [ $RC -ne 0 ]; then echo $shost cannot reach $thost; fi; done; done; echo "=> End" - ``` - -5. When this command has finished running, the output indicating everything is correct is: - - ``` - => Start - => End - ``` - -If you see error in the output, that means that the [required ports]({{}}/rancher/v2.x/en/cluster-provisioning/node-requirements/#networking-requirements/) for overlay networking are not opened between the hosts indicated. - -Example error output of a situation where NODE1 had the UDP ports blocked. - -``` -=> Start -command terminated with exit code 1 -NODE2 cannot reach NODE1 -command terminated with exit code 1 -NODE3 cannot reach NODE1 -command terminated with exit code 1 -NODE1 cannot reach NODE2 -command terminated with exit code 1 -NODE1 cannot reach NODE3 -=> End -``` diff --git a/content/rancher/v2.x/en/installation/options/rke-add-on/troubleshooting/job-complete-status/_index.md b/content/rancher/v2.x/en/installation/options/rke-add-on/troubleshooting/job-complete-status/_index.md deleted file mode 100644 index 8fd5e32b41b..00000000000 --- a/content/rancher/v2.x/en/installation/options/rke-add-on/troubleshooting/job-complete-status/_index.md +++ /dev/null @@ -1,62 +0,0 @@ ---- -title: Failed to get job complete status -weight: 20 -aliases: -- /rancher/v2.x/en/installation/troubleshooting-ha/job-complete-status/ ---- - -> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** -> ->Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). -> ->If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. - -To debug issues around this error, you will need to download the command-line tool `kubectl`. See [Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) how to download `kubectl` for your platform. - -When you have made changes to `rancher-cluster.yml`, you will have to run `rke remove --config rancher-cluster.yml` to clean the nodes, so it cannot conflict with previous configuration errors. 
### Failed to deploy addon execute job [rke-user-includes-addons]: Failed to get job complete status

Something is wrong in the addon definitions. You can find the root cause in the job's logs by running:

```
kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l job-name=rke-user-addon-deploy-job -n kube-system
```

#### error: error converting YAML to JSON: yaml: line 9:

The structure of the addons definition in `rancher-cluster.yml` is wrong: one of the resources specified in the addons section contains an error in its YAML structure. The pointer `yaml line 9` refers to the line number of the addon that is causing issues.

Things to check:
- Is each base64 encoded certificate string placed directly after its key, for example `tls.crt: LS01...`? There should be no newline or space before, in between, or after it.
- Is the YAML properly formatted? Each indentation level should be 2 spaces, as shown in the template files (see the lint sketch after this list).
- Verify the integrity of your certificate by running `cat MyCertificate | base64 -d` on Linux or `cat MyCertificate | base64 -D` on Mac OS. If the string is invalid, the command output will tell you.
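If the line number alone doesn't reveal the problem, a local YAML lint pass can surface indentation and structure mistakes before you re-run `rke`. A minimal sketch, assuming the `yamllint` tool is installed; the tool choice is an illustration, not part of the original procedure:

```
# Lint the cluster configuration, including the addons section;
# indentation and structure errors are reported with line numbers
yamllint rancher-cluster.yml
```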
#### Error from server (BadRequest): error when creating "/etc/config/rke-user-addon.yaml": Secret in version "v1" cannot be handled as a Secret

The base64 string of one of the certificate strings is wrong. The log message will try to show you what part of the string is not recognized as valid base64.

Things to check:
- Check if the base64 string is valid by running one of the commands below:

  ```
  # MacOS
  echo BASE64_CRT | base64 -D
  # Linux
  echo BASE64_CRT | base64 -d
  # Windows
  certutil -decode FILENAME.base64 FILENAME.verify
  ```
#### The Ingress "cattle-ingress-http" is invalid: spec.rules[0].host: Invalid value: "IP": must be a DNS name, not an IP address

The host value can only contain a hostname, because the ingress controller needs it to match the hostname of the request and pass traffic to the correct backend.

diff --git a/content/rancher/v2.x/en/installation/options/single-node-install-external-lb/_index.md b/content/rancher/v2.x/en/installation/options/single-node-install-external-lb/_index.md deleted file mode 100644 index c2aa176b058..00000000000 --- a/content/rancher/v2.x/en/installation/options/single-node-install-external-lb/_index.md +++ /dev/null @@ -1,243 +0,0 @@

---
title: Docker Install with TLS Termination at Layer-7 NGINX Load Balancer
weight: 252
aliases:
  - /rancher/v2.x/en/installation/single-node/single-node-install-external-lb/
  - /rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-install-external-lb
---

For development and testing environments that have a special requirement to terminate TLS/SSL at a load balancer instead of at the Rancher Server container, deploy Rancher and configure a load balancer to work in conjunction with it.

A layer-7 load balancer can be beneficial if you want to centralize TLS termination in your infrastructure. Layer-7 load balancing also lets the load balancer make decisions based on HTTP attributes, such as cookies, that a layer-4 load balancer cannot inspect.

This install procedure walks you through deployment of Rancher using a single container, and then provides a sample configuration for a layer-7 NGINX load balancer.

> **Want to skip the external load balancer?**
> See [Docker Installation]({{}}/rancher/v2.x/en/installation/single-node) instead.

## Requirements for OS, Docker, Hardware, and Networking

Make sure that your node fulfills the general [installation requirements.]({{}}/rancher/v2.x/en/installation/requirements/)

## Installation Outline

- [1. Provision Linux Host](#1-provision-linux-host)
- [2. Choose an SSL Option and Install Rancher](#2-choose-an-ssl-option-and-install-rancher)
- [3. Configure Load Balancer](#3-configure-load-balancer)

## 1. Provision Linux Host

Provision a single Linux host according to our [Requirements]({{}}/rancher/v2.x/en/installation/requirements) to launch your {{< product >}} Server.

## 2. Choose an SSL Option and Install Rancher

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.

> **Do you want to...**
>
> - Complete an Air Gap Installation?
> - Record all transactions with the Rancher API?
>
> See [Advanced Options](#advanced-options) below before continuing.

Choose from the following options:

{{% accordion id="option-a" label="Option A-Bring Your Own Certificate: Self-Signed" %}}
If you elect to use a self-signed certificate to encrypt communication, you must install the certificate on your load balancer (which you'll do later) and in your Rancher container. Run the Docker command to deploy Rancher, pointing it toward your certificate.

> **Prerequisites:**
> Create a self-signed certificate.
>
> - The certificate files must be in [PEM format](#pem).

**To Install Rancher Using a Self-Signed Cert:**

1. While running the Docker command to deploy Rancher, point Docker toward your CA certificate file.
   ```
   docker run -d --restart=unless-stopped \
     -p 80:80 -p 443:443 \
     -v /etc/your_certificate_directory/cacerts.pem:/etc/rancher/ssl/cacerts.pem \
     rancher/rancher:latest
   ```

{{% /accordion %}}
{{% accordion id="option-b" label="Option B-Bring Your Own Certificate: Signed by Recognized CA" %}}
If your cluster is public facing, it's best to use a certificate signed by a recognized CA.

> **Prerequisites:**
>
> - The certificate files must be in [PEM format](#pem).

**To Install Rancher Using a Cert Signed by a Recognized CA:**

If you use a certificate signed by a recognized CA, installing your certificate in the Rancher container isn't necessary. However, you do have to make sure no default CA certificate is generated and stored; you can do this by passing the `--no-cacerts` parameter to the container.

1. Enter the following command.

   ```
   docker run -d --restart=unless-stopped \
     -p 80:80 -p 443:443 \
     rancher/rancher:latest --no-cacerts
   ```

{{% /accordion %}}

## 3. Configure Load Balancer

When using a load balancer in front of your Rancher container, there's no need for the container to redirect communication from port 80 to port 443. Passing the `X-Forwarded-Proto: https` header disables this redirect.

The load balancer or proxy has to be configured to support the following:

- **WebSocket** connections
- **SPDY** / **HTTP/2** protocols
- Passing / setting the following headers:

| Header | Value | Description |
|--------|-------|-------------|
| `Host` | Hostname used to reach Rancher. | To identify the server requested by the client. |
| `X-Forwarded-Proto` | `https` | To identify the protocol that a client used to connect to the load balancer or proxy. **Note:** If this header is present, `rancher/rancher` does not redirect HTTP to HTTPS. |
| `X-Forwarded-Port` | Port used to reach Rancher. | To identify the port that the client used to connect to the load balancer or proxy. |
| `X-Forwarded-For` | IP of the client connection. | To identify the originating IP address of a client. |

### Example NGINX configuration

This NGINX configuration is tested on NGINX 1.14.

> **Note:** This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - HTTP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/).

- Replace `rancher-server` with the IP address or hostname of the node running the Rancher container.
- Replace both occurrences of `FQDN` with the DNS name for Rancher.
- Replace `/certs/fullchain.pem` and `/certs/privkey.pem` with the locations of the server certificate and the server certificate key respectively.

```
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

http {
    upstream rancher {
        server rancher-server:80;
    }

    map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
    }

    server {
        listen 443 ssl http2;
        server_name FQDN;
        ssl_certificate /certs/fullchain.pem;
        ssl_certificate_key /certs/privkey.pem;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://rancher;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            # This allows the execute shell window to remain open for up to 15 minutes. Without this parameter, the default is 1 minute and the window will automatically close.
            proxy_read_timeout 900s;
            proxy_buffering off;
        }
    }

    server {
        listen 80;
        server_name FQDN;
        return 301 https://$server_name$request_uri;
    }
}
```
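Before relying on the configuration above, it is worth validating and reloading it on the load balancer host. A minimal sketch, assuming NGINX runs as a system service and with `FQDN` as a placeholder for your Rancher hostname:

```
# Validate the configuration file syntax without affecting the running instance
nginx -t

# Reload NGINX so the new configuration takes effect without dropping connections
nginx -s reload

# Confirm that plain HTTP is redirected to HTTPS (expect a 301 to https://FQDN)
curl -I http://FQDN

# Confirm that Rancher answers over TLS through the load balancer
curl -kI https://FQDN
```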
## What's Next?

- **Recommended:** Review [Single Node Backup and Restoration]({{}}/rancher/v2.x/en/installation/backups-and-restoration/single-node-backup-and-restoration/). Although you don't have any data you need to back up right now, we recommend creating backups after regular Rancher use.
- Create a Kubernetes cluster: [Provisioning Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-provisioning/).
- -## FAQ and Troubleshooting - -{{< ssl_faq_single >}} - -## Advanced Options - -### API Auditing - -If you want to record all transactions with the Rancher API, enable the [API Auditing]({{}}/rancher/v2.x/en/installation/api-auditing) feature by adding the flags below into your install command. - - -e AUDIT_LEVEL=1 \ - -e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \ - -e AUDIT_LOG_MAXAGE=20 \ - -e AUDIT_LOG_MAXBACKUP=20 \ - -e AUDIT_LOG_MAXSIZE=100 \ - -### Air Gap - -If you are visiting this page to complete an [Air Gap Installation]({{}}/rancher/v2.x/en/installation/air-gap-installation/), you must pre-pend your private registry URL to the server tag when running the installation command in the option that you choose. Add `` with your private registry URL in front of `rancher/rancher:latest`. - -**Example:** - - /rancher/rancher:latest - -### Persistent Data - -{{< persistentdata >}} - -This layer 7 NGINX configuration is tested on NGINX version 1.13 (mainline) and 1.14 (stable). - -> **Note:** This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - TCP and UDP Load Balancer](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/). - -``` -upstream rancher { - server rancher-server:80; -} - -map $http_upgrade $connection_upgrade { - default Upgrade; - '' close; -} - -server { - listen 443 ssl http2; - server_name rancher.yourdomain.com; - ssl_certificate /etc/your_certificate_directory/fullchain.pem; - ssl_certificate_key /etc/your_certificate_directory/privkey.pem; - - location / { - proxy_set_header Host $host; - proxy_set_header X-Forwarded-Proto $scheme; - proxy_set_header X-Forwarded-Port $server_port; - proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; - proxy_pass http://rancher; - proxy_http_version 1.1; - proxy_set_header Upgrade $http_upgrade; - proxy_set_header Connection $connection_upgrade; - # This allows the ability for the execute shell window to remain open for up to 15 minutes. Without this parameter, the default is 1 minute and will automatically close. - proxy_read_timeout 900s; - proxy_buffering off; - } -} - -server { - listen 80; - server_name rancher.yourdomain.com; - return 301 https://$server_name$request_uri; -} -``` - -
- diff --git a/content/rancher/v2.x/en/installation/options/troubleshooting/_index.md b/content/rancher/v2.x/en/installation/options/troubleshooting/_index.md deleted file mode 100644 index 2cd45a2b381..00000000000 --- a/content/rancher/v2.x/en/installation/options/troubleshooting/_index.md +++ /dev/null @@ -1,189 +0,0 @@ ---- -title: Troubleshooting the Rancher Server Kubernetes Cluster -weight: 276 -aliases: - - /rancher/v2.x/en/installation/k8s-install/helm-rancher/troubleshooting/ - - /rancher/v2.x/en/installation/ha/kubernetes-rke/troubleshooting - - /rancher/v2.x/en/installation/k8s-install/kubernetes-rke/troubleshooting ---- - -This section describes how to troubleshoot an installation of Rancher on a Kubernetes cluster. - -### Relevant Namespaces - -Most of the troubleshooting will be done on objects in these 3 namespaces. - -- `cattle-system` - `rancher` deployment and pods. -- `ingress-nginx` - Ingress controller pods and services. -- `cert-manager` - `cert-manager` pods. - -### "default backend - 404" - -A number of things can cause the ingress-controller not to forward traffic to your rancher instance. Most of the time its due to a bad ssl configuration. - -Things to check - -- [Is Rancher Running](#is-rancher-running) -- [Cert CN is "Kubernetes Ingress Controller Fake Certificate"](#cert-cn-is-kubernetes-ingress-controller-fake-certificate) - -### Check if Rancher is Running - -Use `kubectl` to check the `cattle-system` system namespace and see if the Rancher pods are in a Running state. - -``` -kubectl -n cattle-system get pods - -NAME READY STATUS RESTARTS AGE -pod/rancher-784d94f59b-vgqzh 1/1 Running 0 10m -``` - -If the state is not `Running`, run a `describe` on the pod and check the Events. - -``` -kubectl -n cattle-system describe pod - -... -Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal Scheduled 11m default-scheduler Successfully assigned rancher-784d94f59b-vgqzh to localhost - Normal SuccessfulMountVolume 11m kubelet, localhost MountVolume.SetUp succeeded for volume "rancher-token-dj4mt" - Normal Pulling 11m kubelet, localhost pulling image "rancher/rancher:v2.0.4" - Normal Pulled 11m kubelet, localhost Successfully pulled image "rancher/rancher:v2.0.4" - Normal Created 11m kubelet, localhost Created container - Normal Started 11m kubelet, localhost Started container -``` - -### Check the Rancher Logs - -Use `kubectl` to list the pods. - -``` -kubectl -n cattle-system get pods - -NAME READY STATUS RESTARTS AGE -pod/rancher-784d94f59b-vgqzh 1/1 Running 0 10m -``` - -Use `kubectl` and the pod name to list the logs from the pod. - -``` -kubectl -n cattle-system logs -f rancher-784d94f59b-vgqzh -``` - -### Cert CN is "Kubernetes Ingress Controller Fake Certificate" - -Use your browser to check the certificate details. If it says the Common Name is "Kubernetes Ingress Controller Fake Certificate", something may have gone wrong with reading or issuing your SSL cert. - -> **Note:** if you are using LetsEncrypt to issue certs it can sometimes take a few minuets to issue the cert. - -### Checking for issues with cert-manager issued certs (Rancher Generated or LetsEncrypt) - -`cert-manager` has 3 parts. - -- `cert-manager` pod in the `cert-manager` namespace. -- `Issuer` object in the `cattle-system` namespace. -- `Certificate` object in the `cattle-system` namespace. - -Work backwards and do a `kubectl describe` on each object and check the events. You can track down what might be missing. 
- -For example there is a problem with the Issuer: - -``` -kubectl -n cattle-system describe certificate -... -Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Warning IssuerNotReady 18s (x23 over 19m) cert-manager Issuer rancher not ready -``` - -``` -kubectl -n cattle-system describe issuer -... -Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Warning ErrInitIssuer 19m (x12 over 19m) cert-manager Error initializing issuer: secret "tls-rancher" not found - Warning ErrGetKeyPair 9m (x16 over 19m) cert-manager Error getting keypair for CA issuer: secret "tls-rancher" not found -``` - -### Checking for Issues with Your Own SSL Certs - -Your certs get applied directly to the Ingress object in the `cattle-system` namespace. - -Check the status of the Ingress object and see if its ready. - -``` -kubectl -n cattle-system describe ingress -``` - -If its ready and the SSL is still not working you may have a malformed cert or secret. - -Check the nginx-ingress-controller logs. Because the nginx-ingress-controller has multiple containers in its pod you will need to specify the name of the container. - -``` -kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller -... -W0705 23:04:58.240571 7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found -``` - -### No matches for kind "Issuer" - -The [SSL configuration]({{}}/rancher/v2.x/en/installation/k8s-install/helm-rancher/#choose-your-ssl-configuration) option you have chosen requires [cert-manager]({{}}/rancher/v2.x/en/installation/k8s-install/helm-rancher/#optional-install-cert-manager) to be installed before installing Rancher or else the following error is shown: - -``` -Error: validation failed: unable to recognize "": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1" -``` - -Install [cert-manager]({{}}/rancher/v2.x/en/installation/k8s-install/helm-rancher/#optional-install-cert-manager) and try installing Rancher again. - - -### Canal Pods show READY 2/3 - -The most common cause of this issue is port 8472/UDP is not open between the nodes. Check your local firewall, network routing or security groups. - -Once the network issue is resolved, the `canal` pods should timeout and restart to establish their connections. - -### nginx-ingress-controller Pods show RESTARTS - -The most common cause of this issue is the `canal` pods have failed to establish the overlay network. See [canal Pods show READY `2/3`](#canal-pods-show-ready-2-3) for troubleshooting. - - -### Failed to dial to /var/run/docker.sock: ssh: rejected: administratively prohibited (open failed) - -Some causes of this error include: - -* User specified to connect with does not have permission to access the Docker socket. This can be checked by logging into the host and running the command `docker ps`: - -``` -$ ssh user@server -user@server$ docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -``` - -See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) how to set this up properly. - -* When using RedHat/CentOS as operating system, you cannot use the user `root` to connect to the nodes because of [Bugzilla #1527565](https://bugzilla.redhat.com/show_bug.cgi?id=1527565). You will need to add a separate user and configure it to access the Docker socket. 
See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) how to set this up properly. - -* SSH server version is not version 6.7 or higher. This is needed for socket forwarding to work, which is used to connect to the Docker socket over SSH. This can be checked using `sshd -V` on the host you are connecting to, or using netcat: -``` -$ nc xxx.xxx.xxx.xxx 22 -SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.10 -``` - -### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: Error configuring SSH: ssh: no key found - -The key file specified as `ssh_key_path` cannot be accessed. Make sure that you specified the private key file (not the public key, `.pub`), and that the user that is running the `rke` command can access the private key file. - -### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain - -The key file specified as `ssh_key_path` is not correct for accessing the node. Double-check if you specified the correct `ssh_key_path` for the node and if you specified the correct user to connect with. - -### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: Error configuring SSH: ssh: cannot decode encrypted private keys - -If you want to use encrypted private keys, you should use `ssh-agent` to load your keys with your passphrase. If the `SSH_AUTH_SOCK` environment variable is found in the environment where the `rke` command is run, it will be used automatically to connect to the node. - -### Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? - -The node is not reachable on the configured `address` and `port`. diff --git a/content/rancher/v2.x/en/installation/options/upgrading-cert-manager/helm-2-instructions/_index.md b/content/rancher/v2.x/en/installation/options/upgrading-cert-manager/helm-2-instructions/_index.md deleted file mode 100644 index 4aee1733f98..00000000000 --- a/content/rancher/v2.x/en/installation/options/upgrading-cert-manager/helm-2-instructions/_index.md +++ /dev/null @@ -1,172 +0,0 @@ ---- -title: Upgrading Cert-Manager with Helm 2 -weight: 2040 ---- - -Rancher uses cert-manager to automatically generate and renew TLS certificates for HA deployments of Rancher. As of Fall 2019, three important changes to cert-manager are set to occur that you need to take action on if you have an HA deployment of Rancher: - -1. [Let's Encrypt will be blocking cert-manager instances older than 0.8.0 starting November 1st 2019.](https://community.letsencrypt.org/t/blocking-old-cert-manager-versions/98753) -1. [Cert-manager is deprecating and replacing the certificate.spec.acme.solvers field](https://docs.cert-manager.io/en/latest/tasks/upgrading/upgrading-0.7-0.8.html#upgrading-from-v0-7-to-v0-8). This change has no exact deadline. -1. [Cert-manager is deprecating `v1alpha1` API and replacing its API group](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/) - -To address these changes, this guide will do two things: - -1. Document the procedure for upgrading cert-manager -1. Explain the cert-manager API changes and link to cert-manager's offficial documentation for migrating your data - -> **Important:** -> If you are currently running the cert-manger whose version is older than v0.11, and want to upgrade both Rancher and cert-manager to a newer version, you need to reinstall both of them: - -> 1. 
Take a one-time snapshot of your Kubernetes cluster running Rancher server -> 2. Uninstall Rancher, cert-manager, and the CustomResourceDefinition for cert-manager -> 3. Install the newer version of Rancher and cert-manager - -> The reason is that when Helm upgrades Rancher, it will reject the upgrade and show error messages if the running Rancher app does not match the chart template used to install it. Because cert-manager changed its API group and we cannot modify released charts for Rancher, there will always be a mismatch on the cert-manager's API version, therefore the upgrade will be rejected. - -> For reinstalling Rancher with Helm, please check [Option B: Reinstalling Rancher Chart]({{}}/rancher/v2.x/en/upgrades/upgrades/ha/#c-upgrade-rancher) under the upgrade Rancher section. - -## Upgrade Cert-Manager Only - -> **Note:** -> These instructions are applied if you have no plan to upgrade Rancher. - -The namespace used in these instructions depends on the namespace cert-manager is currently installed in. If it is in kube-system use that in the instructions below. You can verify by running `kubectl get pods --all-namespaces` and checking which namespace the cert-manager-\* pods are listed in. Do not change the namespace cert-manager is running in or this can cause issues. - -In order to upgrade cert-manager, follow these instructions: - -{{% accordion id="normal" label="Upgrading cert-manager with Internet access" %}} -1. Back up existing resources as a precaution - - ```plain - kubectl get -o yaml --all-namespaces issuer,clusterissuer,certificates > cert-manager-backup.yaml - ``` - -1. Delete the existing deployment - - ```plain - helm delete --purge cert-manager - ``` - -1. Install the CustomResourceDefinition resources separately - - ```plain - kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml - ``` - -1. Add the Jetstack Helm repository - - ```plain - helm repo add jetstack https://charts.jetstack.io - ``` - -1. Update your local Helm chart repository cache - - ```plain - helm repo update - ``` - -1. Install the new version of cert-manager - - ```plain - helm install --version 0.12.0 --name cert-manager --namespace kube-system jetstack/cert-manager - ``` -{{% /accordion %}} - -{{% accordion id="airgap" label="Upgrading cert-manager in an airgapped environment" %}} -### Prerequisites - -Before you can perform the upgrade, you must prepare your air gapped environment by adding the necessary container images to your private registry and downloading or rendering the required Kubernetes manifest files. - -1. Follow the guide to [Prepare your Private Registry]({{}}/rancher/v2.x/en/installation/air-gap-installation/prepare-private-reg/) with the images needed for the upgrade. - -1. From a system connected to the internet, add the cert-manager repo to Helm - - ```plain - helm repo add jetstack https://charts.jetstack.io - helm repo update - ``` - -1. Fetch the latest cert-manager chart available from the [Helm chart repository](https://hub.helm.sh/charts/jetstack/cert-manager). - - ```plain - helm fetch jetstack/cert-manager --version v0.12.0 - ``` - -1. Render the cert manager template with the options you would like to use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files. - - ```plain - helm template ./cert-manager-v0.12.0.tgz --output-dir . 
\ - --name cert-manager --namespace kube-system \ - --set image.repository=/quay.io/jetstack/cert-manager-controller - --set webhook.image.repository=/quay.io/jetstack/cert-manager-webhook - --set cainjector.image.repository=/quay.io/jetstack/cert-manager-cainjector - ``` - -1. Download the required CRD file for cert-manager - - ```plain - curl -L -o cert-manager/cert-manager-crd.yaml https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml - ``` - -### Install cert-manager - -1. Back up existing resources as a precaution - - ```plain - kubectl get -o yaml --all-namespaces issuer,clusterissuer,certificates > cert-manager-backup.yaml - ``` - -1. Delete the existing cert-manager installation - - ```plain - kubectl -n kube-system delete deployment,sa,clusterrole,clusterrolebinding -l 'app=cert-manager' -l 'chart=cert-manager-v0.5.2' - ``` - -1. Install the CustomResourceDefinition resources separately - - ```plain - kubectl apply -f cert-manager/cert-manager-crd.yaml - ``` - - -1. Install cert-manager - - ```plain - kubectl -n kube-system apply -R -f ./cert-manager - ``` -{{% /accordion %}} - - -Once you’ve installed cert-manager, you can verify it is deployed correctly by checking the kube-system namespace for running pods: - -``` -kubectl get pods --namespace kube-system - -NAME READY STATUS RESTARTS AGE -cert-manager-7cbdc48784-rpgnt 1/1 Running 0 3m -cert-manager-webhook-5b5dd6999-kst4x 1/1 Running 0 3m -cert-manager-cainjector-3ba5cd2bcd-de332x 1/1 Running 0 3m -``` - -If the ‘webhook’ pod (2nd line) is in a ContainerCreating state, it may still be waiting for the Secret to be mounted into the pod. Wait a couple of minutes for this to happen but if you experience problems, please check cert-manager's [troubleshooting](https://docs.cert-manager.io/en/latest/getting-started/troubleshooting.html) guide. - -> **Note:** The above instructions ask you to add the disable-validation label to the kube-system namespace. Here are additional resources that explain why this is necessary: -> -> - [Information on the disable-validation label](https://docs.cert-manager.io/en/latest/tasks/upgrading/upgrading-0.4-0.5.html?highlight=certmanager.k8s.io%2Fdisable-validation#disabling-resource-validation-on-the-cert-manager-namespace) -> - [Information on webhook validation for certificates](https://docs.cert-manager.io/en/latest/getting-started/webhook.html) - -## Cert-Manager API change and data migration - -Cert-manager has deprecated the use of the `certificate.spec.acme.solvers` field and will drop support for it completely in an upcoming release. - -Per the cert-manager documentation, a new format for configuring ACME certificate resources was introduced in v0.8. Specifically, the challenge solver configuration field was moved. Both the old format and new are supported as of v0.9, but support for the old format will be dropped in an upcoming release of cert-manager. The cert-manager documentation strongly recommends that after upgrading you update your ACME Issuer and Certificate resources to the new format. - -Details about the change and migration instructions can be found in the [cert-manager v0.7 to v0.8 upgrade instructions](https://cert-manager.io/docs/installation/upgrading/upgrading-0.7-0.8/). 
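Before upgrading, it can be useful to confirm which API group your existing cert-manager resources actually use. A rough sketch using standard `kubectl` commands; the grep patterns are illustrative:

```
# List the cert-manager CRDs currently installed; pre-v0.11 CRDs live in the
# certmanager.k8s.io group, v0.11 and later CRDs in the cert-manager.io group
kubectl get crd | grep -E 'certmanager.k8s.io|cert-manager.io'

# Inspect the apiVersion of your existing issuers and certificates
kubectl get issuer,clusterissuer,certificate --all-namespaces -o yaml | grep apiVersion
```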
- -The v0.11 release marks the removal of the v1alpha1 API that was used in previous versions of cert-manager, as well as our API group changing to be `cert-manager.io` instead of `certmanager.k8s.io.` - -We have also removed support for the old configuration format that was deprecated in the v0.8 release. This means you must transition to using the new solvers style configuration format for your ACME issuers before upgrading to v0.11. For more information, see the [upgrading to v0.8 guide](https://cert-manager.io/docs/installation/upgrading/upgrading-0.7-0.8/). - -Details about the change and migration instructions can be found in the [cert-manager v0.10 to v0.11 upgrade instructions](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/). - -For information on upgrading from all other versions of cert-manager, refer to the [official documentation](https://cert-manager.io/docs/installation/upgrading/). diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/_index.md index e417c9260a5..b9e2266abf8 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/_index.md @@ -1,16 +1,20 @@ --- title: Other Installation Methods -weight: 4 +weight: 3 --- -### Docker Installations - -The [single-node Docker installation]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) is for Rancher users that are wanting to test out Rancher. Instead of running on a Kubernetes cluster using Helm, you install the Rancher server component on a single node using a `docker run` command. - -Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. - ### Air Gapped Installations Follow [these steps]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap) to install the Rancher server in an air gapped environment. -An air gapped environment could be where Rancher server will be installed offline, behind a firewall, or behind a proxy. \ No newline at end of file +An air gapped environment could be where Rancher server will be installed offline, behind a firewall, or behind a proxy. + +### Docker Installations + +The [single-node Docker installation]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) is for Rancher users that are wanting to test out Rancher. Instead of running on a Kubernetes cluster using Helm, you install the Rancher server component on a single node using a `docker run` command. + +The Docker installation is for development and testing environments only. + +Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. + +When the Rancher server is installed with Docker, it cannot be migrated to a Kubernetes cluster for a production environment. 
diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/_index.md index fb264adc5e9..2c4d94963b6 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/_index.md @@ -1,6 +1,6 @@ --- title: Installing Rancher in an Air Gapped Environment -weight: 3 +weight: 1 aliases: - /rancher/v2.x/en/installation/air-gap-installation/ - /rancher/v2.x/en/installation/air-gap-high-availability/ diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md index 80281d3e061..a7393895f68 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md @@ -12,6 +12,10 @@ This section is about how to deploy Rancher for your air gapped environment. An > **Note:** These installation instructions assume you are using Helm 3. For migration of installs started with Helm 2, refer to the official [Helm 2 to 3 migration docs.](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) This [section]({{}}/rancher/v2.x/en/installation/options/air-gap-helm2) provides a copy of the older air gap installation instructions for Rancher installed on Kubernetes with Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible. +### Privileged Access for Rancher v2.5+ + +When the Rancher server is deployed in the Docker container, a local Kubernetes cluster is installed within the container for Rancher to use. Because many features of Rancher run as deployments, and privileged mode is required to run containers within containers, you will need to install Rancher with the `--privileged` option. + {{% tabs %}} {{% tab "Kubernetes Install (Recommended)" %}} @@ -252,11 +256,14 @@ Log into your Linux host, and then run the installation command below. When ente | `` | Your private registry URL and port. | | `` | The release tag of the [Rancher version]({{}}/rancher/v2.x/en/installation/options/server-tags/) that you want to install. | +As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5) + ``` docker run -d --restart=unless-stopped \ -p 80:80 -p 443:443 \ -e CATTLE_SYSTEM_DEFAULT_REGISTRY= \ # Set a default private registry to be used in Rancher -e CATTLE_SYSTEM_CATALOG=bundled \ #Available as of v2.3.0, use the packaged Rancher system charts + --privileged \ /rancher/rancher: ``` @@ -282,6 +289,8 @@ After creating your certificate, log into your Linux host, and then run the inst | `` | Your private registry URL and port. | | `` | The release tag of the [Rancher version]({{}}/rancher/v2.x/en/installation/options/server-tags/) that you want to install. 
| +As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5) + ``` docker run -d --restart=unless-stopped \ -p 80:80 -p 443:443 \ @@ -290,6 +299,7 @@ docker run -d --restart=unless-stopped \ -v //:/etc/rancher/ssl/cacerts.pem \ -e CATTLE_SYSTEM_DEFAULT_REGISTRY= \ # Set a default private registry to be used in Rancher -e CATTLE_SYSTEM_CATALOG=bundled \ #Available as of v2.3.0, use the packaged Rancher system charts + --privileged \ /rancher/rancher: ``` @@ -312,6 +322,8 @@ After obtaining your certificate, log into your Linux host, and then run the ins > **Note:** Use the `--no-cacerts` as argument to the container to disable the default CA certificate generated by Rancher. +As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5) + ``` docker run -d --restart=unless-stopped \ -p 80:80 -p 443:443 \ @@ -320,6 +332,7 @@ docker run -d --restart=unless-stopped \ -v //:/etc/rancher/ssl/key.pem \ -e CATTLE_SYSTEM_DEFAULT_REGISTRY= \ # Set a default private registry to be used in Rancher -e CATTLE_SYSTEM_CATALOG=bundled \ #Available as of v2.3.0, use the packaged Rancher system charts + --privileged /rancher/rancher: ``` diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/_index.md index 6cef213e1ae..a97cc6bea3c 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/_index.md @@ -16,7 +16,11 @@ Populating the private registry with images is the same process for installing R The steps in this section differ depending on whether or not you are planning to use Rancher to provision a downstream cluster with Windows nodes or not. By default, we provide the steps of how to populate your private registry assuming that Rancher will provision downstream Kubernetes clusters with only Linux nodes. But if you plan on provisioning any [downstream Kubernetes clusters using Windows nodes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/), there are separate instructions to support the images needed. -> **Prerequisites:** You must have a [private registry](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry) available to use. +> **Prerequisites:** +> +> You must have a [private registry](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry) available to use. +> +> If the registry has certs, follow [this K3s documentation](https://rancher.com/docs/k3s/latest/en/installation/private-registry/) about adding a private registry. The certs and registry configuration files need to be mounted into the Rancher container. 
{{% tabs %}} {{% tab "Linux Only Clusters" %}} diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md index 97e9673c322..1136fac80da 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md @@ -1,20 +1,26 @@ --- title: Installing Rancher on a Single Node Using Docker description: For development and testing environments only, use a Docker install. Install Docker on a single Linux host, and deploy Rancher with a single Docker container. -weight: 1 +weight: 2 aliases: - /rancher/v2.x/en/installation/single-node-install/ - /rancher/v2.x/en/installation/single-node - /rancher/v2.x/en/installation/other-installation-methods/single-node --- -For development and testing environments only, Rancher can be installed by running a single Docker container. +> The Docker installation is for development and testing environments only. When the Rancher server is installed with Docker, it cannot be migrated to a Kubernetes cluster for a production environment. + +Rancher can be installed by running a single Docker container. In this installation scenario, you'll install Docker on a single Linux host, and then deploy Rancher on your host using a single Docker container. > **Want to use an external load balancer?** > See [Docker Install with an External Load Balancer]({{}}/rancher/v2.x/en/installation/options/single-node-install-external-lb) instead. +### Privileged Access for Rancher v2.5+ + +When the Rancher server is deployed in the Docker container, a local Kubernetes cluster is installed within the container for Rancher to use. Because many features of Rancher run as deployments, and privileged mode is required to run containers within containers, you will need to install Rancher with the `--privileged` option. + # Requirements for OS, Docker, Hardware, and Networking Make sure that your node fulfills the general [installation requirements.]({{}}/rancher/v2.x/en/installation/requirements/) @@ -42,9 +48,12 @@ If you are installing Rancher in a development or testing environment where iden Log into your Linux host, and then run the minimum installation command below. +As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5) + ```bash docker run -d --restart=unless-stopped \ -p 80:80 -p 443:443 \ + --privileged \ rancher/rancher:latest ``` @@ -66,12 +75,15 @@ After creating your certificate, run the Docker command below to install Rancher | `` | The path to the private key for your certificate. | | `` | The path to the certificate authority's certificate. | +As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5) + ```bash docker run -d --restart=unless-stopped \ -p 80:80 -p 443:443 \ -v //:/etc/rancher/ssl/cert.pem \ -v //:/etc/rancher/ssl/key.pem \ -v //:/etc/rancher/ssl/cacerts.pem \ + --privileged \ rancher/rancher:latest ``` @@ -95,12 +107,15 @@ After obtaining your certificate, run the Docker command below. | `` | The path to your full certificate chain. | | `` | The path to the private key for your certificate. 
| +As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5) + ```bash docker run -d --restart=unless-stopped \ -p 80:80 -p 443:443 \ -v //:/etc/rancher/ssl/cert.pem \ -v //:/etc/rancher/ssl/key.pem \ rancher/rancher:latest \ + --privileged \ --no-cacerts ``` @@ -122,10 +137,13 @@ After you fulfill the prerequisites, you can install Rancher using a Let's Encry | ----------------- | ------------------- | | `` | Your domain address | +As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5) + ``` docker run -d --restart=unless-stopped \ -p 80:80 -p 443:443 \ rancher/rancher:latest \ + --privileged \ --acme-domain ``` diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/advanced/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/advanced/_index.md index 3aa2362b502..8eefb2db503 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/advanced/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/advanced/_index.md @@ -25,11 +25,14 @@ Use the command example to start a Rancher container with your private CA certif The example below is based on having the CA root certificates in the `/host/certs` directory on the host and mounting this directory on `/container/certs` inside the Rancher container. +As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) + ``` docker run -d --restart=unless-stopped \ -p 80:80 -p 443:443 \ -v /host/certs:/container/certs \ -e SSL_CERT_DIR="/container/certs" \ + --privileged \ rancher/rancher:latest ``` @@ -41,11 +44,14 @@ The API Audit Log writes to `/var/log/auditlog` inside the rancher container by See [API Audit Log]({{}}/rancher/v2.x/en/installation/api-auditing) for more information and options. +As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) + ``` docker run -d --restart=unless-stopped \ -p 80:80 -p 443:443 \ -v /var/log/rancher/auditlog:/var/log/auditlog \ -e AUDIT_LEVEL=1 \ + --privileged \ rancher/rancher:latest ``` @@ -59,9 +65,12 @@ To set a different TLS configuration, you can use the `CATTLE_TLS_MIN_VERSION` a docker run -d --restart=unless-stopped \ -p 80:80 -p 443:443 \ -e CATTLE_TLS_MIN_VERSION="1.0" \ + --privileged \ rancher/rancher:latest ``` +As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) + See [TLS settings]({{}}/rancher/v2.x/en/admin-settings/tls-settings) for more information and options. 
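To confirm the effective TLS configuration from a client, you can probe the endpoint with `openssl s_client`. A quick sketch, with `RANCHER_FQDN` as a placeholder:

```
# Attempt a TLS 1.2 handshake and report the negotiated protocol and cipher
openssl s_client -connect RANCHER_FQDN:443 -tls1_2 </dev/null 2>/dev/null | grep -iE 'protocol|cipher'

# If the minimum version has been raised above 1.0, a TLS 1.0 handshake should now be rejected
openssl s_client -connect RANCHER_FQDN:443 -tls1 </dev/null
```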
### Air Gap @@ -89,5 +98,8 @@ To change the host ports mapping, replace the following part `-p 80:80 -p 443:44 ``` docker run -d --restart=unless-stopped \ -p 8080:80 -p 8443:443 \ + --privileged \ rancher/rancher:latest ``` + +As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/proxy/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/proxy/_index.md index da3e4484bfe..100acc4f282 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/proxy/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/proxy/_index.md @@ -35,5 +35,8 @@ docker run -d --restart=unless-stopped \ -e HTTP_PROXY="http://192.168.10.1:3128" \ -e HTTPS_PROXY="http://192.168.10.1:3128" \ -e NO_PROXY="localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.10.0/24,example.com" \ + --privileged \ rancher/rancher:latest ``` + +As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) diff --git a/content/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-rollbacks/_index.md similarity index 93% rename from content/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/_index.md rename to content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-rollbacks/_index.md index 4705d65d1d8..c87b0571c8b 100644 --- a/content/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-rollbacks/_index.md @@ -1,8 +1,9 @@ --- -title: Docker Rollback +title: Rolling Back Rancher Installed with Docker weight: 1015 aliases: - /rancher/v2.x/en/upgrades/single-node-rollbacks + - /rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks --- If a Rancher upgrade does not complete successfully, you'll have to roll back to your Rancher setup that you were using before [Docker Upgrade]({{}}/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade). Rolling back restores: @@ -73,8 +74,13 @@ If you have issues upgrading Rancher, roll it back to its latest known healthy s 1. Start a new Rancher Server container with the `` tag [placeholder](#before-you-start) pointing to the data container. ``` docker run -d --volumes-from rancher-data \ - --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher: + --restart=unless-stopped \ + -p 80:80 -p 443:443 \ + --privileged \ + rancher/rancher: ``` + As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) + >**Note:** _Do not_ stop the rollback after initiating it, even if the rollback process seems longer than expected. Stopping the rollback may result in database issues during future upgrades. 1. Wait a few moments and then open Rancher in a web browser. Confirm that the rollback succeeded and that your data is restored. 
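Besides checking in the browser, a couple of quick command-line checks can confirm the rollback before you inspect your data. A small sketch, with placeholder container and host names:

```
# Confirm the container is running the expected pre-upgrade image tag
docker ps --filter "name=<RANCHER_CONTAINER_NAME>" --format "{{.Image}}\t{{.Status}}"

# Rancher's health endpoint returns "pong" once the server is serving requests
curl -sk https://<RANCHER_FQDN>/ping
```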
diff --git a/content/rancher/v2.x/en/upgrades/upgrades/single-node/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/_index.md similarity index 95% rename from content/rancher/v2.x/en/upgrades/upgrades/single-node/_index.md rename to content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/_index.md index acba3d760b3..542ddfa5854 100644 --- a/content/rancher/v2.x/en/upgrades/upgrades/single-node/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/_index.md @@ -4,6 +4,7 @@ weight: 1010 aliases: - /rancher/v2.x/en/upgrades/single-node-upgrade/ - /rancher/v2.x/en/upgrades/upgrades/single-node-air-gap-upgrade + - /rancher/v2.x/en/upgrades/upgrades/single-node --- The following instructions will guide you through upgrading a Rancher server that was installed with Docker. @@ -135,9 +136,12 @@ Placeholder | Description docker run -d --volumes-from rancher-data \ --restart=unless-stopped \ -p 80:80 -p 443:443 \ + --privileged \ rancher/rancher: ``` +As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) + {{% /accordion %}} {{% accordion id="option-b" label="Option B-Bring Your Own Certificate: Self-Signed" %}} @@ -161,9 +165,12 @@ docker run -d --volumes-from rancher-data \ -v //:/etc/rancher/ssl/cert.pem \ -v //:/etc/rancher/ssl/key.pem \ -v //:/etc/rancher/ssl/cacerts.pem \ + --privileged \ rancher/rancher: ``` +As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) + {{% /accordion %}} {{% accordion id="option-c" label="Option C-Bring Your Own Certificate: Signed by Recognized CA" %}} @@ -185,8 +192,11 @@ docker run -d --volumes-from rancher-data \ -v //:/etc/rancher/ssl/cert.pem \ -v //:/etc/rancher/ssl/key.pem \ rancher/rancher: \ + --privileged \ --no-cacerts ``` + +As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) {{% /accordion %}} {{% accordion id="option-d" label="Option D-Let's Encrypt Certificate" %}} @@ -208,10 +218,13 @@ Placeholder | Description docker run -d --volumes-from rancher-data \ --restart=unless-stopped \ -p 80:80 -p 443:443 \ + --privileged \ rancher/rancher: \ --acme-domain ``` +As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) + {{% /accordion %}} {{% /tab %}} @@ -238,8 +251,11 @@ Placeholder | Description -p 80:80 -p 443:443 \ -e CATTLE_SYSTEM_DEFAULT_REGISTRY= \ # Set a default private registry to be used in Rancher -e CATTLE_SYSTEM_CATALOG=bundled \ #Available as of v2.3.0, use the packaged Rancher system charts + --privileged \ /rancher/rancher: ``` + +As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) {{% /accordion %}} {{% accordion id="option-b" label="Option B-Bring Your Own Certificate: Self-Signed" %}} @@ -265,8 +281,10 @@ docker run -d --restart=unless-stopped \ -v //:/etc/rancher/ssl/cacerts.pem \ -e CATTLE_SYSTEM_DEFAULT_REGISTRY= \ # Set a default private registry to be used in Rancher -e CATTLE_SYSTEM_CATALOG=bundled \ #Available as of v2.3.0, use the packaged Rancher system charts + --privileged \ /rancher/rancher: ``` +As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) {{% /accordion %}} {{% accordion id="option-c" label="Option C-Bring Your Own Certificate: Signed by Recognized CA" %}} @@ -294,8 +312,10 @@ docker run -d 
--volumes-from rancher-data \ -v //:/etc/rancher/ssl/key.pem \ -e CATTLE_SYSTEM_DEFAULT_REGISTRY= \ # Set a default private registry to be used in Rancher -e CATTLE_SYSTEM_CATALOG=bundled \ #Available as of v2.3.0, use the packaged Rancher system charts + --privileged \ /rancher/rancher: ``` +As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) {{% /accordion %}} {{% /tab %}} {{% /tabs %}} diff --git a/content/rancher/v2.x/en/installation/requirements/_index.md b/content/rancher/v2.x/en/installation/requirements/_index.md index eb2463fcc02..4c479634852 100644 --- a/content/rancher/v2.x/en/installation/requirements/_index.md +++ b/content/rancher/v2.x/en/installation/requirements/_index.md @@ -47,7 +47,7 @@ All supported operating systems are 64-bit x86. The `ntp` (Network Time Protocol) package should be installed. This prevents errors with certificate validation that can occur when the time is not synchronized between the client and server. -Some distributions of Linux may have default firewall rules that block communication with Helm. This [how-to guide]({{}}/rancher/v2.x/en/installation/options/firewall) shows how to check the default firewall rules for Oracle Linux and how to open the ports with `firewalld` if necessary. +Some distributions of Linux may have default firewall rules that block communication with Helm. We recommend disabling firewalld. For Kubernetes 1.19, firewalld must be turned off. If you plan to run Rancher on ARM64, see [Running on ARM64 (Experimental).]({{}}/rancher/v2.x/en/installation/options/arm64-platform/) diff --git a/content/rancher/v2.x/en/installation/options/_index.md b/content/rancher/v2.x/en/installation/resources/_index.md similarity index 56% rename from content/rancher/v2.x/en/installation/options/_index.md rename to content/rancher/v2.x/en/installation/resources/_index.md index be369e77a78..90156f1f1e7 100644 --- a/content/rancher/v2.x/en/installation/options/_index.md +++ b/content/rancher/v2.x/en/installation/resources/_index.md @@ -1,8 +1,22 @@ --- -title: Resources, References, and Advanced Options -weight: 5 +title: Resources +weight: 4 --- +### Docker Installations + +The [single-node Docker installation]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) is for Rancher users who want to test out Rancher. Instead of running on a Kubernetes cluster using Helm, you install the Rancher server component on a single node using a `docker run` command. + +Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. + +### Air Gapped Installations + +Follow [these steps]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap) to install the Rancher server in an air gapped environment. + +An air gapped environment is one where the Rancher server is installed offline, behind a firewall, or behind a proxy. + +### Advanced Options + When installing Rancher, there are several advanced options that can be enabled during installation. Within each install guide, these options are presented.
Learn more about these options: | Advanced Option | Available as of | diff --git a/content/rancher/v2.x/en/installation/resources/advanced/_index.md b/content/rancher/v2.x/en/installation/resources/advanced/_index.md new file mode 100644 index 00000000000..3c7b9e72cd6 --- /dev/null +++ b/content/rancher/v2.x/en/installation/resources/advanced/_index.md @@ -0,0 +1,6 @@ +--- +title: Advanced +weight: 5 +--- + +The documents in this section contain resources for less common use cases. \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/options/api-audit-log/_index.md b/content/rancher/v2.x/en/installation/resources/advanced/api-audit-log/_index.md similarity index 99% rename from content/rancher/v2.x/en/installation/options/api-audit-log/_index.md rename to content/rancher/v2.x/en/installation/resources/advanced/api-audit-log/_index.md index e465c60eb6c..d68051fc7bb 100644 --- a/content/rancher/v2.x/en/installation/options/api-audit-log/_index.md +++ b/content/rancher/v2.x/en/installation/resources/advanced/api-audit-log/_index.md @@ -1,8 +1,6 @@ --- title: Enabling the API Audit Log to Record System Events -weight: 10000 -aliases: - - /rancher/v2.x/en/admin-settings/api-auditing/ +weight: 4 --- You can enable the API audit log to record the sequence of system events initiated by individual users. You can know what happened, when it happened, who initiated it, and what cluster it affected. When you enable this feature, all requests to the Rancher API and all responses from it are written to a log. diff --git a/content/rancher/v2.x/en/installation/options/arm64-platform/_index.md b/content/rancher/v2.x/en/installation/resources/advanced/arm64-platform/_index.md similarity index 91% rename from content/rancher/v2.x/en/installation/options/arm64-platform/_index.md rename to content/rancher/v2.x/en/installation/resources/advanced/arm64-platform/_index.md index 762d3c1541d..b0a7f913c38 100644 --- a/content/rancher/v2.x/en/installation/options/arm64-platform/_index.md +++ b/content/rancher/v2.x/en/installation/resources/advanced/arm64-platform/_index.md @@ -1,12 +1,8 @@ --- title: Running on ARM64 (Experimental) -weight: 7600 -aliases: - - /rancher/v2.x/en/installation/arm64-platform +weight: 3 --- -_Available as of v2.2.0_ - > **Important:** > > Running on an ARM64 platform is currently an experimental feature and is not yet officially supported in Rancher. Therefore, we do not recommend using ARM64 based nodes in a production environment. diff --git a/content/rancher/v2.x/en/installation/options/etcd/_index.md b/content/rancher/v2.x/en/installation/resources/advanced/etcd/_index.md similarity index 99% rename from content/rancher/v2.x/en/installation/options/etcd/_index.md rename to content/rancher/v2.x/en/installation/resources/advanced/etcd/_index.md index bbfefa11a28..a605c7343aa 100644 --- a/content/rancher/v2.x/en/installation/options/etcd/_index.md +++ b/content/rancher/v2.x/en/installation/resources/advanced/etcd/_index.md @@ -1,7 +1,6 @@ --- title: Tuning etcd for Large Installations -weight: 3 -aliases: +weight: 2 --- When running larger Rancher installations with 15 or more clusters, it is recommended to increase the etcd keyspace from the default of 2 GB. The maximum setting is 8 GB, and the host should have enough RAM to keep the entire dataset in memory. When increasing this value, you should also increase the size of the host.
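As a sketch of what this can look like on an RKE-provisioned cluster (assuming etcd's standard `quota-backend-bytes` flag; the 6 GB value is only an example), the keyspace quota can be raised through `extra_args` in `cluster.yml`:

```
# Hypothetical cluster.yml excerpt: raise the etcd keyspace quota to 6 GB
services:
  etcd:
    extra_args:
      quota-backend-bytes: "6442450944"
```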
The keyspace size can also be adjusted in smaller installations if you anticipate a high rate of change of pods during the garbage collection interval. diff --git a/content/rancher/v2.x/en/installation/options/firewall/_index.md b/content/rancher/v2.x/en/installation/resources/advanced/firewall/_index.md similarity index 97% rename from content/rancher/v2.x/en/installation/options/firewall/_index.md rename to content/rancher/v2.x/en/installation/resources/advanced/firewall/_index.md index 601d8c046ee..69cb99eeff4 100644 --- a/content/rancher/v2.x/en/installation/options/firewall/_index.md +++ b/content/rancher/v2.x/en/installation/resources/advanced/firewall/_index.md @@ -1,8 +1,10 @@ --- title: Opening Ports with firewalld -weight: 12000 +weight: 1 --- +> We recommend disabling firewalld. For Kubernetes 1.19, firewalld must be turned off. + Some distributions of Linux [derived from RHEL,](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Rebuilds) including Oracle Linux, may have default firewall rules that block communication with Helm. For example, one Oracle Linux image in AWS has REJECT rules that stop Helm from communicating with Tiller: diff --git a/content/rancher/v2.x/en/installation/options/chart-options/_index.md b/content/rancher/v2.x/en/installation/resources/chart-options/_index.md similarity index 90% rename from content/rancher/v2.x/en/installation/options/chart-options/_index.md rename to content/rancher/v2.x/en/installation/resources/chart-options/_index.md index 5cec85b61e6..9a9f4b8ce6c 100644 --- a/content/rancher/v2.x/en/installation/options/chart-options/_index.md +++ b/content/rancher/v2.x/en/installation/resources/chart-options/_index.md @@ -1,8 +1,20 @@ --- -title: Helm Chart Options for Kubernetes Installations -weight: 276 +title: Helm Chart Options +weight: 2 --- +- [Common Options](#common-options) +- [Advanced Options](#advanced-options) +- [API Audit Log](#api-audit-log) +- [Setting Extra Environment Variables](#setting-extra-environment-variables) +- [TLS Settings](#tls-settings) +- [Import local Cluster](#import-local-cluster) +- [Customizing your Ingress](#customizing-your-ingress) +- [HTTP Proxy](#http-proxy) +- [Additional Trusted CAs](#additional-trusted-cas) +- [Private Registry and Air Gap Installs](#private-registry-and-air-gap-installs) +- [External TLS Termination](#external-tls-termination) + ### Common Options | Option | Default Value | Description | @@ -20,7 +32,7 @@ weight: 276 | Option | Default Value | Description | | ------------------------------ | ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | | `additionalTrustedCAs` | false | `bool` - See [Additional Trusted CAs](#additional-trusted-cas) | -| `addLocal` | "auto" | `string` - Have Rancher detect and import the "local" Rancher server cluster [Import "local Cluster](#import-local-cluster) | +| `addLocal` | "true" | `string` - Have Rancher detect and import the "local" Rancher server cluster. For more information, see [Import local Cluster.](#import-local-cluster) _Note: This option is no longer available in v2.5.0. 
Consider using the `restrictedAdmin` option to prevent users from modifying the local cluster._ | | `antiAffinity` | "preferred" | `string` - AntiAffinity rule for Rancher pods - "preferred, required" | | `replicas` | 3 | `int` - Number of replicas of Rancher pods | | `auditLog.destination` | "sidecar" | `string` - Stream to sidecar container console or hostPath volume - "sidecar, hostPath" | @@ -44,8 +56,9 @@ weight: 276 | `rancherImageTag` | same as chart version | `string` - rancher/rancher image tag | | `rancherImagePullPolicy` | "IfNotPresent" | `string` - Override imagePullPolicy for rancher server images - "Always", "Never", "IfNotPresent" | | `tls` | "ingress" | `string` - See [External TLS Termination](#external-tls-termination) for details. - "ingress, external" | -| `systemDefaultRegistry` | "" | `string` - private registry to be used for all system Docker images, e.g., http://registry.example.com/ _Available as of v2.3.0_ | -| `useBundledSystemChart` | `false` | `bool` - select to use the system-charts packaged with Rancher server. This option is used for air gapped installations. _Available as of v2.3.0_ | +| `systemDefaultRegistry` | "" | `string` - private registry to be used for all system Docker images, e.g., http://registry.example.com/ | +| `useBundledSystemChart` | `false` | `bool` - select to use the system-charts packaged with Rancher server. This option is used for air gapped installations. | +| `restrictedAdmin` | `false` | _Available in Rancher v2.5_ When this option is set to true, the initial Rancher user has restricted access to the local Kubernetes cluster to prevent privilege escalation. For more information, see the section about the [restricted-admin role.]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#restricted-admin) |
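As a hedged example of how a few of the options above can be passed at install time (the `rancher-latest` repo name and the hostname below are placeholders, not prescriptions):

```
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set replicas=3 \
  --set restrictedAdmin=true
```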
@@ -65,8 +78,6 @@ Set the `auditLog.destination` to `hostPath` to forward logs to volume shared wi ### Setting Extra Environment Variables -_Available as of v2.2.0_ - You can set extra environment variables for Rancher server using `extraEnv`. This list uses the same `name` and `value` keys as the container manifest definitions. Remember to quote the values. ```plain @@ -74,9 +85,7 @@ You can set extra environment variables for Rancher server using `extraEnv`. Thi --set 'extraEnv[0].value=1.0' ``` -### TLS settings - -_Available as of v2.2.0_ +### TLS Settings To set a different TLS configuration, you can use the `CATTLE_TLS_MIN_VERSION` and `CATTLE_TLS_CIPHERS` environment variables. For example, to configure TLS 1.0 as minimum accepted TLS version: @@ -91,9 +100,11 @@ See [TLS settings]({{}}/rancher/v2.x/en/admin-settings/tls-settings) fo By default, the Rancher server will detect and import the `local` cluster it's running on. A user with access to the `local` cluster will essentially have "root" access to all the clusters managed by the Rancher server. +> **Important:** If you turn addLocal off, most Rancher v2.5 features won't work, including the EKS provisioner. + If this is a concern in your environment, you can set this option to "false" on your initial install. -> Note: This option is only effective on the initial Rancher install. See [Issue 16522](https://github.com/rancher/rancher/issues/16522) for more information. +This option is only effective on the initial Rancher install. See [Issue 16522](https://github.com/rancher/rancher/issues/16522) for more information. ```plain --set addLocal="false" @@ -109,8 +120,6 @@ Example on setting a custom certificate issuer: --set ingress.extraAnnotations.'certmanager\.k8s\.io/cluster-issuer'=ca-key-pair ``` -_Available as of v2.0.15, v2.1.10 and v2.2.4_ Example on setting a static proxy header with `ingress.configurationSnippet`. This value is parsed like a template so variables can be used. ```plain diff --git a/content/rancher/v2.x/en/installation/options/server-tags/_index.md b/content/rancher/v2.x/en/installation/resources/choosing-version/_index.md similarity index 99% rename from content/rancher/v2.x/en/installation/options/server-tags/_index.md rename to content/rancher/v2.x/en/installation/resources/choosing-version/_index.md index 103b487d081..36edeae1f5a 100644 --- a/content/rancher/v2.x/en/installation/options/server-tags/_index.md +++ b/content/rancher/v2.x/en/installation/resources/choosing-version/_index.md @@ -1,8 +1,6 @@ --- title: Choosing a Rancher Version -weight: 230 -aliases: - - /rancher/v2.x/en/installation/server-tags +weight: 1 --- This section describes how to choose a Rancher version. diff --git a/content/rancher/v2.x/en/installation/resources/encryption/_index.md b/content/rancher/v2.x/en/installation/resources/encryption/_index.md new file mode 100644 index 00000000000..345f6b72818 --- /dev/null +++ b/content/rancher/v2.x/en/installation/resources/encryption/_index.md @@ -0,0 +1,6 @@ +--- +title: Encryption +weight: 3 +--- + +The documents in this section contain information about certificate configuration and `cert-manager`.
\ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/options/custom-ca-root-certificate/_index.md b/content/rancher/v2.x/en/installation/resources/encryption/custom-ca-root-certificate/_index.md similarity index 94% rename from content/rancher/v2.x/en/installation/options/custom-ca-root-certificate/_index.md rename to content/rancher/v2.x/en/installation/resources/encryption/custom-ca-root-certificate/_index.md index 69a2a82944e..924bb8a8203 100644 --- a/content/rancher/v2.x/en/installation/options/custom-ca-root-certificate/_index.md +++ b/content/rancher/v2.x/en/installation/resources/encryption/custom-ca-root-certificate/_index.md @@ -1,8 +1,6 @@ --- title: About Custom CA Root Certificates -weight: 1110 -aliases: - - /rancher/v2.x/en/installation/custom-ca-root-certificate/ +weight: 1 --- If you're using Rancher in an internal production environment where you aren't exposing apps publicly, use a certificate from a private certificate authority (CA). diff --git a/content/rancher/v2.x/en/installation/options/tls-secrets/_index.md b/content/rancher/v2.x/en/installation/resources/encryption/tls-secrets/_index.md similarity index 95% rename from content/rancher/v2.x/en/installation/options/tls-secrets/_index.md rename to content/rancher/v2.x/en/installation/resources/encryption/tls-secrets/_index.md index 9693d584b0b..ec2da6815aa 100644 --- a/content/rancher/v2.x/en/installation/options/tls-secrets/_index.md +++ b/content/rancher/v2.x/en/installation/resources/encryption/tls-secrets/_index.md @@ -1,8 +1,6 @@ --- title: Adding TLS Secrets -weight: 276 -aliases: -- /rancher/v2.x/en/installation/k8s-install/helm-rancher/tls-secrets/ +weight: 2 --- Kubernetes will create all the objects and services for Rancher, but it will not become available until we populate the `tls-rancher-ingress` secret in the `cattle-system` namespace with the certificate and key. diff --git a/content/rancher/v2.x/en/installation/options/tls-settings/_index.md b/content/rancher/v2.x/en/installation/resources/encryption/tls-settings/_index.md similarity index 97% rename from content/rancher/v2.x/en/installation/options/tls-settings/_index.md rename to content/rancher/v2.x/en/installation/resources/encryption/tls-settings/_index.md index 2cefd9fc853..fc57ede4a78 100644 --- a/content/rancher/v2.x/en/installation/options/tls-settings/_index.md +++ b/content/rancher/v2.x/en/installation/resources/encryption/tls-settings/_index.md @@ -1,10 +1,8 @@ --- -title: TLS settings -weight: 11000 +title: TLS Settings +weight: 3 --- -_Available as of v2.1.7_ - In Rancher v2.1.7, the default TLS configuration changed to only accept TLS 1.2 and secure TLS cipher suites. TLS 1.3 and TLS 1.3 exclusive cipher suites are not supported. 
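As a minimal sketch of relaxing this default on a Docker-installed server, using the `CATTLE_TLS_MIN_VERSION` variable described under the Helm chart options (the image tag is a placeholder):

```
# Hypothetical example: accept TLS 1.1 and up instead of the 1.2 default
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e CATTLE_TLS_MIN_VERSION="1.1" \
  --privileged \
  rancher/rancher:latest
```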
## Configuring TLS settings diff --git a/content/rancher/v2.x/en/installation/options/upgrading-cert-manager/_index.md b/content/rancher/v2.x/en/installation/resources/encryption/upgrading-cert-manager/_index.md similarity index 99% rename from content/rancher/v2.x/en/installation/options/upgrading-cert-manager/_index.md rename to content/rancher/v2.x/en/installation/resources/encryption/upgrading-cert-manager/_index.md index 2f224f311b3..19ee7d06ee9 100644 --- a/content/rancher/v2.x/en/installation/options/upgrading-cert-manager/_index.md +++ b/content/rancher/v2.x/en/installation/resources/encryption/upgrading-cert-manager/_index.md @@ -1,6 +1,6 @@ --- title: Upgrading Cert-Manager -weight: 2040 +weight: 4 --- Rancher uses cert-manager to automatically generate and renew TLS certificates for HA deployments of Rancher. As of Fall 2019, three important changes to cert-manager are set to occur that you need to take action on if you have an HA deployment of Rancher: diff --git a/content/rancher/v2.x/en/installation/options/helm-version/_index.md b/content/rancher/v2.x/en/installation/resources/helm-version/_index.md similarity index 92% rename from content/rancher/v2.x/en/installation/options/helm-version/_index.md rename to content/rancher/v2.x/en/installation/resources/helm-version/_index.md index 11900d73e09..c4e66f92849 100644 --- a/content/rancher/v2.x/en/installation/options/helm-version/_index.md +++ b/content/rancher/v2.x/en/installation/resources/helm-version/_index.md @@ -1,14 +1,13 @@ --- title: Helm Version Requirements -weight: 400 -aliases: -- /rancher/v2.x/en/installation/helm-version +weight: 3 --- This section contains the requirements for Helm, which is the tool used to install Rancher on a high-availability Kubernetes cluster. > The installation instructions have been updated for Helm 3. For migration of installs started with Helm 2, refer to the official [Helm 2 to 3 Migration Docs.](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) [This section]({{}}/rancher/v2.x/en/installation/options/helm2) provides a copy of the older high-availability Rancher installation instructions that used Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible. +- Helm v3.2.x or higher is required to install or upgrade Rancher v2.5. - Helm v2.16.0 or higher is required for Kubernetes v1.16. For the default Kubernetes version, refer to the [release notes](https://github.com/rancher/rke/releases) for the version of RKE that you are using. - Helm v2.15.0 should not be used, because of an issue with converting/comparing numbers. - Helm v2.12.0 should not be used, because of an issue with `cert-manager`. diff --git a/content/rancher/v2.x/en/installation/resources/installing-docker/_index.md b/content/rancher/v2.x/en/installation/resources/installing-docker/_index.md new file mode 100644 index 00000000000..9e040067e5d --- /dev/null +++ b/content/rancher/v2.x/en/installation/resources/installing-docker/_index.md @@ -0,0 +1,20 @@ +--- +title: Installing Docker +weight: 1 +aliases: + - /rancher/v2.x/en/installation/requirements/installing-docker +--- + +Docker is required to be installed on any node that runs the Rancher server. + +There are a couple of options for installing Docker. One option is to refer to the [official Docker documentation](https://docs.docker.com/install/) about how to install Docker on Linux. The steps will vary based on the Linux distribution. 
+ +Another option is to use one of Rancher's Docker installation scripts, which are available for most recent versions of Docker. + +For example, this command could be used to install Docker 19.03 on Ubuntu: + +``` +curl https://releases.rancher.com/install-docker/19.03.sh | sh +``` + +Rancher has installation scripts for every version of upstream Docker that Kubernetes supports. To find out whether a script is available for installing a certain Docker version, refer to this [GitHub repository,](https://github.com/rancher/install-docker) which contains all of Rancher's Docker installation scripts. diff --git a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/_index.md b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/_index.md new file mode 100644 index 00000000000..895a2ba95ae --- /dev/null +++ b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/_index.md @@ -0,0 +1,12 @@ +--- +title: "Don't have a Kubernetes cluster? Try one of these tutorials." +weight: 4 +--- + +This section contains information on how to install a Kubernetes cluster that the Rancher server can be installed on. + +In Rancher prior to v2.4, the Rancher server needed to run on an RKE Kubernetes cluster. + +In Rancher v2.4.x, Rancher needs to run on either an RKE Kubernetes cluster or a K3s Kubernetes cluster. + +In Rancher v2.5, Rancher can run on any Kubernetes cluster. \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/_index.md b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-RKE/_index.md similarity index 65% rename from content/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/_index.md rename to content/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-RKE/_index.md index e5b82679211..99edaa3e1ce 100644 --- a/content/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/_index.md +++ b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-RKE/_index.md @@ -1,17 +1,18 @@ --- -title: 2. Set up a Kubernetes Cluster -description: Learn how to use Rancher Kubernetes Engine (RKE) to install Kubernetes with a high availability etcd configuration. -weight: 190 -aliases: - - /rancher/v2.x/en/installation/ha/kubernetes-rke/ +title: Setting up a High-availability RKE Kubernetes Cluster +shortTitle: Set up RKE Kubernetes +weight: 3 --- -This section describes how to install a Kubernetes cluster according to our [best practices for the Rancher server environment.]({{}}/rancher/v2.x/en/overview/architecture-recommendations/#environment-for-kubernetes-installations) This cluster should be dedicated to run only the Rancher server. + +This section describes how to install a Kubernetes cluster. This cluster should be dedicated to running only the Rancher server. For Rancher prior to v2.4, Rancher should be installed on an RKE Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. As of Rancher v2.4, the Rancher management server can be installed on either an RKE cluster or a K3s Kubernetes cluster. K3s is also a fully certified Kubernetes distribution released by Rancher, but is newer than RKE. We recommend installing Rancher on K3s because K3s is easier to use and more lightweight, with a binary size of less than 100 MB. Note: After Rancher is installed on an RKE cluster, there is no migration path to a K3s setup at this time. +> As of Rancher v2.5, Rancher can run on any Kubernetes cluster, including hosted Kubernetes solutions such as Amazon EKS.
So if you are installing Rancher v2.5, the instructions below represent only one possible way to install Kubernetes. + The Rancher management server can only be run on a Kubernetes cluster in an infrastructure provider where Kubernetes is installed using RKE or K3s. Use of Rancher on hosted Kubernetes providers, such as EKS, is not supported. For systems without direct internet access, refer to [Air Gap: Kubernetes install.]({{}}/rancher/v2.x/en/installation/air-gap-high-availability/) @@ -21,111 +22,10 @@ For systems without direct internet access, refer to [Air Gap: Kubernetes instal > > To set up a single-node RKE cluster, configure only one node in the `cluster.yml` . The single node should have all three roles: `etcd`, `controlplane`, and `worker`. > -> To set up a single-node K3s cluster, run the Rancher server installation command on just one node instead of two nodes. -> > In both single-node setups, Rancher can be installed with Helm on the Kubernetes cluster in the same way that it would be installed on any other cluster. # Installing Kubernetes - -The steps to set up the Kubernetes cluster differ depending on whether you are using RKE or K3s. - -{{% tabs %}} -{{% tab "K3s" %}} - -### 1. Install Kubernetes and Set up the K3s Server - -When running the command to start the K3s Kubernetes API server, you will pass in an option to use the external datastore that you set up earlier. - -1. Connect to one of the Linux nodes that you have prepared to run the Rancher server. -1. On the Linux node, run this command to start the K3s server and connect it to the external datastore: - ``` - curl -sfL https://get.k3s.io | sh -s - server \ - --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name" - ``` - Note: The datastore endpoint can also be passed in using the environment variable `$K3S_DATASTORE_ENDPOINT`. - -1. Repeat the same command on your second K3s server node. - -### 2. Confirm that K3s is Running - -To confirm that K3s has been set up successfully, run the following command on either of the K3s server nodes: -``` -sudo k3s kubectl get nodes -``` - -Then you should see two nodes with the master role: -``` -ubuntu@ip-172-31-60-194:~$ sudo k3s kubectl get nodes -NAME STATUS ROLES AGE VERSION -ip-172-31-60-194 Ready master 44m v1.17.2+k3s1 -ip-172-31-63-88 Ready master 6m8s v1.17.2+k3s1 -``` - -Then test the health of the cluster pods: -``` -sudo k3s kubectl get pods --all-namespaces -``` - -**Result:** You have successfully set up a K3s Kubernetes cluster. - -### 3. Save and Start Using the kubeconfig File - -When you installed K3s on each Rancher server node, a `kubeconfig` file was created on the node at `/etc/rancher/k3s/k3s.yaml`. This file contains credentials for full access to the cluster, and you should save this file in a secure location. - -To use this `kubeconfig` file, - -1. Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool. -2. Copy the file at `/etc/rancher/k3s/k3s.yaml` and save it to the directory `~/.kube/config` on your local machine. -3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your load balancer, referring to port 6443. (The Kubernetes API server will be reached at port 6443, while the Rancher server will be reached at ports 80 and 443.)
Here is an example `k3s.yaml`: - -``` -apiVersion: v1 -clusters: -- cluster: - certificate-authority-data: [CERTIFICATE-DATA] - server: [LOAD-BALANCER-DNS]:6443 # Edit this line - name: default -contexts: -- context: - cluster: default - user: default - name: default -current-context: default -kind: Config -preferences: {} -users: -- name: default - user: - password: [PASSWORD] - username: admin -``` - -**Result:** You can now use `kubectl` to manage your K3s cluster. If you have more than one kubeconfig file, you can specify which one you want to use by passing in the path to the file when using `kubectl`: - -``` -kubectl --kubeconfig ~/.kube/config/k3s.yaml get pods --all-namespaces -``` - -For more information about the `kubeconfig` file, refer to the [K3s documentation]({{}}/k3s/latest/en/cluster-access/) or the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) about organizing cluster access using `kubeconfig` files. - -### 4. Check the Health of Your Cluster Pods - -Now that you have set up the `kubeconfig` file, you can use `kubectl` to access the cluster from your local machine. - -Check that all the required pods and containers are healthy are ready to continue: -``` -ubuntu@ip-172-31-60-194:~$ sudo kubectl get pods --all-namespaces -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system metrics-server-6d684c7b5-bw59k 1/1 Running 0 8d -kube-system local-path-provisioner-58fb86bdfd-fmkvd 1/1 Running 0 8d -kube-system coredns-d798c9dd-ljjnf 1/1 Running 0 8d -``` - -**Result:** You have confirmed that you can access the cluster with `kubectl` and the K3s cluster is running successfully. Now the Rancher management server can be installed on the cluster. -{{% /tab %}} -{{% tab "RKE" %}} - ### Required CLI Tools Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool. @@ -267,7 +167,5 @@ Save a copy of the following files in a secure location: ### Issues or errors? See the [Troubleshooting]({{}}/rancher/v2.x/en/installation/options/troubleshooting/) page. -{{% /tab %}} -{{% /tabs %}} -### [Next: Install Rancher]({{}}/rancher/v2.x/en/installation/k8s-install/helm-rancher/) +### [Next: Install Rancher]({{}}/rancher/v2.x/en/installation/k8s-install/helm-rancher/) \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/how-ha-works/_index.md b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/how-ha-works/_index.md similarity index 99% rename from content/rancher/v2.x/en/installation/how-ha-works/_index.md rename to content/rancher/v2.x/en/installation/resources/k8s-tutorials/how-ha-works/_index.md index 0eeb43cfc3b..9aaf3bb236d 100644 --- a/content/rancher/v2.x/en/installation/how-ha-works/_index.md +++ b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/how-ha-works/_index.md @@ -1,6 +1,6 @@ --- title: About High-availability Installations -weight: 2 +weight: 1 --- We recommend using [Helm,]({{}}/rancher/v2.x/en/overview/architecture/concepts/#about-helm) a Kubernetes package manager, to install Rancher on a dedicated Kubernetes cluster. This is called a high-availability Kubernetes installation because increased availability is achieved by running Rancher on multiple nodes. 
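For orientation, a sketch of what that Helm-based installation typically looks like once the dedicated cluster is running (the repo URL and hostname are common defaults, shown here as assumptions rather than requirements):

```
# Hypothetical install sketch: add the chart repo, then install Rancher into cattle-system
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com
```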
diff --git a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/_index.md b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/_index.md new file mode 100644 index 00000000000..c24bffb45ad --- /dev/null +++ b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/_index.md @@ -0,0 +1,15 @@ +--- +title: Don't have infrastructure for your Kubernetes cluster? Try one of these tutorials. +shortTitle: Infrastructure Tutorials +weight: 5 +--- + +The K3s documentation has: + +- Instructions for [setting up infrastructure for a high-availability K3s Kubernetes cluster with an external DB]({{}}/k3s/latest/en/installation/tutorials/infra-for-ha-with-external-db) +- Instructions for [setting up a high-availability K3s Kubernetes cluster with an external DB for a Rancher server]({{}}/k3s/latest/en/installation/tutorials/ha-with-external-db) + +The RKE documentation has: + +- Instructions for [setting up infrastructure for a high-availability RKE Kubernetes cluster]({{}}/) - Instructions for [setting up a high-availability RKE cluster]() \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/options/ec2-node/_index.md b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md similarity index 97% rename from content/rancher/v2.x/en/installation/options/ec2-node/_index.md rename to content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md index 0df051accda..ecf063ca94e 100644 --- a/content/rancher/v2.x/en/installation/options/ec2-node/_index.md +++ b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md @@ -1,6 +1,6 @@ --- title: Setting up Nodes in Amazon EC2 -weight: 280 +weight: 3 --- In this tutorial, you will learn one way to set up Linux nodes for the Rancher management server. These nodes will fulfill the node requirements for [OS, Docker, hardware, and networking.]({{}}/rancher/v2.x/en/installation/requirements/) @@ -14,7 +14,7 @@ If the Rancher server is installed in a single Docker container, you only need o ### 1. Optional Preparation - **Create IAM role:** To allow Rancher to manipulate AWS resources, such as provisioning new storage or new nodes, you will need to configure Amazon as a cloud provider. There are several things you'll need to do to set up the cloud provider on EC2, but part of this process is setting up an IAM role for the Rancher server nodes. For the full details on setting up the cloud provider, refer to this [page.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) -- **Create security group:** We also recommend setting up a security group for the Rancher nodes that complies with the [port requirements for Rancher nodes.]({{}}/rancher/v2.x/en/installation/requirements/#port-requirements) The exact requirements will differ depending on whether Kubernetes is installed with RKE or K3s. +- **Create security group:** We also recommend setting up a security group for the Rancher nodes that complies with the [port requirements for Rancher nodes.]({{}}/rancher/v2.x/en/installation/requirements/#port-requirements) ### 2.
Provision Instances diff --git a/content/rancher/v2.x/en/installation/options/nginx/_index.md b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/nginx/_index.md similarity index 95% rename from content/rancher/v2.x/en/installation/options/nginx/_index.md rename to content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/nginx/_index.md index 02beb3f87ae..fde4297dfae 100644 --- a/content/rancher/v2.x/en/installation/options/nginx/_index.md +++ b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/nginx/_index.md @@ -1,9 +1,6 @@ --- title: Setting up an NGINX Load Balancer -weight: 270 -aliases: - - /rancher/v2.x/en/installation/ha/create-nodes-lb/nginx - - /rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nginx +weight: 4 --- NGINX will be configured as a Layer 4 load balancer (TCP) that forwards connections to one of your Rancher nodes. diff --git a/content/rancher/v2.x/en/installation/options/nlb/_index.md b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/nlb/_index.md similarity index 99% rename from content/rancher/v2.x/en/installation/options/nlb/_index.md rename to content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/nlb/_index.md index cf863775ba5..7166089c1c5 100644 --- a/content/rancher/v2.x/en/installation/options/nlb/_index.md +++ b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/nlb/_index.md @@ -1,6 +1,6 @@ --- title: Setting up Amazon ELB Network Load Balancer -weight: 277 +weight: 5 aliases: - /rancher/v2.x/en/installation/ha/create-nodes-lb/nlb - /rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nlb diff --git a/content/rancher/v2.x/en/installation/options/rds/_index.md b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/rds/_index.md similarity index 99% rename from content/rancher/v2.x/en/installation/options/rds/_index.md rename to content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/rds/_index.md index 41d7b8eb501..f40b9f96b59 100644 --- a/content/rancher/v2.x/en/installation/options/rds/_index.md +++ b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/rds/_index.md @@ -1,6 +1,6 @@ --- title: Setting up a MySQL Database in Amazon RDS -weight: 290 +weight: 4 --- This tutorial describes how to set up a MySQL database in Amazon's RDS.
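Once the database is up, a quick sanity check is to confirm that your server nodes can reach it with the MySQL client; the endpoint and user below are placeholders for the values RDS gives you:

```
# Hypothetical connectivity check from a server node
mysql -h my-rancher-db.abc123.us-west-2.rds.amazonaws.com -P 3306 -u admin -p
```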
diff --git a/content/rancher/v2.x/en/upgrades/_index.md b/content/rancher/v2.x/en/installation/upgrades-rollbacks/_index.md similarity index 60% rename from content/rancher/v2.x/en/upgrades/_index.md rename to content/rancher/v2.x/en/installation/upgrades-rollbacks/_index.md index 4debea3156e..d235ebcea30 100644 --- a/content/rancher/v2.x/en/upgrades/_index.md +++ b/content/rancher/v2.x/en/installation/upgrades-rollbacks/_index.md @@ -1,18 +1,17 @@ --- title: Upgrades and Rollbacks weight: 150 +aliases: + - /rancher/v2.x/en/upgrades --- ### Upgrading Rancher -- [Upgrades]({{}}/rancher/v2.x/en/upgrades/upgrades/) +To upgrade Rancher, refer to [these instructions.](./upgrades/) ### Rolling Back Unsuccessful Upgrades -In the event that your Rancher Server does not upgrade successfully, you can rollback to your installation prior to upgrade: - -- [Rollbacks for Rancher installed with Docker]({{}}/rancher/v2.x/en/upgrades/single-node-rollbacks) - [Rollbacks for Rancher installed on a Kubernetes cluster]({{}}/rancher/v2.x/en/upgrades/ha-server-rollbacks) +In the event that your Rancher Server does not upgrade successfully, you can roll back to your installation prior to upgrade using these instructions: [Rollbacks for Rancher installed on a Kubernetes cluster](./rollbacks/ha-server-rollbacks) > **Note:** If you are rolling back to versions in either of these scenarios, you must follow some extra [instructions]({{}}/rancher/v2.x/en/upgrades/rollbacks/) in order to get your clusters working. > diff --git a/content/rancher/v2.x/en/upgrades/rollbacks/_index.md b/content/rancher/v2.x/en/installation/upgrades-rollbacks/rollbacks/_index.md similarity index 86% rename from content/rancher/v2.x/en/upgrades/rollbacks/_index.md rename to content/rancher/v2.x/en/installation/upgrades-rollbacks/rollbacks/_index.md index 4a3c79a010a..d595b448dae 100644 --- a/content/rancher/v2.x/en/upgrades/rollbacks/_index.md +++ b/content/rancher/v2.x/en/installation/upgrades-rollbacks/rollbacks/_index.md @@ -1,8 +1,17 @@ --- title: Rollbacks weight: 1010 +aliases: + - /rancher/v2.x/en/upgrades/rollbacks --- -This section contains information about how to rollback your Rancher server to a previous version. + +This section contains information about how to roll back your Rancher server to a previous version. + +If you upgrade Rancher and the upgrade does not complete successfully, you may need to [restore Rancher from backup.](../../backups/restores) + +Restoring a snapshot of the Rancher Server cluster will revert Rancher to the version and state at the time of the snapshot. + +>**Note:** Managed clusters are authoritative for their state. This means restoring the Rancher server will not revert workload deployments or changes made on managed clusters after the snapshot was taken. - [Rolling back Rancher installed with Docker]({{}}/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/) - [Rolling back Rancher installed on a Kubernetes cluster]({{}}/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/) @@ -14,7 +23,6 @@ If you are rolling back to versions in either of these scenarios, you must follo - Rolling back from v2.1.6+ to any version between v2.1.0 - v2.1.5 or v2.0.0 - v2.0.10. - Rolling back from v2.0.11+ to any version between v2.0.0 - v2.0.10. - Because of the changes necessary to address [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321), special steps are necessary if the user wants to roll back to a previous version of Rancher where this vulnerability exists.
The steps are as follows: 1. Record the `serviceAccountToken` for each cluster. To do this, save the following script on a machine with `kubectl` access to the Rancher management plane and execute it. You will need to run these commands on the machine where the Rancher container is running. Ensure `jq` is installed before running the command. The commands will vary depending on how you installed Rancher. diff --git a/content/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/_index.md b/content/rancher/v2.x/en/installation/upgrades-rollbacks/rollbacks/ha-server-rollbacks/_index.md similarity index 57% rename from content/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/_index.md rename to content/rancher/v2.x/en/installation/upgrades-rollbacks/rollbacks/ha-server-rollbacks/_index.md index 2cca7a4b78a..955a0938879 100644 --- a/content/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/_index.md +++ b/content/rancher/v2.x/en/installation/upgrades-rollbacks/rollbacks/ha-server-rollbacks/_index.md @@ -3,11 +3,14 @@ title: Kubernetes Rollback weight: 1025 aliases: - /rancher/v2.x/en/upgrades/ha-server-rollbacks + - /rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks --- If you upgrade Rancher and the upgrade does not complete successfully, you may need to roll back your Rancher Server to its last healthy state. -To restore Rancher follow the procedure detailed here: [Restoring Backups — Kubernetes installs]({{}}/rancher/v2.x/en/backups/restorations/ha-restoration) +To restore Rancher prior to v2.5, follow the procedure detailed here: [Restoring Backups — Kubernetes installs]({{}}/rancher/v2.x/en/backups/restorations/ha-restoration) + +To restore Rancher v2.5, you can use the `rancher-backup` application and restore Rancher from backup according to [this section.]({{}}/rancher/v2.x/en/backups/restoring-rancher/) Restoring a snapshot of the Rancher Server cluster will revert Rancher to the version and state at the time of the snapshot. diff --git a/content/rancher/v2.x/en/upgrades/upgrades/_index.md b/content/rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/_index.md similarity index 94% rename from content/rancher/v2.x/en/upgrades/upgrades/_index.md rename to content/rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/_index.md index 3dad9caeffc..80b054e8eda 100644 --- a/content/rancher/v2.x/en/upgrades/upgrades/_index.md +++ b/content/rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/_index.md @@ -1,11 +1,13 @@ --- title: Upgrades weight: 1005 +aliases: + - /rancher/v2.x/en/upgrades/upgrades --- This section contains information about how to upgrade your Rancher server to a newer version. Regardless of whether you installed in an air gap environment, the upgrade steps mainly depend on whether you have a single-node or high-availability installation of Rancher.
Select from the following options: -- [Upgrading Rancher installed with Docker]({{}}/rancher/v2.x/en/upgrades/upgrades/single-node/) -- [Upgrading Rancher installed on a Kubernetes cluster]({{}}/rancher/v2.x/en/upgrades/upgrades/ha/) +- [Upgrading Rancher installed with Docker]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/upgrades/single-node/) +- [Upgrading Rancher installed on a Kubernetes cluster]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/upgrades/ha/) ### Known Upgrade Issues diff --git a/content/rancher/v2.x/en/upgrades/upgrades/ha/_index.md b/content/rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/ha/_index.md similarity index 95% rename from content/rancher/v2.x/en/upgrades/upgrades/ha/_index.md rename to content/rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/ha/_index.md index 333009b02dc..55e2a6e9e9d 100644 --- a/content/rancher/v2.x/en/upgrades/upgrades/ha/_index.md +++ b/content/rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/ha/_index.md @@ -14,6 +14,7 @@ If you installed Rancher using the RKE Add-on yaml, follow the directions to [mi >**Notes:** > +> - If you are upgrading to Rancher v2.5 from a Rancher server that was started with the Helm chart option `--add-local=false`, you will need to drop that flag when upgrading. Otherwise, the Rancher server will not start. The `restricted-admin` role can be used to continue restricting access to the local cluster. For more information, see [this section.]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#upgrading-from-rancher-with-a-hidden-local-cluster) > - [Let's Encrypt will be blocking cert-manager instances older than 0.8.0 starting November 1st 2019.](https://community.letsencrypt.org/t/blocking-old-cert-manager-versions/98753) Upgrade cert-manager to the latest version by following [these instructions.]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager) > - Helm should be run from the same location as your kubeconfig file (where you run your kubectl commands from. If you installed K8s with RKE, the config will have been created in the directory you ran `rke up` in) or should manually target the kubeconfig for the intended cluster with the `--kubeconfig` tag (see: https://helm.sh/docs/helm/helm/) > - The upgrade instructions assume you are using Helm 3. For migration of installs started with Helm 2, refer to the official [Helm 2 to 3 migration docs.](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) The [Helm 2 upgrade page here]({{}}/rancher/v2.x/en/upgrades/upgrades/ha/helm2) provides a copy of the older upgrade instructions that used Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible. 
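Assuming Helm 3 and the standard chart repository, the upgrade itself usually reduces to a repo refresh plus `helm upgrade`; a minimal sketch (the repo name, namespace, and hostname below are common defaults, adjust them to your install):

```
# Hypothetical upgrade sketch with Helm 3
helm repo update
helm upgrade rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com
```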
diff --git a/content/rancher/v2.x/en/upgrades/upgrades/ha/helm2/_index.md b/content/rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/ha/helm2/_index.md similarity index 100% rename from content/rancher/v2.x/en/upgrades/upgrades/ha/helm2/_index.md rename to content/rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/ha/helm2/_index.md diff --git a/content/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/_index.md b/content/rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/migrating-from-rke-add-on/_index.md similarity index 98% rename from content/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/_index.md rename to content/rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/migrating-from-rke-add-on/_index.md index 77b7a515e7a..c48914cb4da 100644 --- a/content/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/_index.md +++ b/content/rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/migrating-from-rke-add-on/_index.md @@ -4,6 +4,7 @@ weight: 1030 aliases: - /rancher/v2.x/en/upgrades/ha-server-upgrade/ - /rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade/ + - /rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on --- > **Important: RKE add-on install is only supported up to Rancher v2.0.8** diff --git a/content/rancher/v2.x/en/upgrades/upgrades/namespace-migration/_index.md b/content/rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/namespace-migration/_index.md similarity index 99% rename from content/rancher/v2.x/en/upgrades/upgrades/namespace-migration/_index.md rename to content/rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/namespace-migration/_index.md index 56855eb7b5e..01dff5d23b3 100644 --- a/content/rancher/v2.x/en/upgrades/upgrades/namespace-migration/_index.md +++ b/content/rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/namespace-migration/_index.md @@ -1,6 +1,8 @@ --- title: Upgrading to v2.0.7+ — Namespace Migration weight: 1040 +aliases: + - /rancher/v2.x/en/upgrades/upgrades/namespace-migration --- >This section applies only to Rancher upgrades from v2.0.6 or earlier to v2.0.7 or later. Upgrades from v2.0.7 to later versions are unaffected. diff --git a/content/rancher/v2.x/en/istio/_index.md b/content/rancher/v2.x/en/istio/_index.md new file mode 100644 index 00000000000..2a7005bc834 --- /dev/null +++ b/content/rancher/v2.x/en/istio/_index.md @@ -0,0 +1,93 @@ +--- +title: Istio +weight: 15 +--- + +# Istio in Cluster Manager +If you are using a Rancher version from **v2.3.x** to **v2.4.x**, the older way of setting up Istio in the **Cluster Manager** is documented in [this section.]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/) + + +# Istio in Cluster Explorer + [Istio](https://istio.io/) is an open-source tool that makes it easier for DevOps teams to observe, secure, control, and troubleshoot the traffic within a complex network of microservices. + +As a network of microservices changes and grows, the interactions between them can become increasingly difficult to manage and understand. In such a situation, it is useful to have a service mesh as a separate infrastructure layer. Istio's service mesh lets you manipulate traffic between microservices without changing the microservices directly. + +Our integration of Istio is designed so that a Rancher operator, such as an administrator or cluster owner, can deliver Istio to a team of developers.
Then developers can use Istio to enforce security policies, troubleshoot problems, or manage traffic for green/blue deployments, canary deployments, or A/B testing. + +This core service mesh provides features that include but are not limited to the following: + +- **Traffic Management** such as ingress and egress routing, circuit breaking, mirroring. +- **Security** with resources to authenticate and authorize traffic and users, mTLS included. +- **Observability** of logs, metrics, and distributed traffic flows. + +After [setting up Istio]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup), you can leverage Istio's control plane functionality through the Cluster Explorer, `kubectl`, or `istioctl`. + +Rancher's Istio integration comes with a comprehensive visualization aid: + +- **Get the full picture of your microservice architecture with Kiali.** [Kiali](https://www.kiali.io/) provides a diagram that shows the services within a service mesh and how they are connected, including the traffic rates and latencies between them. You can check the health of the service mesh, or drill down to see the incoming and outgoing requests to a single component. + +Istio needs to be set up by a `cluster-admin` before it can be used in a project. + +# What's New in Rancher v2.5 + +The overall architecture of Istio has been simplified. A single component, Istiod, has been created by combining Pilot, Citadel, Galley, and the sidecar injector. Node Agent functionality has also been merged into istio-agent. + +Addons that were previously installed by Istio (cert-manager, Grafana, Jaeger, Kiali, Prometheus, Zipkin) will now need to be installed separately. Istio will support installation of integrations that are from the Istio Project and will maintain compatibility with those that are not. + +A Prometheus integration will still be available through an installation of [Rancher Monitoring]({{}}/rancher/v2.x/en/monitoring-alerting/), or by installing your own Prometheus operator. Rancher's Istio chart will also install Kiali by default to ensure you can get a full picture of your microservices out of the box. + +Istio has migrated away from Helm as a way to install Istio and now provides installation through the istioctl binary or Istio Operator. To ensure the easiest interaction with Istio, Rancher's Istio will maintain a Helm chart that utilizes the istioctl binary to manage your Istio installation. + +This Helm chart will be available via the Apps and Marketplace in the UI. A user with access to the Rancher charts catalog will need to set up Istio before it can be used in the project. + +# Prerequisites + +Before enabling Istio, we recommend that you confirm that your Rancher worker nodes have enough [CPU and memory]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/resources) to run all of the components of Istio. + +# Setup Guide + +Refer to the [setup guide]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup) for instructions on how to set up Istio and use it in a project. + +# Remove Istio + +To remove Istio components from a cluster, namespace, or workload, refer to the section on [uninstalling Istio.]({{}}/rancher/v2.x/en/istio/disabling-istio/) + +# Migrate From Previous Istio Version + +There is no upgrade path for Istio versions earlier than 1.7. + +# Accessing Visualizations + +> By default, only cluster-admins have access to Kiali.
For instructions on how to allow admin, edit, or view roles to access them, refer to [Access to Visualizations.]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/rbac/#access-to-visualizations) + +After Istio is set up in a cluster, Grafana, Prometheus, and Kiali are available in the Rancher UI. + +To access the Grafana and Prometheus visualizations, from the **Cluster Explorer** navigate to the **Monitoring** app overview page, and click on **Grafana** or **Prometheus**. + +To access the Kiali visualization, from the **Cluster Explorer** navigate to the **Istio** app overview page, and click on **Kiali**. From here you can access the **Traffic Graph** tab or the **Traffic Metrics** tab to see network visualizations and metrics. + +By default, all namespaces will be picked up by Prometheus, making data available for Kiali graphs. Refer to [selector/scrape config setup]({{}}/rancher/v2.x/en/istio/setup/enable-istio-in-cluster/#selectors-scrape-configs) if you would like to use a different configuration for Prometheus data scraping. + +Your access to the visualizations depends on your role. Grafana and Prometheus are only available for `cluster-admin` roles. The Kiali UI is available only to `cluster-admin` by default, but `cluster-admin` can allow other roles to access them by editing the Istio values.yaml. + +# Architecture + +Istio installs a service mesh that uses [Envoy](https://www.envoyproxy.io/learn/service-mesh) sidecar proxies to intercept traffic to each workload. These sidecars intercept and manage service-to-service communication, allowing fine-grained observation and control over traffic within the cluster. + +Only workloads that have the Istio sidecar injected can be tracked and controlled by Istio. + +When a namespace has Istio enabled, new workloads deployed in the namespace will automatically have the Istio sidecar. You need to manually enable Istio in preexisting workloads. + +For more information on the Istio sidecar, refer to the [Istio sidecar-injection docs](https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/) and, for more information on Istio's architecture, refer to the [Istio Architecture docs](https://istio.io/latest/docs/ops/deployment/architecture/). + +### Multiple Ingresses + +By default, each Rancher-provisioned cluster has one NGINX ingress controller allowing traffic into the cluster. Istio also installs an ingress gateway by default into the `istio-system` namespace. The result is that your cluster will have two ingresses. + +![In an Istio-enabled cluster, you can have two ingresses: the default Nginx ingress, and the default Istio controller.]({{}}/img/rancher/istio-ingress.svg) + + Additional Istio Ingress gateways can be enabled via the [overlay file]({{}}/rancher/v2.x/en/istio/setup/enable-istio-in-cluster/#overlay-file). + +### Egress Support + +By default, the Egress gateway is disabled, but it can be enabled on install or upgrade through the values.yaml or via the [overlay file]({{}}/rancher/v2.x/en/istio/setup/enable-istio-in-cluster/#overlay-file). \ No newline at end of file diff --git a/content/rancher/v2.x/en/istio/disabling-istio/_index.md b/content/rancher/v2.x/en/istio/disabling-istio/_index.md new file mode 100644 index 00000000000..2c054c556f0 --- /dev/null +++ b/content/rancher/v2.x/en/istio/disabling-istio/_index.md @@ -0,0 +1,31 @@ +--- +title: Disabling Istio +weight: 4 + +--- + +This section describes how to uninstall Istio in a cluster, or how to disable Istio in a namespace or workload.
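For the namespace case, the same effect as the UI steps below can also be achieved with `kubectl` by removing the injection label; the namespace name `demo` is a placeholder:

```
# Hypothetical example: the trailing dash removes the istio-injection label
kubectl label namespace demo istio-injection-
```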
+ +# Uninstall Istio in a Cluster + +To uninstall Istio, + +1. From the **Cluster Explorer,** navigate to **Installed Apps** in **Apps & Marketplace** and locate the `rancher-istio` installation. +1. Select all the apps in the `istio-system` namespace and click **Delete**. + +**Result:** The `rancher-istio` app in the cluster gets removed. The Istio sidecar cannot be deployed on any workloads in the cluster. + +**Note:** You can no longer disable and reenable your Istio installation. If you would like to save your settings for a future install, view and save the individual YAMLs so you can refer back to them or reuse them for future installations. + +# Disable Istio in a Namespace + +1. From the **Cluster Explorer** view, use the side-nav to select the **Namespaces** page. +1. On the **Namespace** page, you will see a list of namespaces. Go to the namespace where you want to disable Istio and select **Edit as Form** or **Edit as YAML**. +1. Remove the `istio-injection=enabled` label from the namespace. +1. Click **Save**. + +**Result:** When workloads are deployed in this namespace, they will not have the Istio sidecar. + +# Remove the Istio Sidecar from a Workload + +Disable Istio in the namespace, then redeploy the workloads in it. They will be deployed without the Istio sidecar. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md b/content/rancher/v2.x/en/istio/legacy/_index.md similarity index 91% rename from content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md rename to content/rancher/v2.x/en/istio/legacy/_index.md index 71313bc816f..9df562214ff 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md +++ b/content/rancher/v2.x/en/istio/legacy/_index.md @@ -1,13 +1,17 @@ --- -title: Istio -weight: 5 +title: Legacy Istio Documentation +shortTitle: Legacy +weight: 1 aliases: - - /rancher/v2.x/en/project-admin/istio/configuring-resource-allocations/_index.md - - /rancher/v2.x/en/cluster-admin/tools/istio/_index.md - - /rancher/v2.x/en/project-admin/istio/index.md + - /rancher/v2.x/en/project-admin/istio/configuring-resource-allocations/ + - /rancher/v2.x/en/cluster-admin/tools/istio/ + - /rancher/v2.x/en/project-admin/istio + - /rancher/v2.x/en/istio/legacy/cluster-istio --- _Available as of v2.3.0_ +> In Rancher 2.5, the Istio application was improved. There are now two ways to enable Istio. The older way is documented in this section, and the new application for Istio is documented in the [dashboard section.]({{}}/rancher/v2.x/en/dashboard/istio) + [Istio](https://istio.io/) is an open-source tool that makes it easier for DevOps teams to observe, control, troubleshoot, and secure the traffic within a complex network of microservices. As a network of microservices changes and grows, the interactions between them can become more difficult to manage and understand. In such a situation, it is useful to have a service mesh as a separate infrastructure layer. Istio's service mesh lets you manipulate traffic between microservices without changing the microservices directly. @@ -32,6 +36,9 @@ Rancher's Istio integration comes with comprehensive visualization aids: - **Gain insights from time series analytics with Grafana dashboards.** [Grafana](https://grafana.com/) is an analytics platform that allows you to query, visualize, alert on and understand the data gathered by Prometheus. - **Write custom queries for time series data with the Prometheus UI.** [Prometheus](https://prometheus.io/) is a systems monitoring and alerting toolkit.
Prometheus scrapes data from your cluster, which is then used by Grafana. A Prometheus UI is also integrated into Rancher, and lets you write custom queries for time series data and see the results in the UI. + +Istio needs to be set up by a Rancher administrator or cluster administrator before it can be used in a project. + # Prerequisites Before enabling Istio, we recommend that you confirm that your Rancher worker nodes have enough [CPU and memory]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/resources) to run all of the components of Istio. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/disabling-istio/_index.md b/content/rancher/v2.x/en/istio/legacy/disabling-istio/_index.md similarity index 94% rename from content/rancher/v2.x/en/cluster-admin/tools/istio/disabling-istio/_index.md rename to content/rancher/v2.x/en/istio/legacy/disabling-istio/_index.md index d2035689626..46821dc798c 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/disabling-istio/_index.md +++ b/content/rancher/v2.x/en/istio/legacy/disabling-istio/_index.md @@ -1,6 +1,8 @@ --- title: Disabling Istio weight: 4 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/disabling-istio --- This section describes how to disable Istio in a cluster, namespace, or workload. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/rbac/_index.md b/content/rancher/v2.x/en/istio/legacy/rbac/_index.md similarity index 98% rename from content/rancher/v2.x/en/cluster-admin/tools/istio/rbac/_index.md rename to content/rancher/v2.x/en/istio/legacy/rbac/_index.md index eb6f3c20fa7..4f11c00f6d9 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/rbac/_index.md +++ b/content/rancher/v2.x/en/istio/legacy/rbac/_index.md @@ -1,6 +1,8 @@ --- title: Role-based Access Control weight: 3 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/rbac --- This section describes the permissions required to access Istio features and how to configure access to the Kiali and Jaeger visualizations. diff --git a/content/rancher/v2.x/en/istio/legacy/release-notes/_index.md b/content/rancher/v2.x/en/istio/legacy/release-notes/_index.md new file mode 100644 index 00000000000..af54839bc48 --- /dev/null +++ b/content/rancher/v2.x/en/istio/legacy/release-notes/_index.md @@ -0,0 +1,20 @@ +--- +title: Release Notes +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/release-notes +--- + + +# Istio 1.5.8 + +### Important note on 1.5.x versions + +When upgrading from any 1.4 version of Istio to any 1.5 version, the Rancher installer will delete several resources in order to complete the upgrade, at which point they will be immediately re-installed. This includes the `istio-reader-service-account`. If your Istio installation is using this service account, be aware that any secrets tied to the service account will be deleted. Most notably this will **break specific [multi-cluster deployments](https://archive.istio.io/v1.4/docs/setup/install/multicluster/)**. Downgrades back to 1.4 are not possible. + +See the official upgrade notes for additional information on the 1.5 release and upgrading from 1.4: https://istio.io/latest/news/releases/1.5.x/announcing-1.5/upgrade-notes/ + +> **Note:** Rancher continues to use the Helm installation method, which produces a different architecture from an istioctl installation.
+ +### Known Issues + +* The Kiali traffic graph is currently not working [#24924](https://github.com/istio/istio/issues/24924) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/resources/_index.md b/content/rancher/v2.x/en/istio/legacy/resources/_index.md similarity index 98% rename from content/rancher/v2.x/en/cluster-admin/tools/istio/resources/_index.md rename to content/rancher/v2.x/en/istio/legacy/resources/_index.md index 9b0dea50923..8a6dafeb684 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/resources/_index.md +++ b/content/rancher/v2.x/en/istio/legacy/resources/_index.md @@ -2,8 +2,9 @@ title: CPU and Memory Allocations weight: 1 aliases: - - /rancher/v2.x/en/project-admin/istio/configuring-resource-allocations/_index.md - - /rancher/v2.x/en/project-admin/istio/config/_index.md + - /rancher/v2.x/en/project-admin/istio/configuring-resource-allocations/ + - /rancher/v2.x/en/project-admin/istio/config/ + - /rancher/v2.x/en/cluster-admin/tools/istio/resources --- _Available as of v2.3.0_ diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/_index.md b/content/rancher/v2.x/en/istio/legacy/setup/_index.md similarity index 97% rename from content/rancher/v2.x/en/cluster-admin/tools/istio/setup/_index.md rename to content/rancher/v2.x/en/istio/legacy/setup/_index.md index da1fbcacc7a..ab842cf4f30 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/_index.md +++ b/content/rancher/v2.x/en/istio/legacy/setup/_index.md @@ -1,6 +1,8 @@ --- title: Setup Guide weight: 2 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup --- This section describes how to enable Istio and start using it in your projects. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/deploy-workloads/_index.md b/content/rancher/v2.x/en/istio/legacy/setup/deploy-workloads/_index.md similarity index 99% rename from content/rancher/v2.x/en/cluster-admin/tools/istio/setup/deploy-workloads/_index.md rename to content/rancher/v2.x/en/istio/legacy/setup/deploy-workloads/_index.md index 8e52d678bbf..e1338e861bf 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/deploy-workloads/_index.md +++ b/content/rancher/v2.x/en/istio/legacy/setup/deploy-workloads/_index.md @@ -1,6 +1,8 @@ --- title: 4. Add Deployments and Services with the Istio Sidecar weight: 4 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup/deploy-workloads --- > **Prerequisite:** To enable Istio for a workload, the cluster and namespace must have Istio enabled. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/_index.md b/content/rancher/v2.x/en/istio/legacy/setup/enable-istio-in-cluster/_index.md similarity index 95% rename from content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/_index.md rename to content/rancher/v2.x/en/istio/legacy/setup/enable-istio-in-cluster/_index.md index a98090a28ca..a17cc358d30 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/_index.md +++ b/content/rancher/v2.x/en/istio/legacy/setup/enable-istio-in-cluster/_index.md @@ -1,6 +1,8 @@ --- title: 1. Enable Istio in the Cluster weight: 1 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster --- This cluster uses the default Nginx controller to allow traffic into the cluster. 
diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/enable-istio-with-psp/_index.md b/content/rancher/v2.x/en/istio/legacy/setup/enable-istio-in-cluster/enable-istio-with-psp/_index.md similarity index 95% rename from content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/enable-istio-with-psp/_index.md rename to content/rancher/v2.x/en/istio/legacy/setup/enable-istio-in-cluster/enable-istio-with-psp/_index.md index f31369cfc61..5a6cc65b8c7 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/enable-istio-with-psp/_index.md +++ b/content/rancher/v2.x/en/istio/legacy/setup/enable-istio-in-cluster/enable-istio-with-psp/_index.md @@ -1,5 +1,7 @@ --- title: Enable Istio with Pod Security Policies +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/enable-istio-with-psp --- >**Note:** The following guide is only for RKE provisioned clusters. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-namespace/_index.md b/content/rancher/v2.x/en/istio/legacy/setup/enable-istio-in-namespace/_index.md similarity index 96% rename from content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-namespace/_index.md rename to content/rancher/v2.x/en/istio/legacy/setup/enable-istio-in-namespace/_index.md index 2f1f6d74786..24c594e80eb 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-namespace/_index.md +++ b/content/rancher/v2.x/en/istio/legacy/setup/enable-istio-in-namespace/_index.md @@ -1,6 +1,8 @@ --- title: 2. Enable Istio in a Namespace weight: 2 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-namespace --- You will need to manually enable Istio in each namespace that you want to be tracked or controlled by Istio. When Istio is enabled in a namespace, the Envoy sidecar proxy will be automatically injected into all new workloads that are deployed in the namespace. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/gateway/_index.md b/content/rancher/v2.x/en/istio/legacy/setup/gateway/_index.md similarity index 98% rename from content/rancher/v2.x/en/cluster-admin/tools/istio/setup/gateway/_index.md rename to content/rancher/v2.x/en/istio/legacy/setup/gateway/_index.md index 47c9ff33812..60f8780fd65 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/gateway/_index.md +++ b/content/rancher/v2.x/en/istio/legacy/setup/gateway/_index.md @@ -1,6 +1,8 @@ --- title: 5. Set up the Istio Gateway weight: 5 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup/gateway --- The gateway to each cluster can have its own port or load balancer, which is unrelated to a service mesh. By default, each Rancher-provisioned cluster has one NGINX ingress controller allowing traffic into the cluster. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/node-selectors/_index.md b/content/rancher/v2.x/en/istio/legacy/setup/node-selectors/_index.md similarity index 96% rename from content/rancher/v2.x/en/cluster-admin/tools/istio/setup/node-selectors/_index.md rename to content/rancher/v2.x/en/istio/legacy/setup/node-selectors/_index.md index 994656361e3..f226580b0de 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/node-selectors/_index.md +++ b/content/rancher/v2.x/en/istio/legacy/setup/node-selectors/_index.md @@ -1,6 +1,8 @@ --- title: 3. 
Select the Nodes Where Istio Components Will be Deployed weight: 3 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup/node-selectors --- > **Prerequisite:** Your cluster needs a worker node that can be designated for Istio. The worker node should meet the [resource requirements.]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/resources) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/set-up-traffic-management/_index.md b/content/rancher/v2.x/en/istio/legacy/setup/set-up-traffic-management/_index.md similarity index 96% rename from content/rancher/v2.x/en/cluster-admin/tools/istio/setup/set-up-traffic-management/_index.md rename to content/rancher/v2.x/en/istio/legacy/setup/set-up-traffic-management/_index.md index 2048e779265..b9d44ea7193 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/set-up-traffic-management/_index.md +++ b/content/rancher/v2.x/en/istio/legacy/setup/set-up-traffic-management/_index.md @@ -1,6 +1,8 @@ --- title: 6. Set up Istio's Components for Traffic Management weight: 6 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup/set-up-traffic-management --- A central advantage of traffic management in Istio is that it allows dynamic request routing. Some common applications for dynamic request routing include canary deployments and blue/green deployments. The two key resources in Istio traffic management are *virtual services* and *destination rules*. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/view-traffic/_index.md b/content/rancher/v2.x/en/istio/legacy/setup/view-traffic/_index.md similarity index 95% rename from content/rancher/v2.x/en/cluster-admin/tools/istio/setup/view-traffic/_index.md rename to content/rancher/v2.x/en/istio/legacy/setup/view-traffic/_index.md index bb6c979e28d..e456dd14b81 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/view-traffic/_index.md +++ b/content/rancher/v2.x/en/istio/legacy/setup/view-traffic/_index.md @@ -1,6 +1,8 @@ --- title: 7. Generate and View Traffic weight: 7 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup/view-traffic --- This section describes how to view the traffic that is being managed by Istio. diff --git a/content/rancher/v2.x/en/istio/rbac/_index.md b/content/rancher/v2.x/en/istio/rbac/_index.md new file mode 100644 index 00000000000..91c56254a8c --- /dev/null +++ b/content/rancher/v2.x/en/istio/rbac/_index.md @@ -0,0 +1,46 @@ +--- +title: Role-based Access Control +weight: 3 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/rbac +--- + +This section describes the permissions required to access Istio features.
 + +The rancher-istio chart installs three `ClusterRoles`. + +## Cluster-Admin Access + +By default, only those with the `cluster-admin` `ClusterRole` can: + +- Install the Istio app in a cluster +- Configure resource allocations for Istio + + +## Admin and Edit Access + +By default, only Admin and Edit roles can: + +- Enable and disable Istio sidecar auto-injection for namespaces +- Add the Istio sidecar to workloads +- View the traffic metrics and traffic graph for the cluster +- Configure Istio's resources (such as the gateway, destination rules, or virtual services) + +## Summary of Default Permissions for Kubernetes Default Roles + +Istio creates three `ClusterRoles` and adds Istio CRD access to the following default K8s `ClusterRoles`: + +ClusterRole created by chart | Default K8s ClusterRole | Rancher Role | + ------------------------------:| ---------------------------:|---------:| + `istio-admin` | admin | Project Owner | + `istio-edit` | edit | Project Member | + `istio-view` | view | Read-only | + +Rancher will continue to use cluster-owner, cluster-member, project-owner, project-member, etc. as role names, but will utilize the default roles to determine access. For each default K8s `ClusterRole` there are different Istio CRD permissions and K8s actions (Create ( C ), Get ( G ), List ( L ), Watch ( W ), Update ( U ), Patch ( P ), Delete ( D ), All ( * )) that can be performed. + + +|CRDs | Admin | Edit | View +|----------------------------| ------| -----| ----- +|
  • `config.istio.io`
    • `adapters`
    • `attributemanifests`
    • `handlers`
    • `httpapispecbindings`
    • `httpapispecs`
    • `instances`
    • `quotaspecbindings`
    • `quotaspecs`
    • `rules`
    • `templates`
| GLW | GLW | GLW +|
  • `networking.istio.io`
    • `destinationrules`
    • `envoyfilters`
    • `gateways`
    • `serviceentries`
    • `sidecars`
    • `virtualservices`
    • `workloadentries`
| * | * | GLW +|
  • `security.istio.io`
    • `authorizationpolicies`
    • `peerauthentications`
    • `requestauthentications`
| * | * | GLW \ No newline at end of file diff --git a/content/rancher/v2.x/en/istio/release-notes/_index.md b/content/rancher/v2.x/en/istio/release-notes/_index.md new file mode 100644 index 00000000000..52962b8596c --- /dev/null +++ b/content/rancher/v2.x/en/istio/release-notes/_index.md @@ -0,0 +1,20 @@ +--- +title: Release Notes +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/release-notes +--- + + +# Istio 1.5.8 + +### Important note on 1.5.x versions + +When upgrading from any 1.4 version of Istio to any 1.5 version, the Rancher installer will delete several resources in order to complete the upgrade, at which point they will be immediately re-installed. This includes the `istio-reader-service-account`. If your Istio installation is using this service account, be aware that any secrets tied to the service account will be deleted. Most notably this will **break specific [multi-cluster deployments](https://archive.istio.io/v1.4/docs/setup/install/multicluster/)**. Downgrades back to 1.4 are not possible. + +See the official upgrade notes for additional information on the 1.5 release and upgrading from 1.4: https://istio.io/latest/news/releases/1.5.x/announcing-1.5/upgrade-notes/ + +> **Note:** Rancher continues to use the Helm installation method, which produces a different architecture from an istioctl installation. + +### Known Issues + +* The Kiali traffic graph is currently not working [#24924](https://github.com/istio/istio/issues/24924) \ No newline at end of file diff --git a/content/rancher/v2.x/en/istio/resources/_index.md b/content/rancher/v2.x/en/istio/resources/_index.md new file mode 100644 index 00000000000..9a13b076b69 --- /dev/null +++ b/content/rancher/v2.x/en/istio/resources/_index.md @@ -0,0 +1,48 @@ +--- +title: CPU and Memory Allocations +weight: 1 +aliases: + - /rancher/v2.x/en/project-admin/istio/configuring-resource-allocations/ + - /rancher/v2.x/en/project-admin/istio/config/ + - /rancher/v2.x/en/cluster-admin/tools/istio/resources +--- +_This section applies to Istio in Rancher v2.5.0. If you are using Rancher v2.4.x, refer to [this section.]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/)_ + +This section describes the minimum recommended computing resources for the Istio components in a cluster. + +The CPU and memory allocations for each component are [configurable.](#configuring-resource-allocations) + +Before enabling Istio, we recommend that you confirm that your Rancher worker nodes have enough CPU and memory to run all of the components of Istio. + +> **Tip:** In larger deployments, it is strongly advised that the infrastructure be placed on dedicated nodes in the cluster by adding a node selector for each Istio component. + +The table below shows a summary of the minimum recommended resource requests and limits for the CPU and memory of each core Istio component. + +In Kubernetes, the resource request indicates that the workload will not be deployed on a node unless the node has at least the specified amount of memory and CPU available. If the workload surpasses the limit for CPU or memory, it can be terminated or evicted from the node.
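 + +As an illustration of how such requests and limits are expressed, the following minimal sketch shows the `resources` stanza on a container spec. The Pod name, image, and values here are illustrative placeholders, not Istio's defaults: + +```yaml +apiVersion: v1 +kind: Pod +metadata: +  name: resource-demo # hypothetical Pod name +spec: +  containers: +    - name: app +      image: nginx:1.19 # placeholder image +      resources: +        requests: +          cpu: 500m # the scheduler only places the Pod on a node with 500m CPU free +          memory: 512Mi # and with at least 512Mi of memory available +        limits: +          cpu: "1" # CPU usage above the limit is throttled +          memory: 1Gi # memory usage above the limit can get the container terminated +```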
 + +For more information on managing resource limits for containers, refer to the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) + +Workload | CPU - Request | Mem - Request | CPU - Limit | Mem - Limit | Configurable +---------:|---------------:|---------------:|-------------:|-------------:|-------------: +Istiod | 610m | 2186Mi | 4000m | 2048Mi | Y +Istio-policy | 1000m | 1024Mi | 4800m | 4096Mi | Y +Istio-telemetry | 1000m | 1024Mi | 4800m | 4096Mi | Y +Istio-ingressgateway | 2000m | 1024Mi | 10m | 40Mi | Y +Others | 500m | 500Mi | - | - | Y +**Total** | **4500m** | **5620Mi** | **>12300m** | **>14848Mi** | **-** + + +# Configuring Resource Allocations + +You can individually configure the resource allocation for each type of Istio component. This section includes the default resource allocations for each component. + +To make it easier to schedule the workloads to a node, a cluster-admin can reduce the CPU and memory resource requests for the component. However, the default CPU and memory allocations are the minimum that we recommend. + +You can find more information about Istio configuration in the [official Istio documentation](https://istio.io/). + +To configure the resources allocated to an Istio component, + +1. In the Rancher **Cluster Explorer**, navigate to your Istio installation in **Apps & Marketplace** +1. Click **Upgrade** to edit the base components via changes to the values.yaml, or add an [overlay file]({{}}/rancher/v2.x/en/istio/setup/enable-istio-in-cluster/#overlay-file). +1. Change the CPU or memory allocations, the nodes where each component will be scheduled, or the node tolerations. +1. Click **Upgrade** to roll out the changes. + +**Result:** The resource allocations for the Istio components are updated. \ No newline at end of file diff --git a/content/rancher/v2.x/en/istio/setup/_index.md b/content/rancher/v2.x/en/istio/setup/_index.md new file mode 100644 index 00000000000..f7237953385 --- /dev/null +++ b/content/rancher/v2.x/en/istio/setup/_index.md @@ -0,0 +1,30 @@ +--- +title: Setup Guide +weight: 2 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup +--- + +This section describes how to enable Istio and start using it in your projects. + +If you use Istio for traffic management, you will need to allow external traffic to the cluster. In that case, you will need to follow all of the steps below. + +# Prerequisites + +This guide assumes you have already [installed Rancher,]({{}}/rancher/v2.x/en/installation) and you have already [provisioned a separate Kubernetes cluster]({{}}/rancher/v2.x/en/cluster-provisioning) on which you will install Istio. + +The nodes in your cluster must meet the [CPU and memory requirements.]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/resources/) + +The workloads and services that you want to be controlled by Istio must meet [Istio's requirements.](https://istio.io/docs/setup/additional-setup/requirements/) + + +# Install + +> **Quick Setup** If you don't need external traffic to reach Istio, and you just want to set up Istio for monitoring and tracing traffic within the cluster, skip the steps for [setting up the Istio gateway]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/gateway) and [setting up Istio's components for traffic management.]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/set-up-traffic-management) + +1. [Enable Istio in the cluster.]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster) +1.
[Enable Istio in all the namespaces where you want to use it.]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-namespace) +1. [Add deployments and services that have the Istio sidecar injected.]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/deploy-workloads) +1. [Set up the Istio gateway.]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/gateway) +1. [Set up Istio's components for traffic management.]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/set-up-traffic-management) +1. [Generate traffic and see Istio in action.](#generate-traffic-and-see-istio-in-action) diff --git a/content/rancher/v2.x/en/istio/setup/deploy-workloads/_index.md b/content/rancher/v2.x/en/istio/setup/deploy-workloads/_index.md new file mode 100644 index 00000000000..017ee200eed --- /dev/null +++ b/content/rancher/v2.x/en/istio/setup/deploy-workloads/_index.md @@ -0,0 +1,349 @@ +--- +title: 4. Add Deployments and Services with the Istio Sidecar +weight: 4 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup/deploy-workloads +--- + +> **Prerequisite:** To enable Istio for a workload, the cluster and namespace must have the Istio app installed. + +Enabling Istio in a namespace only enables automatic sidecar injection for new workloads. To enable the Envoy sidecar for existing workloads, you need to enable it manually for each workload. + +To inject the Istio sidecar on an existing workload in the namespace, from the **Cluster Explorer** go to the workload, click the **⋮,** and click **Redeploy.** When the workload is redeployed, it will have the Envoy sidecar automatically injected. + +Wait a few minutes for the workload to upgrade to have the Istio sidecar. Click it and go to the Containers section. You should be able to see `istio-proxy` alongside your original workload. This means the Istio sidecar is enabled for the workload, and Istio handles all of the wiring for the Envoy sidecar. Istio features can now be applied automatically if you enable them in the workload's YAML. + +### 3. Add Deployments and Services + +There are a few ways to add new **Deployments** in your namespace: + +1. From the **Cluster Explorer** click on **Workload > Overview** +1. Click **Create** +1. Select **Deployment** from the various workload options +1. Fill out the form, or **Edit as Yaml** +1. Click **Create** + +Alternatively, you can select the specific type of workload you want to deploy from **Workload > [specific workload type]** and create it from there. + +To add a **Service** to your namespace: + +1. From the **Cluster Explorer** click on **Service Discovery > Services** +1. Click **Create** +1. Select the type of service you want to create from the various options +1. Fill out the form, or **Edit as Yaml** +1. Click **Create** + +You can also create deployments and services using the kubectl **shell**: + +1. Run `kubectl create -f <filename>.yaml` if your file is stored locally in the cluster +1. Or run `cat<< EOF | kubectl apply -f -`, paste the file contents into the terminal, then type `EOF` to complete the command. + +### 4. Example Deployments and Services + +Next, we add the Kubernetes resources for the sample deployments and services for the BookInfo app in Istio's documentation. + +1. From the **Cluster Explorer**, open the kubectl **shell** +1. Run `cat<< EOF | kubectl apply -f -` +1. Copy the resources below into the shell +1.
Run `EOF` + +This will set up the following sample resources from Istio's example BookInfo app: + +Details service and deployment: + +- A `details` Service +- A ServiceAccount for `bookinfo-details` +- A `details-v1` Deployment + +Ratings service and deployment: + +- A `ratings` Service +- A ServiceAccount for `bookinfo-ratings` +- A `ratings-v1` Deployment + +Reviews service and deployments (three versions): + +- A `reviews` Service +- A ServiceAccount for `bookinfo-reviews` +- A `reviews-v1` Deployment +- A `reviews-v2` Deployment +- A `reviews-v3` Deployment + +Productpage service and deployment: + +This is the main page of the app, which will be visible from a web browser. The other services will be called from this page. + +- A `productpage` service +- A ServiceAccount for `bookinfo-productpage` +- A `productpage-v1` Deployment + +### Resource YAML + +```yaml +# Copyright 2017 Istio Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +################################################################################################## +# Details service +################################################################################################## +apiVersion: v1 +kind: Service +metadata: + name: details + labels: + app: details + service: details +spec: + ports: + - port: 9080 + name: http + selector: + app: details +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: bookinfo-details +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: details-v1 + labels: + app: details + version: v1 +spec: + replicas: 1 + selector: + matchLabels: + app: details + version: v1 + template: + metadata: + labels: + app: details + version: v1 + spec: + serviceAccountName: bookinfo-details + containers: + - name: details + image: docker.io/istio/examples-bookinfo-details-v1:1.15.0 + imagePullPolicy: IfNotPresent + ports: + - containerPort: 9080 +--- +################################################################################################## +# Ratings service +################################################################################################## +apiVersion: v1 +kind: Service +metadata: + name: ratings + labels: + app: ratings + service: ratings +spec: + ports: + - port: 9080 + name: http + selector: + app: ratings +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: bookinfo-ratings +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: ratings-v1 + labels: + app: ratings + version: v1 +spec: + replicas: 1 + selector: + matchLabels: + app: ratings + version: v1 + template: + metadata: + labels: + app: ratings + version: v1 + spec: + serviceAccountName: bookinfo-ratings + containers: + - name: ratings + image: docker.io/istio/examples-bookinfo-ratings-v1:1.15.0 + imagePullPolicy: IfNotPresent + ports: + - containerPort: 9080 +--- +################################################################################################## +# Reviews service +################################################################################################## 
+apiVersion: v1 +kind: Service +metadata: + name: reviews + labels: + app: reviews + service: reviews +spec: + ports: + - port: 9080 + name: http + selector: + app: reviews +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: bookinfo-reviews +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: reviews-v1 + labels: + app: reviews + version: v1 +spec: + replicas: 1 + selector: + matchLabels: + app: reviews + version: v1 + template: + metadata: + labels: + app: reviews + version: v1 + spec: + serviceAccountName: bookinfo-reviews + containers: + - name: reviews + image: docker.io/istio/examples-bookinfo-reviews-v1:1.15.0 + imagePullPolicy: IfNotPresent + ports: + - containerPort: 9080 +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: reviews-v2 + labels: + app: reviews + version: v2 +spec: + replicas: 1 + selector: + matchLabels: + app: reviews + version: v2 + template: + metadata: + labels: + app: reviews + version: v2 + spec: + serviceAccountName: bookinfo-reviews + containers: + - name: reviews + image: docker.io/istio/examples-bookinfo-reviews-v2:1.15.0 + imagePullPolicy: IfNotPresent + ports: + - containerPort: 9080 +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: reviews-v3 + labels: + app: reviews + version: v3 +spec: + replicas: 1 + selector: + matchLabels: + app: reviews + version: v3 + template: + metadata: + labels: + app: reviews + version: v3 + spec: + serviceAccountName: bookinfo-reviews + containers: + - name: reviews + image: docker.io/istio/examples-bookinfo-reviews-v3:1.15.0 + imagePullPolicy: IfNotPresent + ports: + - containerPort: 9080 +--- +################################################################################################## +# Productpage services +################################################################################################## +apiVersion: v1 +kind: Service +metadata: + name: productpage + labels: + app: productpage + service: productpage +spec: + ports: + - port: 9080 + name: http + selector: + app: productpage +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: bookinfo-productpage +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: productpage-v1 + labels: + app: productpage + version: v1 +spec: + replicas: 1 + selector: + matchLabels: + app: productpage + version: v1 + template: + metadata: + labels: + app: productpage + version: v1 + spec: + serviceAccountName: bookinfo-productpage + containers: + - name: productpage + image: docker.io/istio/examples-bookinfo-productpage-v1:1.15.0 + imagePullPolicy: IfNotPresent + ports: + - containerPort: 9080 +--- +``` + +### [Next: Set up the Istio Gateway]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/gateway) diff --git a/content/rancher/v2.x/en/istio/setup/enable-istio-in-cluster/_index.md b/content/rancher/v2.x/en/istio/setup/enable-istio-in-cluster/_index.md new file mode 100644 index 00000000000..06fc840f7d0 --- /dev/null +++ b/content/rancher/v2.x/en/istio/setup/enable-istio-in-cluster/_index.md @@ -0,0 +1,150 @@ +--- +title: 1. Enable Istio in the Cluster +weight: 1 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster +--- + +Only a user with the following [Kubernetes default roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) assigned can configure and install Istio in a Kubernetes cluster. + + - `cluster-admin` + + +1. From the **Cluster Explorer**, navigate to available **Charts** in **Apps & Marketplace** +1. 
Select the Istio chart from the Rancher-provided charts +1. If you have not already installed your own monitoring app, you will be prompted to install the rancher-monitoring app. Optional: Set your Selector or Scrape config options during the rancher-monitoring app install. +1. Optional: Configure member access and [resource limits]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/resources/) for the Istio components. Ensure you have enough resources on your worker nodes to enable Istio. +1. Optional: Make additional configuration changes to values.yaml if needed +1. Optional: Add additional resources or configuration via the [overlay file](#overlay-file) +1. Click **Install**. + +**Result:** Istio is installed at the cluster level. + +Automatic sidecar injection is disabled by default. To enable it, set `sidecarInjectorWebhook.enableNamespacesByDefault=true` in the values.yaml on install or upgrade. This automatically enables Istio sidecar injection into all new namespaces that are deployed. + +## Additional Config Options + +### Overlay File + +An Overlay File is designed to support extensive configuration of your Istio installation. It allows you to make changes to any values available in the [IstioOperator API](https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/). This will ensure you can customize the default installation to fit any scenario. + +The Overlay File will add configuration on top of the default installation that is provided from the Istio chart installation. This means you do not need to redefine the components that are already defined for installation. + +For more information on Overlay Files, refer to the [documentation.](https://istio.io/latest/docs/setup/install/istioctl/#configure-component-settings) + +## Selectors & Scrape Configs + +The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false`, which enables monitoring across all namespaces by default. This ensures you can view traffic, metrics, and graphs for resources deployed in a namespace with the `istio-injection=enabled` label. + +If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you will need to add additional configuration to continue to monitor your resources. + +**Set ignoreNamespaceSelectors to True** + +This limits monitoring to specific namespaces. + + +1. From the **Cluster Explorer**, navigate to **Installed Apps** if Monitoring is already installed, or **Charts** in **Apps & Marketplace** +1. If starting a new install, click the **rancher-monitoring** chart, then in **Chart Options** click **Edit as Yaml**. +1. If updating an existing installation, click on **Upgrade**, then in **Chart Options** click **Edit as Yaml**. +1. Set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true` +1. Complete the install or upgrade + +**Result:** Prometheus will be limited to specific namespaces, which means one of the following configurations will need to be set up to continue to view data in the various dashboards. + +There are two different ways to enable Prometheus to detect resources in other namespaces when `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`: + +1. Add a Service Monitor or Pod Monitor in the namespace with the targets you want to scrape. +1. Add an `additionalScrapeConfig` to your rancher-monitoring instance to scrape all targets in all namespaces.
 + +**Option 1: Create a Service Monitor or Pod Monitor** + +This option allows you to define which specific services or pods you would like monitored in a specific namespace. + + >The usability tradeoff is that you have to create the ServiceMonitor or PodMonitor per namespace, since you cannot monitor across namespaces. + + **Prerequisite:** Define a ServiceMonitor or PodMonitor for the target namespace (shown below as `<namespace>`). An example ServiceMonitor is provided below. + +1. From the **Cluster Explorer**, open the kubectl shell +1. Run `kubectl create -f <filename>.yaml` if the file is stored locally in your cluster. +1. Or run `cat<< EOF | kubectl apply -f -`, paste the file contents into the terminal, then type `EOF` to complete the command. +1. If starting a new install, click the **rancher-monitoring** chart and scroll down to **Preview Yaml**. +1. Run `kubectl label namespace <namespace> istio-injection=enabled` to enable Envoy sidecar injection + +**Result:** The targets defined in `<namespace>` can be scraped by Prometheus. + +**Example Service Monitor for Istio Proxies** + +```yaml +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: envoy-stats-monitor + namespace: istio-system + labels: + monitoring: istio-proxies +spec: + selector: + matchExpressions: + - {key: istio-prometheus-ignore, operator: DoesNotExist} + namespaceSelector: + any: true + jobLabel: envoy-stats + endpoints: + - path: /stats/prometheus + targetPort: 15090 + interval: 15s + relabelings: + - sourceLabels: [__meta_kubernetes_pod_container_port_name] + action: keep + regex: '.*-envoy-prom' + - action: labeldrop + regex: "__meta_kubernetes_pod_label_(.+)" + - sourceLabels: [__meta_kubernetes_namespace] + action: replace + targetLabel: namespace + - sourceLabels: [__meta_kubernetes_pod_name] + action: replace + targetLabel: pod_name +``` + + + +**Option 2: Add an Additional Scrape Config** + +This enables monitoring across namespaces by giving Prometheus additional scrape configurations. + + >The usability tradeoff is that all of Prometheus' `additionalScrapeConfigs` are maintained in a single Secret. This could make upgrading difficult if monitoring is already deployed with additionalScrapeConfigs prior to installing Istio. + +1. If starting a new install, click the **rancher-monitoring** chart, then in **Chart Options** click **Edit as Yaml**. +1. If updating an existing installation, click on **Upgrade**, then in **Chart Options** click **Edit as Yaml**. +1. Set the `prometheus.prometheusSpec.additionalScrapeConfigs` array to the **Additional Scrape Config** provided below. +1. Complete the install or upgrade + +**Result:** All namespaces with the `istio-injection=enabled` label will be scraped by Prometheus.
 + +**Additional Scrape Config:** +``` yaml +- job_name: 'istio/envoy-stats' + scrape_interval: 15s + metrics_path: /stats/prometheus + kubernetes_sd_configs: + - role: pod + relabel_configs: + - source_labels: [__meta_kubernetes_pod_container_port_name] + action: keep + regex: '.*-envoy-prom' + - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] + action: replace + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:15090 + target_label: __address__ + - action: labelmap + regex: __meta_kubernetes_pod_label_(.+) + - source_labels: [__meta_kubernetes_namespace] + action: replace + target_label: namespace + - source_labels: [__meta_kubernetes_pod_name] + action: replace + target_label: pod_name +``` \ No newline at end of file diff --git a/content/rancher/v2.x/en/istio/setup/enable-istio-in-namespace/_index.md b/content/rancher/v2.x/en/istio/setup/enable-istio-in-namespace/_index.md new file mode 100644 index 00000000000..4e74c56d523 --- /dev/null +++ b/content/rancher/v2.x/en/istio/setup/enable-istio-in-namespace/_index.md @@ -0,0 +1,43 @@ +--- +title: 2. Enable Istio in a Namespace +weight: 2 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-namespace +--- + +You will need to manually enable Istio in each namespace that you want to be tracked or controlled by Istio. When Istio is enabled in a namespace, the Envoy sidecar proxy will be automatically injected into all new workloads that are deployed in the namespace. + +This namespace setting will only affect new workloads in the namespace. Any preexisting workloads will need to be re-deployed to leverage the sidecar auto-injection. + +> **Prerequisite:** To enable Istio in a namespace, the cluster must have Istio installed. + +1. In the Rancher **Cluster Explorer,** open the kubectl shell. +1. Then run `kubectl label namespace <namespace> istio-injection=enabled` + +**Result:** The namespace now has the label `istio-injection=enabled`. All new workloads deployed in this namespace will have the Istio sidecar injected by default. + +### Verifying that Automatic Istio Sidecar Injection is Enabled + +To verify that Istio is enabled, deploy a hello-world workload in the namespace. Go to the workload and click the pod name. In the **Containers** section, you should see the `istio-proxy` container. + +### Excluding Workloads from Being Injected with the Istio Sidecar + +If you need to exclude a workload from getting injected with the Istio sidecar, use the following annotation on the workload: + +``` +sidecar.istio.io/inject: "false" +``` + +To add the annotation to a workload, + +1. From the **Cluster Explorer** view, use the side nav to select the **Overview** page for workloads. +1. Go to the workload that should not have the sidecar and edit it as YAML +1. Add the key-value pair `sidecar.istio.io/inject: "false"` as an annotation on the workload +1. Click **Save.** + +**Result:** The Istio sidecar will not be injected into the workload. + +> **NOTE:** If you are having issues with a Job you deployed not completing, you will need to add this annotation to your pod using the provided steps. Since Istio sidecars run indefinitely, a Job cannot be considered complete even after its task has completed.
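 + +As a sketch of where that annotation lives, the hypothetical Job below disables injection through its pod template annotations; the Job name and image are placeholders, not values from this guide: + +```yaml +apiVersion: batch/v1 +kind: Job +metadata: +  name: db-migrate # hypothetical Job name +spec: +  template: +    metadata: +      annotations: +        sidecar.istio.io/inject: "false" # no sidecar is injected, so the Job can reach the Complete state +    spec: +      restartPolicy: Never +      containers: +        - name: migrate +          image: example/db-migrate:1.0 # placeholder image +```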
 + + +### [Next: Select the Nodes]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/node-selectors) \ No newline at end of file diff --git a/content/rancher/v2.x/en/istio/setup/gateway/_index.md b/content/rancher/v2.x/en/istio/setup/gateway/_index.md new file mode 100644 index 00000000000..68743ee852e --- /dev/null +++ b/content/rancher/v2.x/en/istio/setup/gateway/_index.md @@ -0,0 +1,142 @@ +--- +title: 5. Set up the Istio Gateway +weight: 5 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup/gateway +--- + +The gateway to each cluster can have its own port or load balancer, which is unrelated to a service mesh. By default, each Rancher-provisioned cluster has one NGINX ingress controller allowing traffic into the cluster. + +You can use the Nginx Ingress controller with or without Istio installed. If this is the only gateway to your cluster, Istio will be able to route traffic from service to service, but Istio will not be able to receive traffic from outside the cluster. + +To allow Istio to receive external traffic, you need to enable Istio's gateway, which works as a north-south proxy for external traffic. When you enable the Istio gateway, the result is that your cluster will have two Ingresses. + +You will also need to set up a Kubernetes gateway for your services. This Kubernetes resource points to Istio's implementation of the ingress gateway to the cluster. + +You can route traffic into the service mesh with a load balancer or use Istio's NodePort gateway. This section describes how to set up the NodePort gateway. + +For more information on the Istio gateway, refer to the [Istio documentation.](https://istio.io/docs/reference/config/networking/v1alpha3/gateway/) + +![In an Istio-enabled cluster, you can have two Ingresses: the default Nginx Ingress, and the default Istio controller.]({{}}/img/rancher/istio-ingress.svg) + +# Enable an Istio Gateway + +The ingress gateway is a Kubernetes service that will be deployed in your cluster. The Istio Gateway allows for more extensive customization and flexibility. + +1. From the **Cluster Explorer**, select **Istio** from the nav dropdown. +1. Click **Gateways** in the side nav bar. +1. Click **Create from Yaml**. +1. Paste your Istio Gateway yaml, or **Read from File**. +1. Click **Create**. + +**Result:** The gateway is deployed, and will now route traffic according to the applied rules. + +# Example Istio Gateway + +We added the BookInfo app deployments and services when going through the Workloads example. Next, we add an Istio Gateway so that the app is accessible from outside your cluster. + +1. From the **Cluster Explorer**, select **Istio** from the nav dropdown. +1. Click **Gateways** in the side nav bar. +1. Click **Create from Yaml**. +1. Copy and paste the Gateway yaml provided below. +1. Click **Create**. + +```yaml +apiVersion: networking.istio.io/v1alpha3 +kind: Gateway +metadata: + name: bookinfo-gateway +spec: + selector: + istio: ingressgateway # use istio default controller + servers: + - port: + number: 80 + name: http + protocol: HTTP + hosts: + - "*" +--- +``` + +Then, to deploy the VirtualService that provides the traffic routing for the Gateway: + +1. Click **VirtualService** in the side nav bar. +1. Click **Create from Yaml**. +1. Copy and paste the VirtualService yaml provided below. +1. Click **Create**.
 + +```yaml +apiVersion: networking.istio.io/v1alpha3 +kind: VirtualService +metadata: + name: bookinfo +spec: + hosts: + - "*" + gateways: + - bookinfo-gateway + http: + - match: + - uri: + exact: /productpage + - uri: + prefix: /static + - uri: + exact: /login + - uri: + exact: /logout + - uri: + prefix: /api/v1/products + route: + - destination: + host: productpage + port: + number: 9080 +``` + +**Result:** You have configured your gateway resource so that Istio can receive traffic from outside the cluster. + +Confirm that the resource exists by running: +``` +kubectl get gateway -A +``` + +The result should be something like this: +``` +NAME AGE +bookinfo-gateway 64m +``` + +### Access the ProductPage Service from a Web Browser + +To test and see if the BookInfo app deployed correctly, the app can be viewed in a web browser using the Istio controller IP and port, combined with the request name specified in your Kubernetes gateway resource: + +`http://<ingress-gateway-IP>:<port>/productpage` + +To get the ingress gateway URL and port, + +1. From the **Cluster Explorer**, click on **Workloads > Overview**. +1. Scroll down to the `istio-system` namespace. +1. Within `istio-system`, there is a workload named `istio-ingressgateway`. Under the name of this workload, you should see links, such as `80/tcp`. +1. Click one of those links. This should show you the URL of the ingress gateway in your web browser. Append `/productpage` to the URL. + +**Result:** You should see the BookInfo app in the web browser. + +For help inspecting the Istio controller URL and ports, try the commands in the [Istio documentation.](https://istio.io/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports) + +# Troubleshooting + +The [official Istio documentation](https://istio.io/docs/tasks/traffic-management/ingress/ingress-control/#troubleshooting) suggests `kubectl` commands to inspect the correct ingress host and ingress port for external requests. + +### Confirming that the Kubernetes Gateway Matches Istio's Ingress Controller + +You can try the steps in this section to make sure the Kubernetes gateway is configured properly. + +In the gateway resource, the selector refers to Istio's default ingress controller by its label, in which the key of the label is `istio` and the value is `ingressgateway`. To make sure the label is appropriate for the gateway, do the following: + +1. From the **Cluster Explorer**, click on **Workloads > Overview**. +1. Scroll down to the `istio-system` namespace. +1. Within `istio-system`, there is a workload named `istio-ingressgateway`. Click the name of this workload and go to the **Labels and Annotations** section. You should see that it has the key `istio` and the value `ingressgateway`. This confirms that the selector in the Gateway resource matches Istio's default ingress controller. + +### [Next: Set up Istio's Components for Traffic Management]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/set-up-traffic-management) diff --git a/content/rancher/v2.x/en/istio/setup/set-up-traffic-management/_index.md b/content/rancher/v2.x/en/istio/setup/set-up-traffic-management/_index.md new file mode 100644 index 00000000000..aa3ff5c1e8c --- /dev/null +++ b/content/rancher/v2.x/en/istio/setup/set-up-traffic-management/_index.md @@ -0,0 +1,76 @@ +--- +title: 6.
Set up Istio's Components for Traffic Management +weight: 6 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup/set-up-traffic-management +--- + +A central advantage of traffic management in Istio is that it allows dynamic request routing. Some common applications for dynamic request routing include canary deployments and blue/green deployments. The two key resources in Istio traffic management are *virtual services* and *destination rules*. + +- [Virtual services](https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/) intercept and direct traffic to your Kubernetes services, allowing you to divide percentages of traffic from a request to different services. You can use them to define a set of routing rules to apply when a host is addressed. +- [Destination rules](https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule/) serve as the single source of truth about which service versions are available to receive traffic from virtual services. You can use these resources to define policies that apply to traffic that is intended for a service after routing has occurred. + +This section describes how to add an example virtual service that corresponds to the `reviews` microservice in the sample BookInfo app. The purpose of this service is to divide traffic between two versions of the `reviews` service. + +In this example, we take the traffic to the `reviews` service and intercept it so that 50 percent of it goes to `v1` of the service and 50 percent goes to `v3`. + +After this virtual service is deployed, we will generate traffic and see from the Kiali visualization that traffic is being routed evenly between the two versions of the service. + +To deploy the virtual service and destination rules for the `reviews` service, + +1. From the **Cluster Explorer**, select **Istio** from the nav dropdown. +1. Click **DestinationRule** in the side nav bar. +1. Click **Create from Yaml**. +1. Copy and paste the DestinationRule yaml provided below. +1. Click **Create**. + +```yaml +apiVersion: networking.istio.io/v1alpha3 +kind: DestinationRule +metadata: + name: reviews +spec: + host: reviews + subsets: + - name: v1 + labels: + version: v1 + - name: v2 + labels: + version: v2 + - name: v3 + labels: + version: v3 +``` + +Then, to deploy the VirtualService that provides the traffic routing that utilizes the DestinationRule: + +1. Click **VirtualService** in the side nav bar. +1. Click **Create from Yaml**. +1. Copy and paste the VirtualService yaml provided below. +1. Click **Create**. + +```yaml +apiVersion: networking.istio.io/v1alpha3 +kind: VirtualService +metadata: + name: reviews +spec: + hosts: + - reviews + http: + - route: + - destination: + host: reviews + subset: v1 + weight: 50 + - destination: + host: reviews + subset: v3 + weight: 50 +--- +``` + +**Result:** When you generate traffic to this service (for example, by refreshing the ingress gateway URL), the Kiali traffic graph will reflect that traffic to the `reviews` service is divided evenly between `v1` and `v3`. + +### [Next: Generate and View Traffic]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/view-traffic) diff --git a/content/rancher/v2.x/en/istio/setup/view-traffic/_index.md b/content/rancher/v2.x/en/istio/setup/view-traffic/_index.md new file mode 100644 index 00000000000..e0aad92e277 --- /dev/null +++ b/content/rancher/v2.x/en/istio/setup/view-traffic/_index.md @@ -0,0 +1,25 @@ +--- +title: 7.
Generate and View Traffic +weight: 7 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/istio/setup/view-traffic +--- + +This section describes how to view the traffic that is being managed by Istio. + +# The Kiali Traffic Graph + +The Istio overview page provides a link to the Kiali dashboard. From the Kiali dashboard, you are able to view graphs for each namespace. The Kiali graph provides a powerful way to visualize the topology of your Istio service mesh. It shows you which services communicate with each other. + +>**Prerequisite:** To enable traffic to show up in the graph, ensure you have enabled one of the [Selectors & Scrape Configs]({{}}/rancher/v2.x/en/istio/setup/enable-istio-in-cluster/#selectors-scrape-configs) options. If you do not have this configured, you will not see information on the graph. + +To see the traffic graph, + +1. From the **Cluster Explorer**, select **Istio** from the nav dropdown. +1. Click the **Kiali** link on the Istio **Overview** page. +1. Click on **Graph** in the side nav. +1. Change the namespace in the **Namespace** dropdown to view the traffic for each namespace. + +If you refresh the URL to the BookInfo app several times, you should be able to see green arrows on the Kiali graph showing traffic to `v1` and `v3` of the `reviews` service. The control panel on the right side of the graph lets you configure details, including how many minutes of the most recent traffic should be shown on the graph. + +For additional tools and visualizations, you can go to the Grafana and Prometheus dashboards from the **Monitoring** **Overview** page. diff --git a/content/rancher/v2.x/en/k8s-resources/_index.md b/content/rancher/v2.x/en/k8s-resources/_index.md new file mode 100644 index 00000000000..2c460974c1e --- /dev/null +++ b/content/rancher/v2.x/en/k8s-resources/_index.md @@ -0,0 +1,11 @@ +--- +title: Kubernetes Resources +weight: 10 +--- + + +### About the Cluster Explorer + +_Available as of v2.5_ + +The cluster explorer is a new feature in Rancher v2.5 that allows you to view and manipulate all of the custom resources and CRDs in a Kubernetes cluster from the Rancher UI.
\ No newline at end of file diff --git a/content/rancher/v2.x/en/k8s-in-rancher/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/_index.md similarity index 98% rename from content/rancher/v2.x/en/k8s-in-rancher/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/_index.md index 71830fc1f00..5e50a555387 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/_index.md +++ b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/_index.md @@ -1,6 +1,6 @@ --- -title: Kubernetes Resources, Registries and Pipelines -weight: 3000 +title: Legacy +weight: 19 aliases: - /rancher/v2.x/en/concepts/ - /rancher/v2.x/en/tasks/ diff --git a/content/rancher/v2.x/en/k8s-in-rancher/certificates/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/certificates/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/certificates/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/certificates/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/configmaps/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/configmaps/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/configmaps/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/configmaps/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-background/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-background/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-background/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-background/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md rename to 
content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/load-balancers-and-ingress/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/load-balancers-and-ingress/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/load-balancers-and-ingress/load-balancers/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/load-balancers-and-ingress/load-balancers/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/registries/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/registries/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/registries/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/registries/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/secrets/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/secrets/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/secrets/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/secrets/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/service-discovery/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/service-discovery/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/service-discovery/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/service-discovery/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/workloads/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/workloads/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/workloads/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/workloads/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/workloads/add-a-sidecar/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/workloads/add-a-sidecar/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/workloads/add-a-sidecar/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/workloads/add-a-sidecar/_index.md diff --git 
a/content/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/workloads/deploy-workloads/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/workloads/deploy-workloads/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/workloads/rollback-workloads/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/workloads/rollback-workloads/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/workloads/rollback-workloads/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/workloads/rollback-workloads/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md b/content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/workloads/upgrade-workloads/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md rename to content/rancher/v2.x/en/k8s-resources/k8s-in-rancher/workloads/upgrade-workloads/_index.md diff --git a/content/rancher/v2.x/en/logging/_index.md b/content/rancher/v2.x/en/logging/_index.md new file mode 100644 index 00000000000..6c096f25e05 --- /dev/null +++ b/content/rancher/v2.x/en/logging/_index.md @@ -0,0 +1,243 @@ +--- +title: Rancher Integration with Logging Services +shortTitle: Logging +description: Rancher integrates with popular logging services. Learn the requirements and benefits of integrating with logging services, and enable logging on your cluster. +metaDescription: "Rancher integrates with popular logging services. Learn the requirements and benefits of integrating with logging services, and enable logging on your cluster." +weight: 16 +--- + +- [Changes in Rancher v2.5](#changes-in-rancher-v2-5) +- [Configuring the Logging Output for the Rancher Kubernetes Cluster](#configuring-the-logging-output-for-the-rancher-kubernetes-cluster) +- [Enabling Logging for Rancher Managed Clusters](#enabling-logging-for-rancher-managed-clusters) +- [Configuring the Logging Application](#configuring-the-logging-application) + + +### Changes in Rancher v2.5 + +The following changes were introduced to logging in Rancher v2.5: + +- Rancher's logging feature is now powered by the [Banzai Cloud Logging operator](https://banzaicloud.com/docs/one-eye/logging-operator/) instead of Rancher's in-house logging solution. +- [Fluent Bit](https://fluentbit.io/) is now used to aggregate the logs. [Fluentd](https://www.fluentd.org/) is used for filtering the messages and routing them to the outputs. Previously, only Fluentd was used. +- Logging can be configured with a Kubernetes manifest, because logging now uses a Kubernetes operator with Custom Resource Definitions. +- We now support filtering logs. +- We now support writing logs to multiple outputs. +- We now always collect Control Plane and etcd logs. + + +The following figure from the [Banzai documentation](https://banzaicloud.com/docs/one-eye/logging-operator/#architecture) shows the new logging architecture: + 
How the Banzai Cloud Logging Operator Works with Fluentd and Fluent Bit
+ +![How the Banzai Cloud Logging Operator Works with Fluentd]({{}}/img/rancher/banzai-cloud-logging-operator.png) + +### Configuring the Logging Output for the Rancher Kubernetes Cluster + +If you install Rancher as a Helm chart, you'll configure the Helm chart options to select a logging output for all the logs in the local Kubernetes cluster. + +If you install Rancher using the Rancher CLI on a Linux OS, the Rancher Helm chart will be installed on a Kubernetes cluster with default options. When the Rancher UI is available, you'll enable the logging app from the Apps section of the UI. During the process of installing the logging application, you will configure the logging output. + +### Enabling Logging for Rancher Managed Clusters + +If you have Enterprise Cluster Manager enabled, you can enable logging for a Rancher-managed cluster by going to the Apps page and installing the logging app. + +### Configuring the Logging Application + +The following Custom Resource Definitions are used to configure logging: + +- [Flow and ClusterFlow](https://banzaicloud.com/docs/one-eye/logging-operator/crds/#flows-clusterflows) +- [Output and ClusterOutput](https://banzaicloud.com/docs/one-eye/logging-operator/crds/#outputs-clusteroutputs) + +According to the [Banzai Cloud documentation,](https://banzaicloud.com/docs/one-eye/logging-operator/#architecture) + +> You can define `outputs` (destinations where you want to send your log messages, for example, Elasticsearch, or an Amazon S3 bucket), and `flows` that use filters and selectors to route log messages to the appropriate outputs. You can also define cluster-wide outputs and flows, for example, to use a centralized output that namespaced users cannot modify. + +### RBAC +Rancher logging has two roles, `logging-admin` and `logging-view`. The `logging-admin` role allows users full access to namespaced flows and outputs. The `logging-view` role allows users to view namespaced flows and outputs, and cluster flows and outputs. Edit access to the cluster flow and cluster output resources is powerful, as it gives any user with edit access control over all logs in the cluster. Cluster admin is the only role with full access to all rancher-logging resources. Cluster members are not able to edit or read any logging resources. + +Project owners are able to create namespaced flows and outputs in the namespaces under their projects. This means that project owners can collect logs from anything in their project namespaces. Project members are able to view the flows and outputs in the namespaces under their projects. Project owners and project members require at least one namespace in their project to use logging. If they do not have at least one namespace in their project, they may not see the logging button in the top nav dropdown. + + +### Examples + +Let's say you wanted to send all logs in your cluster to an Elasticsearch cluster. + +First, let's create our cluster output: +```yaml +apiVersion: logging.banzaicloud.io/v1beta1 +kind: ClusterOutput +metadata: + name: "example-es" + namespace: "cattle-logging-system" +spec: + elasticsearch: + host: elasticsearch.example.com + port: 9200 + scheme: http +``` + +We have created a cluster output, with a minimal Elasticsearch configuration, in the same namespace as our operator, `cattle-logging-system`. Any time we create a cluster flow or cluster output, we have to put it in the `cattle-logging-system` namespace. + +Now that we have configured where we want the logs to go, let's configure all logs to go to that output.
+ +```yaml +apiVersion: logging.banzaicloud.io/v1beta1 +kind: ClusterFlow +metadata: + name: "all-logs" + namespace: "cattle-logging-system" +spec: + globalOutputRefs: + - "example-es" +``` + +We should now see our configured index with logs in it. + +What if we have an application team who only wants logs from specific namespaces sent to a Splunk server? For this case, we can use namespaced outputs and flows. + +Before we start, let's set up a scenario. + +```yaml +apiVersion: v1 +kind: Namespace +metadata: + name: devteam +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: coolapp + namespace: devteam + labels: + app: coolapp +spec: + replicas: 2 + selector: + matchLabels: + app: coolapp + template: + metadata: + labels: + app: coolapp + spec: + containers: + - name: generator + image: paynejacob/loggenerator:latest +``` + +Like before, we start with an output. Unlike cluster outputs, we create our output in our application's namespace: + +```yaml +apiVersion: logging.banzaicloud.io/v1beta1 +kind: Output +metadata: + name: "devteam-splunk" + namespace: "devteam" +spec: + splunkHec: + host: splunk.example.com + port: 8088 + protocol: http +``` + +Once again, let's give our output some logs: + +```yaml +apiVersion: logging.banzaicloud.io/v1beta1 +kind: Flow +metadata: + name: "devteam-logs" + namespace: "devteam" +spec: + localOutputRefs: + - "devteam-splunk" +``` + +For the final example, we create an output to write logs to a destination that is not supported out of the box (e.g. syslog): + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: syslog-config + namespace: cattle-logging-system +type: Opaque +stringData: + fluent-bit.conf: | + [INPUT] + Name forward + Port 24224 + + [OUTPUT] + Name syslog + InstanceName syslog-output + Match * + Addr syslog.example.com + Port 514 + Cluster ranchers + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: fluentbit-syslog-forwarder + namespace: cattle-logging-system + labels: + output: syslog +spec: + selector: + matchLabels: + output: syslog + template: + metadata: + labels: + output: syslog + spec: + containers: + - name: fluentbit + image: paynejacob/fluent-bit-out-syslog:latest + ports: + - containerPort: 24224 + volumeMounts: + - mountPath: "/fluent-bit/etc/" + name: configuration + volumes: + - name: configuration + secret: + secretName: syslog-config +--- +apiVersion: v1 +kind: Service +metadata: + name: syslog-forwarder + namespace: cattle-logging-system +spec: + selector: + output: syslog + ports: + - protocol: TCP + port: 24224 + targetPort: 24224 +--- +apiVersion: logging.banzaicloud.io/v1beta1 +kind: ClusterFlow +metadata: + name: all-logs + namespace: cattle-logging-system +spec: + globalOutputRefs: + - syslog +--- +apiVersion: logging.banzaicloud.io/v1beta1 +kind: ClusterOutput +metadata: + name: syslog + namespace: cattle-logging-system +spec: + forward: + servers: + - host: "syslog-forwarder.cattle-logging-system" + require_ack_response: false + ignore_network_errors_at_startup: false +``` + +If we break down what is happening: first, we create a deployment of a container that has the additional syslog plugin and accepts logs forwarded from another fluentd. Next, we create an output configured as a forwarder to our deployment. The Fluent Bit deployment will then forward all logs to the configured syslog destination.
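
After applying any of the examples above, it can help to confirm that the operator accepted the resources. A quick check, assuming the logging app is installed in the `cattle-logging-system` namespace as shown above (the Fluentd label selector is an assumption; adjust it to your install):

```bash
# Confirm the flows and outputs were created; the operator reports
# whether each one is active in the status columns.
kubectl get clusterflows,clusteroutputs -n cattle-logging-system
kubectl get flows,outputs -n devteam

# Tail the Fluentd logs to spot configuration or delivery errors.
kubectl logs -n cattle-logging-system -l app.kubernetes.io/name=fluentd --tail=50
```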
+ + diff --git a/content/rancher/v2.x/en/logging/legacy/_index.md b/content/rancher/v2.x/en/logging/legacy/_index.md new file mode 100644 index 00000000000..5d4cd1420dc --- /dev/null +++ b/content/rancher/v2.x/en/logging/legacy/_index.md @@ -0,0 +1,7 @@ +--- +title: Legacy Logging Documentation +shortTitle: Legacy +weight: 1 +--- + +This section contains documentation for the logging features that were available in Rancher prior to v2.5. \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-admin/tools/logging/_index.md b/content/rancher/v2.x/en/logging/legacy/cluster-logging/_index.md similarity index 95% rename from content/rancher/v2.x/en/cluster-admin/tools/logging/_index.md rename to content/rancher/v2.x/en/logging/legacy/cluster-logging/_index.md index 07c80a651cf..2fae22bbe8b 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/logging/_index.md +++ b/content/rancher/v2.x/en/logging/legacy/cluster-logging/_index.md @@ -1,12 +1,15 @@ --- -title: Rancher Integration with Logging Services +title: Cluster Logging description: Rancher integrates with popular logging services. Learn the requirements and benefits of integrating with logging services, and enable logging on your cluster. metaDescription: "Rancher integrates with popular logging services. Learn the requirements and benefits of integrating with logging services, and enable logging on your cluster." weight: 3 aliases: - /rancher/v2.x/en/tasks/logging/ + - /rancher/v2.x/en/cluster-admin/tools/logging --- +> In Rancher 2.5, the logging application was improved. There are now two ways to enable logging. The older way is documented in this section, and the new application for logging is documented in the [dashboard section.]({{}}/rancher/v2.x/en/dashboard/logging) + Logging is helpful because it allows you to: - Capture and analyze the state of your cluster diff --git a/content/rancher/v2.x/en/cluster-admin/tools/logging/elasticsearch/_index.md b/content/rancher/v2.x/en/logging/legacy/cluster-logging/elasticsearch/_index.md similarity index 97% rename from content/rancher/v2.x/en/cluster-admin/tools/logging/elasticsearch/_index.md rename to content/rancher/v2.x/en/logging/legacy/cluster-logging/elasticsearch/_index.md index 1b50e42bad0..a27b5674187 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/logging/elasticsearch/_index.md +++ b/content/rancher/v2.x/en/logging/legacy/cluster-logging/elasticsearch/_index.md @@ -3,6 +3,7 @@ title: Elasticsearch weight: 200 aliases: - /rancher/v2.x/en/tools/logging/elasticsearch/ + - /rancher/v2.x/en/cluster-admin/tools/logging/elasticsearch --- If your organization uses [Elasticsearch](https://www.elastic.co/), either on premise or in the cloud, you can configure Rancher to send it Kubernetes logs. Afterwards, you can log into your Elasticsearch deployment to view logs. 
diff --git a/content/rancher/v2.x/en/cluster-admin/tools/logging/fluentd/_index.md b/content/rancher/v2.x/en/logging/legacy/cluster-logging/fluentd/_index.md similarity index 96% rename from content/rancher/v2.x/en/cluster-admin/tools/logging/fluentd/_index.md rename to content/rancher/v2.x/en/logging/legacy/cluster-logging/fluentd/_index.md index 42f54794862..111cb33e007 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/logging/fluentd/_index.md +++ b/content/rancher/v2.x/en/logging/legacy/cluster-logging/fluentd/_index.md @@ -1,6 +1,8 @@ --- title: Fluentd weight: 600 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/logging/fluentd --- If your organization uses [Fluentd](https://www.fluentd.org/), you can configure Rancher to send it Kubernetes logs. Afterwards, you can log into your Fluentd server to view logs. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/logging/kafka/_index.md b/content/rancher/v2.x/en/logging/legacy/cluster-logging/kafka/_index.md similarity index 97% rename from content/rancher/v2.x/en/cluster-admin/tools/logging/kafka/_index.md rename to content/rancher/v2.x/en/logging/legacy/cluster-logging/kafka/_index.md index bb2b5ac2344..06ef0b12b2b 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/logging/kafka/_index.md +++ b/content/rancher/v2.x/en/logging/legacy/cluster-logging/kafka/_index.md @@ -3,6 +3,7 @@ title: Kafka weight: 400 aliases: - /rancher/v2.x/en/tools/logging/kafka/ + - /rancher/v2.x/en/cluster-admin/tools/logging/kafka --- If your organization uses [Kafka](https://kafka.apache.org/), you can configure Rancher to send it Kubernetes logs. Afterwards, you can log into your Kafka server to view logs. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/logging/splunk/_index.md b/content/rancher/v2.x/en/logging/legacy/cluster-logging/splunk/_index.md similarity index 100% rename from content/rancher/v2.x/en/cluster-admin/tools/logging/splunk/_index.md rename to content/rancher/v2.x/en/logging/legacy/cluster-logging/splunk/_index.md diff --git a/content/rancher/v2.x/en/cluster-admin/tools/logging/syslog/_index.md b/content/rancher/v2.x/en/logging/legacy/cluster-logging/syslog/_index.md similarity index 100% rename from content/rancher/v2.x/en/cluster-admin/tools/logging/syslog/_index.md rename to content/rancher/v2.x/en/logging/legacy/cluster-logging/syslog/_index.md diff --git a/content/rancher/v2.x/en/project-admin/tools/logging/_index.md b/content/rancher/v2.x/en/logging/legacy/project-logging/_index.md similarity index 98% rename from content/rancher/v2.x/en/project-admin/tools/logging/_index.md rename to content/rancher/v2.x/en/logging/legacy/project-logging/_index.md index 8c60ddf64eb..6099cbdd09f 100644 --- a/content/rancher/v2.x/en/project-admin/tools/logging/_index.md +++ b/content/rancher/v2.x/en/logging/legacy/project-logging/_index.md @@ -1,6 +1,8 @@ --- -title: Logging +title: Project Logging weight: 2527 +aliases: + - /rancher/v2.x/en/project-admin/tools/logging --- Rancher can integrate with a variety of popular logging services and tools that exist outside of your Kubernetes clusters. 
diff --git a/content/rancher/v2.x/en/longhorn/_index.md b/content/rancher/v2.x/en/longhorn/_index.md new file mode 100644 index 00000000000..350e67e1068 --- /dev/null +++ b/content/rancher/v2.x/en/longhorn/_index.md @@ -0,0 +1,60 @@ +--- +title: Longhorn - Cloud native distributed block storage for Kubernetes +shortTitle: Longhorn Storage +weight: 19 +--- + +[Longhorn](https://longhorn.io/) is a lightweight, reliable, and easy-to-use distributed block storage system for Kubernetes. + +Longhorn is free, open source software. Originally developed by Rancher Labs, it is now being developed as a sandbox project of the Cloud Native Computing Foundation. It can be installed on any Kubernetes cluster with Helm, with kubectl, or with the Rancher UI. + +With Longhorn, you can: + +- Use Longhorn volumes as persistent storage for the distributed stateful applications in your Kubernetes cluster +- Partition your block storage into Longhorn volumes so that you can use Kubernetes volumes with or without a cloud provider +- Replicate block storage across multiple nodes and data centers to increase availability +- Store backup data in external storage such as NFS or AWS S3 +- Create cross-cluster disaster recovery volumes so that data from a primary Kubernetes cluster can be quickly recovered from backup in a second Kubernetes cluster +- Schedule recurring snapshots of a volume, and schedule recurring backups to NFS or S3-compatible secondary storage +- Restore volumes from backup +- Upgrade Longhorn without disrupting persistent volumes + +### New in Rancher v2.5 + +Prior to Rancher v2.5, Longhorn could be installed as a Rancher catalog app. In Rancher v2.5, the catalog system was replaced by the **Apps & Marketplace,** and it became possible to install Longhorn as an app from that page. The **Cluster Explorer** now allows you to manipulate Longhorn's Kubernetes resources from the Rancher UI. So now you can control the Longhorn functionality with the Longhorn UI, or with kubectl, or by manipulating Longhorn's Kubernetes custom resources in the Rancher UI. + +These instructions assume you are using Rancher v2.5, but Longhorn can be installed with earlier Rancher versions. For documentation about installing Longhorn as a catalog app using the legacy Rancher UI, refer to the [Longhorn documentation.](https://longhorn.io/docs/1.0.2/deploy/install/install-with-rancher/) + +### Installing Longhorn with Rancher + +1. Go to the **Cluster Explorer** in the Rancher UI. +1. Click **Apps.** +1. Click `longhorn`. +1. Optional: To customize the initial settings, click **Longhorn Default Settings** and edit the configuration. For help customizing the settings, refer to the [Longhorn documentation.](https://longhorn.io/docs/1.0.2/references/settings/) +1. Click **Install.** + +**Result:** Longhorn is deployed in the Kubernetes cluster. + +### Accessing Longhorn from the Rancher UI + +1. From the **Cluster Explorer,** go to the top left dropdown menu and click **Cluster Explorer > Longhorn.** +1. On this page, you can edit Kubernetes resources managed by Longhorn. To view the Longhorn UI, click the **Longhorn** button in the **Overview** section. + +**Result:** You will be taken to the Longhorn UI, where you can manage your Longhorn volumes and their replicas in the Kubernetes cluster, as well as secondary backups of your Longhorn storage that may exist in another Kubernetes cluster or in S3. + +### Uninstalling Longhorn from the Rancher UI + +1. Click **Cluster Explorer > Apps & Marketplace.** +1. 
Click **Installed Apps.** +1. Go to the `longhorn-system` namespace and check the boxes next to the `longhorn` and `longhorn-crd` apps. +1. Click **Delete,** and confirm **Delete.** + +**Result:** Longhorn is uninstalled. + +### GitHub Repository + +The Longhorn project is available [here.](https://github.com/longhorn/longhorn) + +### Documentation + +The Longhorn documentation is [here.](https://longhorn.io/docs/) \ No newline at end of file diff --git a/content/rancher/v2.x/en/monitoring-alerting/2.5.x/rbac/_index.md b/content/rancher/v2.x/en/monitoring-alerting/2.5.x/rbac/_index.md new file mode 100644 index 00000000000..3301d45b9f4 --- /dev/null +++ b/content/rancher/v2.x/en/monitoring-alerting/2.5.x/rbac/_index.md @@ -0,0 +1,63 @@ +--- +title: RBAC +weight: 3 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/monitoring/rbac +--- + +This section describes the permissions required to access Monitoring features. + +The `rancher-monitoring` chart installs three `ClusterRoles`. + +# Cluster-Admin Access + +By default, only those with the cluster-admin `ClusterRole` can: + +- Install the `rancher-monitoring` app onto a cluster and perform all other relevant configuration on chart deploy + - e.g. whether default dashboards are created, what exporters are deployed onto the cluster to collect metrics, etc. +- Create / modify / delete Prometheus deployments in the cluster via Prometheus CRs +- Create / modify / delete Alertmanager deployments in the cluster via Alertmanager CRs +- Persist new Grafana dashboards or datasources via creating ConfigMaps in the appropriate namespace +- Expose certain Prometheus metrics to the k8s Custom Metrics API for HPA via a Secret in the `cattle-monitoring-system` namespace + +## Admin and Edit Access + +By default, only Admin and Edit roles can: + +- View the configuration of Prometheuses that are deployed within the cluster +- View the configuration of Alertmanagers that are deployed within the cluster +- Modify the scrape configuration of Prometheus deployments via ServiceMonitor and PodMonitor CRs +- Modify the alerting / recording rules of a Prometheus deployment via PrometheusRules CRs + +# Summary of Default Permissions for Kubernetes Default Roles + +Monitoring creates three `ClusterRoles` and adds Monitoring CRD access to the following default K8s `ClusterRoles`: + +| ClusterRole created by chart | Default K8s ClusterRole | +| ------------------------------| ---------------------------| +| `monitoring-admin` | `admin`| +| `monitoring-edit`| `edit` | +| `monitoring-view` | `view`| + +Rancher will continue to use cluster-owner, cluster-member, project-owner, project-member, etc. as role names, but will utilize default roles to determine access. For each default K8s `ClusterRole` there are different Monitoring CRD permissions and K8s actions (Create (C), Get (G), List (L), Watch (W), Update (U), Patch (P), Delete (D), All (*)) that can be performed. + + +|CRDs | Admin | Edit | View | +|----------------------------| ------| -----| -----| +|
`monitoring.coreos.com`: `prometheuses`, `alertmanagers` | GLW | GLW | GLW| +| `monitoring.coreos.com`: `servicemonitors`, `podmonitors`, `prometheusrules` 
| * | * | GLW| + +# Additional Roles + +Monitoring also creates six `Roles` to enable admins to assign more fine-grained access to monitoring within a cluster: + +| Role created by chart | Purpose | +| ------------------------------| ---------------------------| +monitoring-config-admin | Allow admins to assign roles to users to be able to view / modify Secrets and ConfigMaps within the cattle-monitoring-system namespace. Modifying Secrets / ConfigMaps in this namespace could allow users to alter the cluster's Alertmanager configuration, Prometheus Adapter configuration, additional Grafana datasources, TLS secrets, etc. | +monitoring-config-edit | Allow admins to assign roles to users to be able to view / modify Secrets and ConfigMaps within the cattle-monitoring-system namespace. Modifying Secrets / ConfigMaps in this namespace could allow users to alter the cluster's Alertmanager configuration, Prometheus Adapter configuration, additional Grafana datasources, TLS secrets, etc. | +monitoring-config-view | Allow admins to assign roles to users to be able to view Secrets and ConfigMaps within the cattle-monitoring-system namespace. Viewing Secrets / ConfigMaps in this namespace could allow users to observe the cluster's Alertmanager configuration, Prometheus Adapter configuration, additional Grafana datasources, TLS secrets, etc. | +monitoring-dashboard-admin | Allow admins to assign roles to users to be able to edit / view ConfigMaps within the cattle-dashboards namespace. ConfigMaps in this namespace will correspond to Grafana Dashboards that are persisted onto the cluster. | +monitoring-dashboard-edit | Allow admins to assign roles to users to be able to edit / view ConfigMaps within the cattle-dashboards namespace. ConfigMaps in this namespace will correspond to Grafana Dashboards that are persisted onto the cluster. | +monitoring-dashboard-view | Allow admins to assign roles to users to be able to view ConfigMaps within the cattle-dashboards namespace. ConfigMaps in this namespace will correspond to Grafana Dashboards that are persisted onto the cluster. | + +These Roles are not assigned by default but will be created in the cluster. \ No newline at end of file diff --git a/content/rancher/v2.x/en/monitoring-alerting/_index.md b/content/rancher/v2.x/en/monitoring-alerting/_index.md new file mode 100644 index 00000000000..c4889c05aa2 --- /dev/null +++ b/content/rancher/v2.x/en/monitoring-alerting/_index.md @@ -0,0 +1,98 @@ +--- +title: Monitoring and Alerting +shortTitle: Monitoring/Alerting +description: Prometheus lets you view metrics from your different Rancher and Kubernetes objects. Learn about the scope of monitoring and how to enable cluster monitoring +weight: 14 +--- + +Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with [Prometheus](https://prometheus.io/), a leading open-source monitoring solution. + +This page describes how to enable monitoring for a cluster. 
+ +This section covers the following topics: + +- [Changes in Rancher v2.5](#changes-in-rancher-v2-5) +- [About Prometheus](#about-prometheus) +- [Monitoring scope](#monitoring-scope) +- [Enabling cluster monitoring](#enabling-cluster-monitoring) +- [Configuration](#configuration) +- [Examples](#examples) + - [Create ServiceMonitor Custom Resource](#create-servicemonitor-custom-resource) + - [PodMonitor](#podmonitor) + - [PrometheusRule](#prometheusrule) + - [Alertmanager Config](#alertmanager-config) + - [Configuring a Persistent Grafana Dashboard](#configuring-a-persistent-grafana-dashboard) + - [Configuring Grafana to Use Multiple Data Sources](#configuring-grafana-to-use-multiple-data-sources) + + +# Changes in Rancher v2.5 + +Rancher's monitoring application is powered by the Prometheus operator, and it now relies less on Rancher's in-house monitoring tools. + +This change allows Rancher to automatically support new features of the Prometheus operator API. Now all of the features exposed by the upstream Prometheus operator are available in the monitoring application, and you have more flexibility to configure monitoring. + +Previously, you would use the Rancher UI to configure monitoring. The Rancher UI created CRDs that were maintained by Rancher and updated the Prometheus state. In Rancher v2.5, you directly create CRDs for the monitoring application, and those CRDs are exposed in the Rancher UI. + +The differences between Rancher's monitoring feature and the upstream Prometheus operator can be found in the [changelog.](https://github.com/rancher/charts/blob/dev-v2.5/packages/rancher-monitoring/overlay/CHANGELOG.md) + +# About Prometheus + +Prometheus provides a _time series_ of your data, which is, according to [Prometheus documentation](https://prometheus.io/docs/concepts/data_model/): + +>A stream of timestamped values belonging to the same metric and the same set of labeled dimensions, along with comprehensive statistics and metrics of the monitored cluster. + +In other words, Prometheus lets you view metrics from your different Rancher and Kubernetes objects. Using timestamps, Prometheus lets you query and view these metrics in easy-to-read graphs and visuals, either through the Rancher UI or [Grafana](https://grafana.com/), which is an analytics viewing platform deployed along with Prometheus. + +By viewing data that Prometheus scrapes from your cluster control plane, nodes, and deployments, you can stay on top of everything happening in your cluster. You can then use these analytics to better run your organization: stop system emergencies before they start, develop maintenance strategies, restore crashed servers, etc. + +# Monitoring Scope + +Cluster monitoring allows you to view the health of your Kubernetes cluster. Prometheus collects metrics from the cluster components below, which you can view in graphs and charts. + +- [Kubernetes control plane]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#kubernetes-components-metrics) +- [etcd database]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#etcd-metrics) +- [All nodes (including workers)]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#cluster-metrics) + +# Enabling Cluster Monitoring + +As an [administrator]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to deploy Prometheus to monitor your Kubernetes cluster. 
+ +> **Prerequisite:** Make sure that you are allowing traffic on port 9796 for each of your nodes, because Prometheus will scrape metrics from this port. + +> The default username and password for the Grafana instance will be `admin/admin`. However, Grafana dashboards are served via the Rancher authentication proxy, so only users who are currently authenticated into the Rancher server have access to the Grafana dashboard. + +# Configuration + +For information on configuring custom Prometheus metrics and alerting rules, refer to the upstream documentation for the [Prometheus operator.](https://github.com/prometheus-operator/prometheus-operator) This documentation can help you set up RBAC, Thanos, or custom configuration. + +To create an additional scrape configuration, refer to [this page.](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/additional-scrape-config.md) + +# Examples + +### Create ServiceMonitor Custom Resource + +An example ServiceMonitor custom resource can be found [here.](https://github.com/prometheus-operator/prometheus-operator/blob/master/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) + +### PodMonitor + +An example PodMonitor can be found [here,](https://github.com/prometheus-operator/prometheus-operator/blob/master/example/user-guides/getting-started/example-app-pod-monitor.yaml) and an example Prometheus resource that refers to it can be found [here.](https://github.com/prometheus-operator/prometheus-operator/blob/master/example/user-guides/getting-started/prometheus-pod-monitor.yaml) + +### PrometheusRule + +Prometheus rule files are held in PrometheusRule custom resources. Use the label selector field `ruleSelector` in the Prometheus object to define the rule files that you want to be mounted into Prometheus. An example PrometheusRule is on [this page.](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/alerting.md) + +### Alertmanager Config + +The Prometheus Operator introduces an Alertmanager resource, which allows users to declaratively describe an Alertmanager cluster. + +The upstream Prometheus documentation includes information on how to [set up](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/alerting.md) and [configure](https://prometheus.io/docs/alerting/latest/configuration/) Alertmanager. + +### Configuring a Persistent Grafana Dashboard + +To allow a Grafana dashboard to persist after Grafana restarts, you will need to add the configuration JSON into a ConfigMap. + +You can add this configuration to the ConfigMap using the Rancher UI. + +### Configuring Grafana to Use Multiple Data Sources + +The data from Prometheus is used as the data source for the Grafana dashboard. Multiple data sources can be configured for Grafana. \ No newline at end of file diff --git a/content/rancher/v2.x/en/monitoring-alerting/legacy/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/_index.md new file mode 100644 index 00000000000..fbe8842f68c --- /dev/null +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/_index.md @@ -0,0 +1,7 @@ +--- +title: Legacy Monitoring/Alerting Documentation +shortTitle: Legacy +weight: 1 +--- + +This section contains documentation related to the monitoring features available in Rancher prior to v2.5. 
\ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-admin/tools/alerts/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/cluster-alerts/_index.md similarity index 97% rename from content/rancher/v2.x/en/cluster-admin/tools/alerts/_index.md rename to content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/cluster-alerts/_index.md index 1b0b9685295..98bdc97bdfb 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/alerts/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/cluster-alerts/_index.md @@ -1,8 +1,13 @@ --- -title: Alerts +title: Cluster Alerts weight: 2 +aliases: + - rancher/v2.x/en/cluster-admin/tools/alerts --- + +> In Rancher 2.5, the monitoring application was improved. There are now two ways to enable monitoring and alerting. The older way is documented in this section, and the new application for monitoring and alerting is documented in the [dashboard section.]({{}}/rancher/v2.x/en/dashboard/monitoring-alerting) + To keep your clusters and applications healthy and driving your organizational productivity forward, you need to stay informed of events occurring in your clusters and projects, both planned and unplanned. When an event occurs, your alert is triggered, and you are sent a notification. You can then, if necessary, follow up with corrective actions. Notifiers and alerts are built on top of the [Prometheus Alertmanager](https://prometheus.io/docs/alerting/alertmanager/). Leveraging these tools, Rancher can notify [cluster owners]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) and [project owners]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) of events they need to address. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/alerts/default-alerts/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/cluster-alerts/default-alerts/_index.md similarity index 98% rename from content/rancher/v2.x/en/cluster-admin/tools/alerts/default-alerts/_index.md rename to content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/cluster-alerts/default-alerts/_index.md index ea7f91ff0e0..81feba5aeb0 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/alerts/default-alerts/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/cluster-alerts/default-alerts/_index.md @@ -1,6 +1,8 @@ --- title: Default Alerts for Cluster Monitoring weight: 1 +aliases: + - rancher/v2.x/en/cluster-admin/tools/alerts/default-alerts --- When you create a cluster, some alert rules are predefined. These alerts notify you about signs that the cluster could be unhealthy. You can receive these alerts if you configure a [notifier]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers) for them. 
diff --git a/content/rancher/v2.x/en/project-admin/tools/alerts/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/project-alerts/_index.md similarity index 99% rename from content/rancher/v2.x/en/project-admin/tools/alerts/_index.md rename to content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/project-alerts/_index.md index 786722a3827..a9386817c72 100644 --- a/content/rancher/v2.x/en/project-admin/tools/alerts/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/project-alerts/_index.md @@ -1,6 +1,8 @@ --- -title: Alerts +title: Project Alerts weight: 2526 +aliases: + - rancher/v2.x/en/project-admin/tools/alerts --- To keep your clusters and applications healthy and driving your organizational productivity forward, you need to stay informed of events occurring in your clusters and projects, both planned and unplanned. When an event occurs, your alert is triggered, and you are sent a notification. You can then, if necessary, follow up with corrective actions. diff --git a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/_index.md new file mode 100644 index 00000000000..1cb62c544bf --- /dev/null +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/_index.md @@ -0,0 +1,4 @@ +--- +title: Monitoring +weight: 1 +--- \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/_index.md similarity index 95% rename from content/rancher/v2.x/en/cluster-admin/tools/monitoring/_index.md rename to content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/_index.md index 7620e43b643..b52ec1fe810 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/_index.md @@ -1,11 +1,16 @@ --- title: Integrating Rancher and Prometheus for Cluster Monitoring +shortTitle: Cluster Monitoring description: Prometheus lets you view metrics from your different Rancher and Kubernetes objects. Learn about the scope of monitoring and how to enable cluster monitoring weight: 4 +aliases: + - rancher/v2.x/en/project-admin/tools/monitoring --- _Available as of v2.2.0_ +> In Rancher 2.5, the monitoring application was improved. There are now two ways to enable monitoring and alerting. The older way is documented in this section, and the new application for monitoring and alerting is documented in the [dashboard section.]({{}}/rancher/v2.x/en/dashboard/monitoring-alerting) + Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with [Prometheus](https://prometheus.io/), a leading open-source monitoring solution. 
This section covers the following topics: diff --git a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/_index.md similarity index 98% rename from content/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/_index.md rename to content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/_index.md index 61c20f040c0..1f117ec5f65 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/_index.md @@ -1,6 +1,8 @@ --- title: Cluster Metrics weight: 3 +aliases: + - rancher/v2.x/en/project-admin/tools/monitoring/cluster-metrics --- _Available as of v2.2.0_ diff --git a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/custom-metrics/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/custom-metrics/_index.md new file mode 100644 index 00000000000..4ae3893e895 --- /dev/null +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/custom-metrics/_index.md @@ -0,0 +1,491 @@ +--- +title: Prometheus Custom Metrics Adapter +weight: 5 +aliases: + - rancher/v2.x/en/project-admin/tools/monitoring/custom-metrics +--- + +After you've enabled [cluster level monitoring]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring), you can view the metrics data from Rancher. You can also deploy the Prometheus custom metrics adapter, and then you can use an HPA with metrics stored in cluster monitoring. + +## Deploy Prometheus Custom Metrics Adapter + +We are going to use the [Prometheus custom metrics adapter](https://github.com/DirectXMan12/k8s-prometheus-adapter/releases/tag/v0.5.0), version v0.5.0. This is a great example of a [custom metrics server](https://github.com/kubernetes-incubator/custom-metrics-apiserver). You must be the *cluster owner* to execute the following steps. + +- Get the service account that cluster monitoring is using. It should be configured in the workload ID: `statefulset:cattle-prometheus:prometheus-cluster-monitoring`. If you didn't customize anything, the service account name should be `cluster-monitoring`. + +- Grant permissions to that service account. You will need two kinds of permissions. +One role is `extension-apiserver-authentication-reader` in `kube-system`, so you will need to create a `RoleBinding` in `kube-system`. This permission is used to get the API aggregation configuration from a ConfigMap in `kube-system`. + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: custom-metrics-auth-reader + namespace: kube-system +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: extension-apiserver-authentication-reader +subjects: +- kind: ServiceAccount + name: cluster-monitoring + namespace: cattle-prometheus +``` + +The other one is the cluster role `system:auth-delegator`, so you will need to create a `ClusterRoleBinding`. This permission is used to perform subject access reviews.
+ +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: custom-metrics:system:auth-delegator +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: system:auth-delegator +subjects: +- kind: ServiceAccount + name: cluster-monitoring + namespace: cattle-prometheus +``` + +- Create the configuration for the custom metrics adapter. The following is an example configuration. More configuration details are provided in the next section. + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: adapter-config + namespace: cattle-prometheus +data: + config.yaml: | + rules: + - seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}' + seriesFilters: [] + resources: + overrides: + namespace: + resource: namespace + pod_name: + resource: pod + name: + matches: ^container_(.*)_seconds_total$ + as: "" + metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[1m])) by (<<.GroupBy>>) + - seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}' + seriesFilters: + - isNot: ^container_.*_seconds_total$ + resources: + overrides: + namespace: + resource: namespace + pod_name: + resource: pod + name: + matches: ^container_(.*)_total$ + as: "" + metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[1m])) by (<<.GroupBy>>) + - seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}' + seriesFilters: + - isNot: ^container_.*_total$ + resources: + overrides: + namespace: + resource: namespace + pod_name: + resource: pod + name: + matches: ^container_(.*)$ + as: "" + metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}) by (<<.GroupBy>>) + - seriesQuery: '{namespace!="",__name__!~"^container_.*"}' + seriesFilters: + - isNot: .*_total$ + resources: + template: <<.Resource>> + name: + matches: "" + as: "" + metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>) + - seriesQuery: '{namespace!="",__name__!~"^container_.*"}' + seriesFilters: + - isNot: .*_seconds_total + resources: + template: <<.Resource>> + name: + matches: ^(.*)_total$ + as: "" + metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>) + - seriesQuery: '{namespace!="",__name__!~"^container_.*"}' + seriesFilters: [] + resources: + template: <<.Resource>> + name: + matches: ^(.*)_seconds_total$ + as: "" + metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>) + resourceRules: + cpu: + containerQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>) + nodeQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>, id='/'}[1m])) by (<<.GroupBy>>) + resources: + overrides: + instance: + resource: node + namespace: + resource: namespace + pod_name: + resource: pod + containerLabel: container_name + memory: + containerQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>}) by (<<.GroupBy>>) + nodeQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>,id='/'}) by (<<.GroupBy>>) + resources: + overrides: + instance: + resource: node + namespace: + resource: namespace + pod_name: + resource: pod + containerLabel: container_name + window: 1m +``` + +- Create HTTPS TLS certs for your API server. You can use the following command to create a self-signed cert. 
+ +```bash +openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out serving.crt -keyout serving.key -subj "/C=CN/CN=custom-metrics-apiserver.cattle-prometheus.svc.cluster.local" +# You will find serving.crt and serving.key in your working directory. Then create a secret in the cattle-prometheus namespace: +kubectl create secret generic -n cattle-prometheus cm-adapter-serving-certs --from-file=serving.key=./serving.key --from-file=serving.crt=./serving.crt +``` + +- Then you can create the Prometheus custom metrics adapter. You will also need a service for this deployment. You can create both via **Import YAML** in the Rancher UI. Create these resources in the `cattle-prometheus` namespace. + +Here is the Prometheus custom metrics adapter deployment. +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: custom-metrics-apiserver + name: custom-metrics-apiserver + namespace: cattle-prometheus +spec: + replicas: 1 + selector: + matchLabels: + app: custom-metrics-apiserver + template: + metadata: + labels: + app: custom-metrics-apiserver + name: custom-metrics-apiserver + spec: + serviceAccountName: cluster-monitoring + containers: + - name: custom-metrics-apiserver + image: directxman12/k8s-prometheus-adapter-amd64:v0.5.0 + args: + - --secure-port=6443 + - --tls-cert-file=/var/run/serving-cert/serving.crt + - --tls-private-key-file=/var/run/serving-cert/serving.key + - --logtostderr=true + - --prometheus-url=http://prometheus-operated/ + - --metrics-relist-interval=1m + - --v=10 + - --config=/etc/adapter/config.yaml + ports: + - containerPort: 6443 + volumeMounts: + - mountPath: /var/run/serving-cert + name: volume-serving-cert + readOnly: true + - mountPath: /etc/adapter/ + name: config + readOnly: true + - mountPath: /tmp + name: tmp-vol + volumes: + - name: volume-serving-cert + secret: + secretName: cm-adapter-serving-certs + - name: config + configMap: + name: adapter-config + - name: tmp-vol + emptyDir: {} + +``` + +Here is the service for the deployment. +```yaml +apiVersion: v1 +kind: Service +metadata: + name: custom-metrics-apiserver + namespace: cattle-prometheus +spec: + ports: + - port: 443 + targetPort: 6443 + selector: + app: custom-metrics-apiserver +``` + +- Create the API service for your custom metrics server. + +```yaml +apiVersion: apiregistration.k8s.io/v1beta1 +kind: APIService +metadata: + name: v1beta1.custom.metrics.k8s.io +spec: + service: + name: custom-metrics-apiserver + namespace: cattle-prometheus + group: custom.metrics.k8s.io + version: v1beta1 + insecureSkipTLSVerify: true + groupPriorityMinimum: 100 + versionPriority: 100 + +``` + +- Then you can verify your custom metrics server with `kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1`. If you see data returned from the API, the metrics server has been set up successfully. + +- You can create an HPA with custom metrics now. Here is an example HPA. You will need to create an nginx deployment in your namespace first. 
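
As a minimal example, the deployment below would work as a scale target (a sketch; the image version and labels are arbitrary, but the name must match the HPA's `scaleTargetRef` below):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19
```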
+ +```yaml +kind: HorizontalPodAutoscaler +apiVersion: autoscaling/v2beta1 +metadata: + name: nginx +spec: + scaleTargetRef: + # point the HPA at the nginx deployment you just created + apiVersion: apps/v1 + kind: Deployment + name: nginx + # autoscale between 1 and 10 replicas + minReplicas: 1 + maxReplicas: 10 + metrics: + # use a "Pods" metric, which takes the average of the + # given metric across all pods controlled by the autoscaling target + - type: Pods + pods: + metricName: memory_usage_bytes + targetAverageValue: 5000000 +``` + +You should then see your nginx deployment scaling up: the HPA with custom metrics works. + +## Configuration of the Prometheus custom metrics adapter + +> Refer to https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/config.md + +The adapter determines which metrics to expose, and how to expose them, +through a set of "discovery" rules. Each rule is executed independently +(so make sure that your rules are mutually exclusive), and specifies each +of the steps the adapter needs to take to expose a metric in the API. + +Each rule can be broken down into roughly four parts: + +- *Discovery*, which specifies how the adapter should find all Prometheus + metrics for this rule. + +- *Association*, which specifies how the adapter should determine which + Kubernetes resources a particular metric is associated with. + +- *Naming*, which specifies how the adapter should expose the metric in + the custom metrics API. + +- *Querying*, which specifies how a request for a particular metric on one + or more Kubernetes objects should be turned into a query to Prometheus. + +A more comprehensive configuration file can be found in +[sample-config.yaml](sample-config.yaml), but a basic config with one rule +might look like: + +```yaml +rules: +# this rule matches cumulative cAdvisor metrics measured in seconds +- seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}' + resources: + # skip specifying generic resource<->label mappings, and just + # attach only pod and namespace resources by mapping label names to group-resources + overrides: + namespace: {resource: "namespace"} + pod_name: {resource: "pod"} + # specify that the `container_` and `_seconds_total` suffixes should be removed. + # this also introduces an implicit filter on metric family names + name: + # we use the value of the capture group implicitly as the API name + # we could also explicitly write `as: "$1"` + matches: "^container_(.*)_seconds_total$" + # specify how to construct a query to fetch samples for a given series + # This is a Go template where the `.Series` and `.LabelMatchers` string values + # are available, and the delimiters are `<<` and `>>` to avoid conflicts with + # the Prometheus query language + metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[2m])) by (<<.GroupBy>>)' +``` + +### Discovery + +Discovery governs the process of finding the metrics that you want to +expose in the custom metrics API. There are two fields that factor into +discovery: `seriesQuery` and `seriesFilters`. + +`seriesQuery` specifies a Prometheus series query (as passed to the +`/api/v1/series` endpoint in Prometheus) to use to find some set of +Prometheus series. The adapter will strip the label values from this +series, and then use the resulting metric-name-label-names combinations +later on. + +In many cases, `seriesQuery` will be sufficient to narrow down the list of +Prometheus series. 
However, sometimes (especially if two rules might +otherwise overlap), it's useful to do additional filtering on metric +names. In this case, `seriesFilters` can be used. After the list of +series is returned from `seriesQuery`, each series has its metric name +filtered through any specified filters. + +Filters may be either: + +- `is: <regex>`, which matches any series whose name matches the specified + regex. + +- `isNot: <regex>`, which matches any series whose name does not match the + specified regex. + +For example: + +```yaml +# match all cAdvisor metrics that aren't measured in seconds +seriesQuery: '{__name__=~"^container_.*_total",container_name!="POD",namespace!="",pod_name!=""}' +seriesFilters: + - isNot: "^container_.*_seconds_total" +``` + +### Association + +Association governs the process of figuring out which Kubernetes resources +a particular metric could be attached to. The `resources` field controls +this process. + +There are two ways to associate resources with a particular metric. In +both cases, the value of the label becomes the name of the particular +object. + +One way is to specify that any label name that matches some particular +pattern refers to some group-resource based on the label name. This can +be done using the `template` field. The pattern is specified as a Go +template, with the `Group` and `Resource` fields representing group and +resource. You don't necessarily have to use the `Group` field (in which +case the group is guessed by the system). For instance: + +```yaml +# any label `kube_<group>_<resource>` becomes <group>.<resource> in Kubernetes +resources: + template: "kube_<<.Group>>_<<.Resource>>" +``` + +The other way is to specify that some particular label represents some +particular Kubernetes resource. This can be done using the `overrides` +field. Each override maps a Prometheus label to a Kubernetes +group-resource. For instance: + +```yaml +# the microservice label corresponds to the apps.deployment resource +resources: + overrides: + microservice: {group: "apps", resource: "deployment"} +``` + +These two can be combined, so you can specify both a template and some +individual overrides. + +The resources mentioned can be any resource available in your Kubernetes +cluster, as long as you've got a corresponding label. + +### Naming + +Naming governs the process of converting a Prometheus metric name into +a metric in the custom metrics API, and vice versa. It's controlled by +the `name` field. + +Naming is controlled by specifying a pattern to extract an API name from +a Prometheus name, and potentially a transformation on that extracted +value. + +The pattern is specified in the `matches` field, and is just a regular +expression. If not specified, it defaults to `.*`. + +The transformation is specified by the `as` field. You can use any +capture groups defined in the `matches` field. If the `matches` field +doesn't contain capture groups, the `as` field defaults to `$0`. If it +contains a single capture group, the `as` field defaults to `$1`. +Otherwise, it's an error not to specify the `as` field. + +For example: + +```yaml +# turn any name <name>_total into <name>_per_second +# e.g. http_requests_total becomes http_requests_per_second +name: + matches: "^(.*)_total$" + as: "${1}_per_second" +``` + +### Querying + +Querying governs the process of actually fetching values for a particular +metric. It's controlled by the `metricsQuery` field. + +The `metricsQuery` field is a Go template that gets turned into +a Prometheus query, using input from a particular call to the custom +metrics API. 
+
+### Naming
+
+Naming governs the process of converting a Prometheus metric name into
+a metric in the custom metrics API, and vice versa. It's controlled by
+the `name` field.
+
+Naming is controlled by specifying a pattern to extract an API name from
+a Prometheus name, and potentially a transformation on that extracted
+value.
+
+The pattern is specified in the `matches` field, and is just a regular
+expression. If not specified, it defaults to `.*`.
+
+The transformation is specified by the `as` field. You can use any
+capture groups defined in the `matches` field. If the `matches` field
+doesn't contain capture groups, the `as` field defaults to `$0`. If it
+contains a single capture group, the `as` field defaults to `$1`.
+Otherwise, it's an error not to specify the `as` field.
+
+For example:
+
+```yaml
+# turn any metric name ending in `_total` into one ending in `_per_second`,
+# e.g. http_requests_total becomes http_requests_per_second
+name:
+  matches: "^(.*)_total$"
+  as: "${1}_per_second"
+```
+
+### Querying
+
+Querying governs the process of actually fetching values for a particular
+metric. It's controlled by the `metricsQuery` field.
+
+The `metricsQuery` field is a Go template that gets turned into
+a Prometheus query, using input from a particular call to the custom
+metrics API. A given call to the custom metrics API is distilled down to
+a metric name, a group-resource, and one or more objects of that
+group-resource. These get turned into the following fields in the
+template:
+
+- `Series`: the metric name
+- `LabelMatchers`: a comma-separated list of label matchers matching the
+  given objects. Currently, this is the label for the particular
+  group-resource, plus the label for namespace, if the group-resource is
+  namespaced.
+- `GroupBy`: a comma-separated list of labels to group by. Currently,
+  this contains the group-resource label used in `LabelMatchers`.
+
+For instance, suppose we had a series `http_requests_total` (exposed as
+`http_requests_per_second` in the API) with labels `service`, `pod`,
+`ingress`, `namespace`, and `verb`. The first four correspond to
+Kubernetes resources. Then, if someone requested the metric
+`pods/http_requests_per_second` for the pods `pod1` and `pod2` in the
+`somens` namespace, we'd have:
+
+- `Series`: `http_requests_total`
+- `LabelMatchers`: `pod=~"pod1|pod2",namespace="somens"`
+- `GroupBy`: `pod`
+
+Additionally, there are two advanced fields that are "raw" forms of other
+fields:
+
+- `LabelValuesByName`: a map mapping the labels and values from the
+  `LabelMatchers` field. The values are pre-joined by `|`
+  (for use with the `=~` matcher in Prometheus).
+- `GroupBySlice`: the slice form of `GroupBy`.
+
+In general, you'll probably want to use the `Series`, `LabelMatchers`, and
+`GroupBy` fields. The other two are for advanced usage.
+
+The query is expected to return one value for each object requested. The
+adapter will use the labels on the returned series to associate a given
+series back to its corresponding object.
+
+For example:
+
+```yaml
+# convert cumulative cAdvisor metrics into rates calculated over 2 minutes
+metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!=\"POD\"}[2m])) by (<<.GroupBy>>)"
+```
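+Taking the cAdvisor-oriented query above together with the earlier `http_requests_total` example inputs (purely for illustration), that template would render to a PromQL query along these lines:
+
+```
+sum(rate(http_requests_total{pod=~"pod1|pod2",namespace="somens",container_name!="POD"}[2m])) by (pod)
+```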
diff --git a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/expression/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/expression/_index.md
new file mode 100644
index 00000000000..ad6981f3a66
--- /dev/null
+++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/expression/_index.md
@@ -0,0 +1,432 @@
+---
+title: Prometheus Expressions
+weight: 4
+aliases:
+  - rancher/v2.x/en/project-admin/tools/monitoring/expression
+---
+
+The PromQL expressions in this doc can be used to configure [alerts.]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/)
+
+> Before expressions can be used in alerts, monitoring must be enabled. For more information, refer to the documentation on enabling monitoring [at the cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) or [at the project level.]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring)
+
+For more information about querying Prometheus, refer to the official [Prometheus documentation.](https://prometheus.io/docs/prometheus/latest/querying/basics/)
+
+- [Cluster Metrics](#cluster-metrics)
+  - [Cluster CPU Utilization](#cluster-cpu-utilization)
+  - [Cluster Load Average](#cluster-load-average)
+  - [Cluster Memory Utilization](#cluster-memory-utilization)
+  - [Cluster Disk Utilization](#cluster-disk-utilization)
+  - [Cluster Disk I/O](#cluster-disk-i-o)
+  - [Cluster Network Packets](#cluster-network-packets)
+  - [Cluster Network I/O](#cluster-network-i-o)
+- [Node Metrics](#node-metrics)
+  - [Node CPU Utilization](#node-cpu-utilization)
+  - [Node Load Average](#node-load-average)
+  - [Node Memory Utilization](#node-memory-utilization)
+  - [Node Disk Utilization](#node-disk-utilization)
+  - [Node Disk I/O](#node-disk-i-o)
+  - [Node Network Packets](#node-network-packets)
+  - [Node Network I/O](#node-network-i-o)
+- [Etcd Metrics](#etcd-metrics)
+  - [Etcd Has a Leader](#etcd-has-a-leader)
+  - [Number of Times the Leader Changes](#number-of-times-the-leader-changes)
+  - [Number of Failed Proposals](#number-of-failed-proposals)
+  - [GRPC Client Traffic](#grpc-client-traffic)
+  - [Peer Traffic](#peer-traffic)
+  - [DB Size](#db-size)
+  - [Active Streams](#active-streams)
+  - [Raft Proposals](#raft-proposals)
+  - [RPC Rate](#rpc-rate)
+  - [Disk Operations](#disk-operations)
+  - [Disk Sync Duration](#disk-sync-duration)
+- [Kubernetes Components Metrics](#kubernetes-components-metrics)
+  - [API Server Request Latency](#api-server-request-latency)
+  - [API Server Request Rate](#api-server-request-rate)
+  - [Scheduling Failed Pods](#scheduling-failed-pods)
+  - [Controller Manager Queue Depth](#controller-manager-queue-depth)
+  - [Scheduler E2E Scheduling Latency](#scheduler-e2e-scheduling-latency)
+  - [Scheduler Preemption Attempts](#scheduler-preemption-attempts)
+  - [Ingress Controller Connections](#ingress-controller-connections)
+  - [Ingress Controller Request Process Time](#ingress-controller-request-process-time)
+- [Rancher Logging Metrics](#rancher-logging-metrics)
+  - [Fluentd Buffer Queue Rate](#fluentd-buffer-queue-rate)
+  - [Fluentd Input Rate](#fluentd-input-rate)
+  - [Fluentd Output Errors Rate](#fluentd-output-errors-rate)
+  - [Fluentd Output Rate](#fluentd-output-rate)
+- [Workload Metrics](#workload-metrics)
+  - [Workload CPU Utilization](#workload-cpu-utilization)
+  - [Workload Memory Utilization](#workload-memory-utilization)
+  - [Workload Network Packets](#workload-network-packets)
+  - [Workload Network I/O](#workload-network-i-o)
+  - [Workload Disk I/O](#workload-disk-i-o)
+- [Pod Metrics](#pod-metrics)
+  - [Pod CPU Utilization](#pod-cpu-utilization)
+  - [Pod Memory Utilization](#pod-memory-utilization)
+  - [Pod Network Packets](#pod-network-packets)
+  - [Pod Network I/O](#pod-network-i-o)
+  - [Pod Disk I/O](#pod-disk-i-o)
+- [Container Metrics](#container-metrics)
+  - [Container CPU Utilization](#container-cpu-utilization)
+  - [Container Memory Utilization](#container-memory-utilization)
+  - [Container Disk I/O](#container-disk-i-o)
+
+# Cluster Metrics
+
+### Cluster CPU Utilization
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `1 - (avg(irate(node_cpu_seconds_total{mode="idle"}[5m])) by (instance))` |
+| Summary | `1 - (avg(irate(node_cpu_seconds_total{mode="idle"}[5m])))` |
+
+### Cluster Load Average
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | load1: `sum(node_load1) by (instance) / count(node_cpu_seconds_total{mode="system"}) by (instance)`<br>load5: `sum(node_load5) by (instance) / count(node_cpu_seconds_total{mode="system"}) by (instance)`<br>load15: `sum(node_load15) by (instance) / count(node_cpu_seconds_total{mode="system"}) by (instance)` |
+| Summary | load1: `sum(node_load1) by (instance) / count(node_cpu_seconds_total{mode="system"})`<br>load5: `sum(node_load5) by (instance) / count(node_cpu_seconds_total{mode="system"})`<br>load15: `sum(node_load15) by (instance) / count(node_cpu_seconds_total{mode="system"})` |
+
+### Cluster Memory Utilization
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `1 - sum(node_memory_MemAvailable_bytes) by (instance) / sum(node_memory_MemTotal_bytes) by (instance)` |
+| Summary | `1 - sum(node_memory_MemAvailable_bytes) / sum(node_memory_MemTotal_bytes)` |
+
+### Cluster Disk Utilization
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `(sum(node_filesystem_size_bytes{device!="rootfs"}) by (instance) - sum(node_filesystem_free_bytes{device!="rootfs"}) by (instance)) / sum(node_filesystem_size_bytes{device!="rootfs"}) by (instance)` |
+| Summary | `(sum(node_filesystem_size_bytes{device!="rootfs"}) - sum(node_filesystem_free_bytes{device!="rootfs"})) / sum(node_filesystem_size_bytes{device!="rootfs"})` |
+
+### Cluster Disk I/O
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | read: `sum(rate(node_disk_read_bytes_total[5m])) by (instance)`<br>written: `sum(rate(node_disk_written_bytes_total[5m])) by (instance)` |
+| Summary | read: `sum(rate(node_disk_read_bytes_total[5m]))`<br>written: `sum(rate(node_disk_written_bytes_total[5m]))` |
+
+### Cluster Network Packets
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | receive-dropped: `sum(rate(node_network_receive_drop_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m])) by (instance)`<br>receive-errs: `sum(rate(node_network_receive_errs_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m])) by (instance)`<br>receive-packets: `sum(rate(node_network_receive_packets_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m])) by (instance)`<br>transmit-dropped: `sum(rate(node_network_transmit_drop_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m])) by (instance)`<br>transmit-errs: `sum(rate(node_network_transmit_errs_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m])) by (instance)`<br>transmit-packets: `sum(rate(node_network_transmit_packets_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m])) by (instance)` |
+| Summary | receive-dropped: `sum(rate(node_network_receive_drop_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m]))`<br>receive-errs: `sum(rate(node_network_receive_errs_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m]))`<br>receive-packets: `sum(rate(node_network_receive_packets_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m]))`<br>transmit-dropped: `sum(rate(node_network_transmit_drop_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m]))`<br>transmit-errs: `sum(rate(node_network_transmit_errs_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m]))`<br>transmit-packets: `sum(rate(node_network_transmit_packets_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m]))` |
+
+### Cluster Network I/O
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | receive: `sum(rate(node_network_receive_bytes_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m])) by (instance)`<br>transmit: `sum(rate(node_network_transmit_bytes_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m])) by (instance)` |
+| Summary | receive: `sum(rate(node_network_receive_bytes_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m]))`<br>transmit: `sum(rate(node_network_transmit_bytes_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*"}[5m]))` |
+
+# Node Metrics
+
+### Node CPU Utilization
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `avg(irate(node_cpu_seconds_total{mode!="idle", instance=~"$instance"}[5m])) by (mode)` |
+| Summary | `1 - (avg(irate(node_cpu_seconds_total{mode="idle", instance=~"$instance"}[5m])))` |
+
+### Node Load Average
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | load1: `sum(node_load1{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`<br>load5: `sum(node_load5{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`<br>load15: `sum(node_load15{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})` |
+| Summary | load1: `sum(node_load1{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`<br>load5: `sum(node_load5{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`<br>load15: `sum(node_load15{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})` |
+
+### Node Memory Utilization
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `1 - sum(node_memory_MemAvailable_bytes{instance=~"$instance"}) / sum(node_memory_MemTotal_bytes{instance=~"$instance"})` |
+| Summary | `1 - sum(node_memory_MemAvailable_bytes{instance=~"$instance"}) / sum(node_memory_MemTotal_bytes{instance=~"$instance"})` |
+
+### Node Disk Utilization
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `(sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"}) by (device) - sum(node_filesystem_free_bytes{device!="rootfs",instance=~"$instance"}) by (device)) / sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"}) by (device)` |
+| Summary | `(sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"}) - sum(node_filesystem_free_bytes{device!="rootfs",instance=~"$instance"})) / sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"})` |
+
+### Node Disk I/O
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | read: `sum(rate(node_disk_read_bytes_total{instance=~"$instance"}[5m]))`<br>written: `sum(rate(node_disk_written_bytes_total{instance=~"$instance"}[5m]))` |
+| Summary | read: `sum(rate(node_disk_read_bytes_total{instance=~"$instance"}[5m]))`<br>written: `sum(rate(node_disk_written_bytes_total{instance=~"$instance"}[5m]))` |
+
+### Node Network Packets
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | receive-dropped: `sum(rate(node_network_receive_drop_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m])) by (device)`<br>receive-errs: `sum(rate(node_network_receive_errs_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m])) by (device)`<br>receive-packets: `sum(rate(node_network_receive_packets_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m])) by (device)`<br>transmit-dropped: `sum(rate(node_network_transmit_drop_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m])) by (device)`<br>transmit-errs: `sum(rate(node_network_transmit_errs_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m])) by (device)`<br>transmit-packets: `sum(rate(node_network_transmit_packets_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m])) by (device)` |
+| Summary | receive-dropped: `sum(rate(node_network_receive_drop_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m]))`<br>receive-errs: `sum(rate(node_network_receive_errs_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m]))`<br>receive-packets: `sum(rate(node_network_receive_packets_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m]))`<br>transmit-dropped: `sum(rate(node_network_transmit_drop_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m]))`<br>transmit-errs: `sum(rate(node_network_transmit_errs_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m]))`<br>transmit-packets: `sum(rate(node_network_transmit_packets_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m]))` |
+
+### Node Network I/O
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | receive: `sum(rate(node_network_receive_bytes_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m])) by (device)`<br>transmit: `sum(rate(node_network_transmit_bytes_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m])) by (device)` |
+| Summary | receive: `sum(rate(node_network_receive_bytes_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m]))`<br>transmit: `sum(rate(node_network_transmit_bytes_total{device!~"lo\|veth.*\|docker.*\|flannel.*\|cali.*\|cbr.*",instance=~"$instance"}[5m]))` |
+
+# Etcd Metrics
+
+### Etcd Has a Leader
+
+`max(etcd_server_has_leader)`
+
+### Number of Times the Leader Changes
+
+`max(etcd_server_leader_changes_seen_total)`
+
+### Number of Failed Proposals
+
+`sum(etcd_server_proposals_failed_total)`
+
+### GRPC Client Traffic
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | in: `sum(rate(etcd_network_client_grpc_received_bytes_total[5m])) by (instance)`<br>out: `sum(rate(etcd_network_client_grpc_sent_bytes_total[5m])) by (instance)` |
+| Summary | in: `sum(rate(etcd_network_client_grpc_received_bytes_total[5m]))`<br>out: `sum(rate(etcd_network_client_grpc_sent_bytes_total[5m]))` |
+
+### Peer Traffic
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | in: `sum(rate(etcd_network_peer_received_bytes_total[5m])) by (instance)`<br>out: `sum(rate(etcd_network_peer_sent_bytes_total[5m])) by (instance)` |
+| Summary | in: `sum(rate(etcd_network_peer_received_bytes_total[5m]))`<br>out: `sum(rate(etcd_network_peer_sent_bytes_total[5m]))` |
+
+### DB Size
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `sum(etcd_debugging_mvcc_db_total_size_in_bytes) by (instance)` |
+| Summary | `sum(etcd_debugging_mvcc_db_total_size_in_bytes)` |
+
+### Active Streams
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | lease-watch: `sum(grpc_server_started_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"}) by (instance) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"}) by (instance)`<br>watch: `sum(grpc_server_started_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"}) by (instance) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"}) by (instance)` |
+| Summary | lease-watch: `sum(grpc_server_started_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"}) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"})`<br>watch: `sum(grpc_server_started_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"}) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"})` |
+
+### Raft Proposals
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | applied: `sum(increase(etcd_server_proposals_applied_total[5m])) by (instance)`<br>committed: `sum(increase(etcd_server_proposals_committed_total[5m])) by (instance)`<br>pending: `sum(increase(etcd_server_proposals_pending[5m])) by (instance)`<br>failed: `sum(increase(etcd_server_proposals_failed_total[5m])) by (instance)` |
+| Summary | applied: `sum(increase(etcd_server_proposals_applied_total[5m]))`<br>committed: `sum(increase(etcd_server_proposals_committed_total[5m]))`<br>pending: `sum(increase(etcd_server_proposals_pending[5m]))`<br>failed: `sum(increase(etcd_server_proposals_failed_total[5m]))` |
+
+### RPC Rate
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | total: `sum(rate(grpc_server_started_total{grpc_type="unary"}[5m])) by (instance)`<br>fail: `sum(rate(grpc_server_handled_total{grpc_type="unary",grpc_code!="OK"}[5m])) by (instance)` |
+| Summary | total: `sum(rate(grpc_server_started_total{grpc_type="unary"}[5m]))`<br>fail: `sum(rate(grpc_server_handled_total{grpc_type="unary",grpc_code!="OK"}[5m]))` |
+
+### Disk Operations
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | commit-called-by-backend: `sum(rate(etcd_disk_backend_commit_duration_seconds_sum[1m])) by (instance)`<br>fsync-called-by-wal: `sum(rate(etcd_disk_wal_fsync_duration_seconds_sum[1m])) by (instance)` |
+| Summary | commit-called-by-backend: `sum(rate(etcd_disk_backend_commit_duration_seconds_sum[1m]))`<br>fsync-called-by-wal: `sum(rate(etcd_disk_wal_fsync_duration_seconds_sum[1m]))` |
+
+### Disk Sync Duration
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | wal: `histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) by (instance, le))`<br>db: `histogram_quantile(0.99, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket[5m])) by (instance, le))` |
+| Summary | wal: `sum(histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) by (instance, le)))`<br>db: `sum(histogram_quantile(0.99, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket[5m])) by (instance, le)))` |
+
+# Kubernetes Components Metrics
+
+### API Server Request Latency
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `avg(apiserver_request_latencies_sum / apiserver_request_latencies_count) by (instance, verb) /1e+06` |
+| Summary | `avg(apiserver_request_latencies_sum / apiserver_request_latencies_count) by (instance) /1e+06` |
+
+### API Server Request Rate
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `sum(rate(apiserver_request_count[5m])) by (instance, code)` |
+| Summary | `sum(rate(apiserver_request_count[5m])) by (instance)` |
+
+### Scheduling Failed Pods
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `sum(kube_pod_status_scheduled{condition="false"})` |
+| Summary | `sum(kube_pod_status_scheduled{condition="false"})` |
+
+### Controller Manager Queue Depth
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | volumes: `sum(volumes_depth) by (instance)`<br>deployment: `sum(deployment_depth) by (instance)`<br>replicaset: `sum(replicaset_depth) by (instance)`<br>service: `sum(service_depth) by (instance)`<br>serviceaccount: `sum(serviceaccount_depth) by (instance)`<br>endpoint: `sum(endpoint_depth) by (instance)`<br>daemonset: `sum(daemonset_depth) by (instance)`<br>statefulset: `sum(statefulset_depth) by (instance)`<br>replicationmanager: `sum(replicationmanager_depth) by (instance)` |
+| Summary | volumes: `sum(volumes_depth)`<br>deployment: `sum(deployment_depth)`<br>replicaset: `sum(replicaset_depth)`<br>service: `sum(service_depth)`<br>serviceaccount: `sum(serviceaccount_depth)`<br>endpoint: `sum(endpoint_depth)`<br>daemonset: `sum(daemonset_depth)`<br>statefulset: `sum(statefulset_depth)`<br>replicationmanager: `sum(replicationmanager_depth)` |
+
+### Scheduler E2E Scheduling Latency
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `histogram_quantile(0.99, sum(scheduler_e2e_scheduling_latency_microseconds_bucket) by (le, instance)) / 1e+06` |
+| Summary | `sum(histogram_quantile(0.99, sum(scheduler_e2e_scheduling_latency_microseconds_bucket) by (le, instance)) / 1e+06)` |
+
+### Scheduler Preemption Attempts
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `sum(rate(scheduler_total_preemption_attempts[5m])) by (instance)` |
+| Summary | `sum(rate(scheduler_total_preemption_attempts[5m]))` |
+
+### Ingress Controller Connections
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | reading: `sum(nginx_ingress_controller_nginx_process_connections{state="reading"}) by (instance)`<br>waiting: `sum(nginx_ingress_controller_nginx_process_connections{state="waiting"}) by (instance)`<br>writing: `sum(nginx_ingress_controller_nginx_process_connections{state="writing"}) by (instance)`<br>accepted: `sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="accepted"}[5m]))) by (instance)`<br>active: `sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="active"}[5m]))) by (instance)`<br>handled: `sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="handled"}[5m]))) by (instance)` |
+| Summary | reading: `sum(nginx_ingress_controller_nginx_process_connections{state="reading"})`<br>waiting: `sum(nginx_ingress_controller_nginx_process_connections{state="waiting"})`<br>writing: `sum(nginx_ingress_controller_nginx_process_connections{state="writing"})`<br>accepted: `sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="accepted"}[5m])))`<br>active: `sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="active"}[5m])))`<br>handled: `sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="handled"}[5m])))` |
+
+### Ingress Controller Request Process Time
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `topk(10, histogram_quantile(0.95,sum by (le, host, path)(rate(nginx_ingress_controller_request_duration_seconds_bucket{host!="_"}[5m]))))` |
+| Summary | `topk(10, histogram_quantile(0.95,sum by (le, host)(rate(nginx_ingress_controller_request_duration_seconds_bucket{host!="_"}[5m]))))` |
+
+# Rancher Logging Metrics
+
+### Fluentd Buffer Queue Rate
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `sum(rate(fluentd_output_status_buffer_queue_length[5m])) by (instance)` |
+| Summary | `sum(rate(fluentd_output_status_buffer_queue_length[5m]))` |
+
+### Fluentd Input Rate
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `sum(rate(fluentd_input_status_num_records_total[5m])) by (instance)` |
+| Summary | `sum(rate(fluentd_input_status_num_records_total[5m]))` |
+
+### Fluentd Output Errors Rate
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `sum(rate(fluentd_output_status_num_errors[5m])) by (type)` |
+| Summary | `sum(rate(fluentd_output_status_num_errors[5m]))` |
+
+### Fluentd Output Rate
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `sum(rate(fluentd_output_status_num_records_total[5m])) by (instance)` |
+| Summary | `sum(rate(fluentd_output_status_num_records_total[5m]))` |
+
+# Workload Metrics
+
+### Workload CPU Utilization
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | cfs throttled seconds: `sum(rate(container_cpu_cfs_throttled_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`<br>user seconds: `sum(rate(container_cpu_user_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`<br>system seconds: `sum(rate(container_cpu_system_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`<br>usage seconds: `sum(rate(container_cpu_usage_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)` |
+| Summary | cfs throttled seconds: `sum(rate(container_cpu_cfs_throttled_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`<br>user seconds: `sum(rate(container_cpu_user_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`<br>system seconds: `sum(rate(container_cpu_system_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`<br>usage seconds: `sum(rate(container_cpu_usage_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))` |
+
+### Workload Memory Utilization
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `sum(container_memory_working_set_bytes{namespace="$namespace",pod_name=~"$podName", container_name!=""}) by (pod_name)` |
+| Summary | `sum(container_memory_working_set_bytes{namespace="$namespace",pod_name=~"$podName", container_name!=""})` |
+
+### Workload Network Packets
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | receive-packets: `sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`<br>receive-dropped: `sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`<br>receive-errors: `sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`<br>transmit-packets: `sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`<br>transmit-dropped: `sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`<br>transmit-errors: `sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)` |
+| Summary | receive-packets: `sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`<br>receive-dropped: `sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`<br>receive-errors: `sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`<br>transmit-packets: `sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`<br>transmit-dropped: `sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`<br>transmit-errors: `sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))` |
+
+### Workload Network I/O
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | receive: `sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`<br>transmit: `sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)` |
+| Summary | receive: `sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`<br>transmit: `sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))` |
+
+### Workload Disk I/O
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | read: `sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`<br>write: `sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)` |
+| Summary | read: `sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`<br>write: `sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))` |
+
+# Pod Metrics
+
+### Pod CPU Utilization
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | cfs throttled seconds: `sum(rate(container_cpu_cfs_throttled_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)`<br>usage seconds: `sum(rate(container_cpu_usage_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)`<br>system seconds: `sum(rate(container_cpu_system_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)`<br>user seconds: `sum(rate(container_cpu_user_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)` |
+| Summary | cfs throttled seconds: `sum(rate(container_cpu_cfs_throttled_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))`<br>usage seconds: `sum(rate(container_cpu_usage_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))`<br>system seconds: `sum(rate(container_cpu_system_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))`<br>user seconds: `sum(rate(container_cpu_user_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))` |
+
+### Pod Memory Utilization
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | `sum(container_memory_working_set_bytes{container_name!="POD",namespace="$namespace",pod_name="$podName",container_name!=""}) by (container_name)` |
+| Summary | `sum(container_memory_working_set_bytes{container_name!="POD",namespace="$namespace",pod_name="$podName",container_name!=""})` |
+
+### Pod Network Packets
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | receive-packets: `sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`<br>receive-dropped: `sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`<br>receive-errors: `sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`<br>transmit-packets: `sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`<br>transmit-dropped: `sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`<br>transmit-errors: `sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))` |
+| Summary | receive-packets: `sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`<br>receive-dropped: `sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`<br>receive-errors: `sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`<br>transmit-packets: `sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`<br>transmit-dropped: `sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`<br>transmit-errors: `sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))` |
+
+### Pod Network I/O
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | receive: `sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`<br>transmit: `sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))` |
+| Summary | receive: `sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`<br>transmit: `sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))` |
+
+### Pod Disk I/O
+
+| Catalog | Expression |
+| --- | --- |
+| Detail | read: `sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m])) by (container_name)`<br>write: `sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m])) by (container_name)` |
+| Summary | read: `sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`<br>write: `sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))` |
+
+# Container Metrics
+
+### Container CPU Utilization
+
+| Catalog | Expression |
+| --- | --- |
+| cfs throttled seconds | `sum(rate(container_cpu_cfs_throttled_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` |
+| usage seconds | `sum(rate(container_cpu_usage_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` |
+| system seconds | `sum(rate(container_cpu_system_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` |
+| user seconds | `sum(rate(container_cpu_user_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` |
+
+### Container Memory Utilization
+
+`sum(container_memory_working_set_bytes{namespace="$namespace",pod_name="$podName",container_name="$containerName"})`
+
+### Container Disk I/O
+
+| Catalog | Expression |
+| --- | --- |
+| read | `sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` |
+| write | `sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` |
diff --git a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/prometheus/_index.md
similarity index 98%
rename from content/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/_index.md
rename to content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/prometheus/_index.md
index 939bd5d732a..954b6550440 100644
--- a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/_index.md
+++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/prometheus/_index.md
@@ -1,6 +1,8 @@
 ---
 title: Prometheus Configuration
 weight: 1
+aliases:
+  - rancher/v2.x/en/project-admin/tools/monitoring/prometheus
 ---
 
 _Available as of v2.2.0_
diff --git a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/viewing-metrics/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/viewing-metrics/_index.md
similarity index 98%
rename from content/rancher/v2.x/en/cluster-admin/tools/monitoring/viewing-metrics/_index.md
rename to content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/viewing-metrics/_index.md
index a1dd3946219..af4c34d5fbd 100644
--- a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/viewing-metrics/_index.md
+++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/viewing-metrics/_index.md
@@ -1,6 +1,8 @@
 ---
 title: Viewing Metrics
 weight: 2
+aliases:
+  - rancher/v2.x/en/project-admin/tools/monitoring/viewing-metrics
 ---
 
 _Available as of v2.2.0_
diff --git a/content/rancher/v2.x/en/project-admin/tools/monitoring/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/project-monitoring/_index.md
similarity index 98%
rename from content/rancher/v2.x/en/project-admin/tools/monitoring/_index.md
rename to content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/project-monitoring/_index.md
index 80f8c7e5474..770fdbbc59e 100644
--- a/content/rancher/v2.x/en/project-admin/tools/monitoring/_index.md
+++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/project-monitoring/_index.md
@@ -1,6 +1,8 @@
 ---
-title: Monitoring
+title: Project Monitoring
 weight: 2528
+aliases:
+  - 
rancher/v2.x/en/project-admin/tools/monitoring --- _Available as of v2.2.4_ diff --git a/content/rancher/v2.x/en/cluster-admin/tools/notifiers/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/_index.md similarity index 95% rename from content/rancher/v2.x/en/cluster-admin/tools/notifiers/_index.md rename to content/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/_index.md index c5860f0c33e..42c4073f85f 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/notifiers/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/_index.md @@ -1,8 +1,12 @@ --- title: Notifiers weight: 1 +aliases: + - rancher/v2.x/en/project-admin/tools/notifiers --- +> In Rancher 2.5, the notifier application was improved. There are now two ways to enable notifiers. The older way is documented in this section, and the new application for notifiers is documented in the [dashboard section.]({{}}/rancher/v2.x/en/dashboard/notifiers) + Notifiers are services that inform you of alert events. You can configure notifiers to send alert notifications to staff best suited to take corrective action. Rancher integrates with a variety of popular IT services, including: diff --git a/content/rancher/v2.x/en/cluster-admin/tools/opa-gatekeper/_index.md b/content/rancher/v2.x/en/opa-gatekeper/_index.md similarity index 78% rename from content/rancher/v2.x/en/cluster-admin/tools/opa-gatekeper/_index.md rename to content/rancher/v2.x/en/opa-gatekeper/_index.md index dceb610f935..f73ef16be98 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/opa-gatekeper/_index.md +++ b/content/rancher/v2.x/en/opa-gatekeper/_index.md @@ -1,12 +1,13 @@ --- title: OPA Gatekeeper -weight: 1 +weight: 17 aliases: - /rancher/v2.x/en/cluster-admin/tools/opa-gatekeeper + --- _Available as of v2.4.0_ -> This is an experimental feature for the Rancher v2.4 release. +> This is an experimental feature. To ensure consistency and compliance, every organization needs the ability to define and enforce policies in its environment in an automated way. OPA [https://www.openpolicyagent.org/] (Open Policy Agent) is a policy engine that facilitates policy-based control for cloud native environments. Rancher provides the ability to enable OPA Gatekeeper in Kubernetes clusters, and also installs a couple of built-in policy definitions, which are also called constraint templates. @@ -25,10 +26,39 @@ To read more about OPA, please refer to the [official documentation.](https://ww Kubernetes provides the ability to extend API server functionality via admission controller webhooks, which are invoked whenever a resource is created, updated or deleted. Gatekeeper is installed as a validating webhook and enforces policies defined by Kubernetes custom resource definitions. In addition to the admission control usage, Gatekeeper provides the capability to audit existing resources in Kubernetes clusters and mark current violations of enabled policies. -OPA Gatekeeper is made availale via Rancher's Helm system chart, and it is installed in a namespace named `gatekeeper-system.` +OPA Gatekeeper is made available via Rancher's Helm system chart, and it is installed in a namespace named `gatekeeper-system.` # Enabling OPA Gatekeeper in a Cluster +In Rancher v2.5, the OPA Gatekeeper application was improved. The Rancher v2.4 feature can't be upgraded to the new version in Rancher v2.5. 
If you installed OPA Gatekeeper in Rancher v2.4, you will need to uninstall OPA Gatekeeper and its CRDs from the old UI, then reinstall it in Rancher v2.5. To uninstall the CRDs, run the following command with kubectl: `kubectl delete crd configs.config.gatekeeper.sh constrainttemplates.templates.gatekeeper.sh`
+
+{{% tabs %}}
+{{% tab "Rancher v2.5" %}}
+
+> **Prerequisite:** Only administrators and cluster owners can enable OPA Gatekeeper.
+
+OPA Gatekeeper can be installed from the new **Cluster Explorer** view in Rancher v2.5, or from the legacy cluster view.
+
+### Enabling OPA Gatekeeper from Cluster Explorer
+
+1. Go to the cluster view in the Rancher UI. Click **Cluster Explorer.**
+1. Click **Apps** in the top navigation bar.
+1. Click **rancher-gatekeeper.**
+1. Click **Install.**
+
+**Result:** OPA Gatekeeper is deployed in your Kubernetes cluster.
+
+### Enabling OPA Gatekeeper from the Legacy Cluster View
+
+1. Go to the cluster view in the Rancher UI.
+1. Click **Tools > OPA Gatekeeper.**
+1. Click **Install.**
+
+**Result:** OPA Gatekeeper is deployed in your Kubernetes cluster.
+
+{{% /tab %}}
+{{% tab "Rancher v2.4" %}}
+
 > **Prerequisites:**
 >
 > - Only administrators and cluster owners can enable OPA Gatekeeper.
@@ -38,7 +68,9 @@ OPA Gatekeeper is made availale via Rancher's Helm system chart, and it is insta
 1. On the left side menu, expand the cluster menu and click on **OPA Gatekeeper.**
 1. To install Gatekeeper with the default configuration, click on **Enable Gatekeeper (v0.1.0) with defaults.**
 1. To change any default configuration, click on **Customize Gatekeeper yaml configuration.**
-
+{{% /tab %}}
+{{% /tabs %}}
+
 # Constraint Templates
 
 [Constraint templates](https://github.com/open-policy-agent/gatekeeper#constraint-templates) are Kubernetes custom resources that define the schema and Rego logic of the OPA policy to be applied by Gatekeeper. For more information on the Rego policy language, refer to the [official documentation.](https://www.openpolicyagent.org/docs/latest/policy-language/)
@@ -61,7 +93,7 @@ New constraints can be created from a constraint template.
 
 Rancher provides the ability to create a constraint by using a convenient form that lets you input the various constraint fields.
 
-The **Edit as yaml** option is also availble to configure the the constraint's yaml definition.
+The **Edit as yaml** option is also available to configure the constraint's yaml definition.
 
 ### Exempting Rancher's System Namespaces from Constraints
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/_index.md b/content/rancher/v2.x/en/pipelines/_index.md
similarity index 99%
rename from content/rancher/v2.x/en/k8s-in-rancher/pipelines/_index.md
rename to content/rancher/v2.x/en/pipelines/_index.md
index e20e1794245..58467078313 100644
--- a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/_index.md
+++ b/content/rancher/v2.x/en/pipelines/_index.md
@@ -1,9 +1,6 @@
 ---
 title: Pipelines
-weight: 3047
-aliases:
-  - /rancher/v2.x/en/tools/pipelines/concepts/
-
+weight: 11
 ---
 
 Rancher's pipeline provides a simple CI/CD experience. Use it to automatically checkout code, run builds or scripts, publish Docker images or catalog applications, and deploy the updated software to users.
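+As a rough sketch of what a minimal pipeline configuration looks like (the image and script below are placeholder values; refer to the pipeline configuration reference in this section for the full schema):
+
+```yaml
+# .rancher-pipeline.yml: a single stage with one script step
+stages:
+  - name: Build
+    steps:
+      - runScriptConfig:
+          image: golang:1.13
+          shellScript: go build ./...
+```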
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/concepts/_index.md b/content/rancher/v2.x/en/pipelines/concepts/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/pipelines/concepts/_index.md rename to content/rancher/v2.x/en/pipelines/concepts/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/config/_index.md b/content/rancher/v2.x/en/pipelines/config/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/pipelines/config/_index.md rename to content/rancher/v2.x/en/pipelines/config/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/docs-for-v2.0.x/_index.md b/content/rancher/v2.x/en/pipelines/docs-for-v2.0.x/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/pipelines/docs-for-v2.0.x/_index.md rename to content/rancher/v2.x/en/pipelines/docs-for-v2.0.x/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/example-repos/_index.md b/content/rancher/v2.x/en/pipelines/example-repos/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/pipelines/example-repos/_index.md rename to content/rancher/v2.x/en/pipelines/example-repos/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/example/_index.md b/content/rancher/v2.x/en/pipelines/example/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/pipelines/example/_index.md rename to content/rancher/v2.x/en/pipelines/example/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/storage/_index.md b/content/rancher/v2.x/en/pipelines/storage/_index.md similarity index 100% rename from content/rancher/v2.x/en/k8s-in-rancher/pipelines/storage/_index.md rename to content/rancher/v2.x/en/pipelines/storage/_index.md diff --git a/content/rancher/v2.x/en/project-admin/_index.md b/content/rancher/v2.x/en/project-admin/_index.md index 508e627147d..ade4e5ac6e8 100644 --- a/content/rancher/v2.x/en/project-admin/_index.md +++ b/content/rancher/v2.x/en/project-admin/_index.md @@ -1,6 +1,6 @@ --- title: Project Administration -weight: 2500 +weight: 9 aliases: - /rancher/v2.x/en/project-admin/editing-projects/ --- diff --git a/content/rancher/v2.x/en/project-admin/istio/_index.md b/content/rancher/v2.x/en/project-admin/istio/_index.md deleted file mode 100644 index 82ef83353cf..00000000000 --- a/content/rancher/v2.x/en/project-admin/istio/_index.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: Istio -weight: 3528 ---- - -_Available as of v2.3.0_ - -Using Rancher, you can connect, secure, control, and observe services through integration with [Istio](https://istio.io/), a leading open-source service mesh solution. Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications. - -This service mesh provides features that include but are not limited to the following: - -- Traffic management features -- Enhanced monitoring and tracing -- Service discovery and routing -- Secure connections and service-to-service authentication with mutual TLS -- Load balancing -- Automatic retries, backoff, and circuit breaking - -Istio needs to be set up by a Rancher administrator or cluster administrator before it can be used in a project for [comprehensive data visualizations,]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/#accessing-visualizations) traffic management, or any of its other features. 
- -For information on how Istio is integrated with Rancher and how to set it up, refer to the [section about Istio.]({{}}/rancher/v2.x/en/cluster-admin/tools/istio) diff --git a/content/rancher/v2.x/en/quick-start-guide/_index.md b/content/rancher/v2.x/en/quick-start-guide/_index.md index be103b469ef..1aae6c15c02 100644 --- a/content/rancher/v2.x/en/quick-start-guide/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/_index.md @@ -2,7 +2,7 @@ title: Rancher Deployment Quick Start Guides metaDescription: Use this section to jump start your Rancher deployment and testing. It contains instructions for a simple Rancher setup and some common use cases. short title: Use this section to jump start your Rancher deployment and testing. It contains instructions for a simple Rancher setup and some common use cases. -weight: 25 +weight: 2 --- >**Note:** The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation]({{}}/rancher/v2.x/en/installation/). diff --git a/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md b/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md index b4c2457eeaa..f0ee9913026 100644 --- a/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md @@ -49,7 +49,7 @@ To install Rancher on your host, connect to it and then use a shell to install. 2. From your shell, enter the following command: ``` -sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher +sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher ``` **Result:** Rancher is installed. diff --git a/content/rancher/v2.x/en/security/_index.md b/content/rancher/v2.x/en/security/_index.md index d0b2dd70089..7107c1a8802 100644 --- a/content/rancher/v2.x/en/security/_index.md +++ b/content/rancher/v2.x/en/security/_index.md @@ -1,6 +1,6 @@ --- title: Security -weight: 7505 +weight: 20 --- @@ -47,7 +47,7 @@ The Benchmark provides recommendations of two types: Scored and Not Scored. We r When Rancher runs a CIS security scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests. 
-For details, refer to the section on [security scans.]({{}}/rancher/v2.x/en/security/security-scan) +For details, refer to the section on [security scans.]({{}}/rancher/v2.x/en/cis-scans) ### Rancher Hardening Guide diff --git a/content/rancher/v2.x/en/security/rancher-2.1/_index.md b/content/rancher/v2.x/en/security/rancher-2.1/_index.md new file mode 100644 index 00000000000..31ca2f58b5a --- /dev/null +++ b/content/rancher/v2.x/en/security/rancher-2.1/_index.md @@ -0,0 +1,20 @@ +--- +title: Rancher v2.1 +weight: 5 +--- + +### Self Assessment Guide + +This [guide](./benchmark-2.1) corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark: + +Self Assessment Guide Version | Rancher Version | Hardening Guide Version | Kubernetes Version | CIS Benchmark Version +---------------------------|----------|---------|-------|----- +Self Assessment Guide v2.1 | Rancher v2.1.x | Hardening Guide v2.1 | Kubernetes 1.11 | Benchmark 1.3.0 + +### Hardening Guide + +This hardening [guide](./hardening-2.1) is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher: + +Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version +------------------------|----------------|-----------------------|------------------ +Hardening Guide v2.1 | Rancher v2.1.x | Benchmark v1.3.0 | Kubernetes 1.11 diff --git a/content/rancher/v2.x/en/security/benchmark-2.1/_index.md b/content/rancher/v2.x/en/security/rancher-2.1/benchmark-2.1/_index.md similarity index 99% rename from content/rancher/v2.x/en/security/benchmark-2.1/_index.md rename to content/rancher/v2.x/en/security/rancher-2.1/benchmark-2.1/_index.md index 50b79795bf2..84112b8af6a 100644 --- a/content/rancher/v2.x/en/security/benchmark-2.1/_index.md +++ b/content/rancher/v2.x/en/security/rancher-2.1/benchmark-2.1/_index.md @@ -1,6 +1,8 @@ --- title: CIS Benchmark Rancher Self-Assessment Guide v2.1 weight: 209 +aliases: + - /rancher/v2.x/en/security/benchmark-2.1 --- This document is a companion to the Rancher v2.1 security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of Rancher, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the benchmark. diff --git a/content/rancher/v2.x/en/security/hardening-2.1/_index.md b/content/rancher/v2.x/en/security/rancher-2.1/hardening-2.1/_index.md similarity index 99% rename from content/rancher/v2.x/en/security/hardening-2.1/_index.md rename to content/rancher/v2.x/en/security/rancher-2.1/hardening-2.1/_index.md index 0248d9f3f9d..7244d56d823 100644 --- a/content/rancher/v2.x/en/security/hardening-2.1/_index.md +++ b/content/rancher/v2.x/en/security/rancher-2.1/hardening-2.1/_index.md @@ -1,6 +1,8 @@ --- title: Hardening Guide v2.1 weight: 104 +aliases: + - /rancher/v2.x/en/security/hardening-2.1 --- This document provides prescriptive guidance for hardening a production installation of Rancher v2.1.x. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Information Security (CIS). 
diff --git a/content/rancher/v2.x/en/security/rancher-2.2/_index.md b/content/rancher/v2.x/en/security/rancher-2.2/_index.md new file mode 100644 index 00000000000..457ecb4477d --- /dev/null +++ b/content/rancher/v2.x/en/security/rancher-2.2/_index.md @@ -0,0 +1,20 @@ +--- +title: Rancher v2.2 +weight: 4 +--- + +### Self Assessment Guide + +This [guide](./benchmark-2.2) corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark: + +Self Assessment Guide Version | Rancher Version | Hardening Guide Version | Kubernetes Version | CIS Benchmark Version +---------------------------|----------|---------|-------|----- +Self Assessment Guide v2.2 | Rancher v2.2.x | Hardening Guide v2.2 | Kubernetes 1.13 | Benchmark v1.4.0 and v1.4.1 + +### Hardening Guide + +This hardening [guide](./hardening-2.2) is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher: + +Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version +------------------------|----------------|-----------------------|------------------ +Hardening Guide v2.2 | Rancher v2.2.x | Benchmark v1.4.1, 1.4.0 | Kubernetes 1.13 \ No newline at end of file diff --git a/content/rancher/v2.x/en/security/benchmark-2.2/_index.md b/content/rancher/v2.x/en/security/rancher-2.2/benchmark-2.2/_index.md similarity index 99% rename from content/rancher/v2.x/en/security/benchmark-2.2/_index.md rename to content/rancher/v2.x/en/security/rancher-2.2/benchmark-2.2/_index.md index 68bbaa1ad7b..9ae8594599c 100644 --- a/content/rancher/v2.x/en/security/benchmark-2.2/_index.md +++ b/content/rancher/v2.x/en/security/rancher-2.2/benchmark-2.2/_index.md @@ -1,6 +1,8 @@ --- title: CIS Benchmark Rancher Self-Assessment Guide v2.2 weight: 208 +aliases: + - /rancher/v2.x/en/security/benchmark-2.2 --- This document is a companion to the Rancher v2.2 security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of Rancher, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the benchmark. diff --git a/content/rancher/v2.x/en/security/hardening-2.2/_index.md b/content/rancher/v2.x/en/security/rancher-2.2/hardening-2.2/_index.md similarity index 99% rename from content/rancher/v2.x/en/security/hardening-2.2/_index.md rename to content/rancher/v2.x/en/security/rancher-2.2/hardening-2.2/_index.md index de19613499f..4afb7f76d8d 100644 --- a/content/rancher/v2.x/en/security/hardening-2.2/_index.md +++ b/content/rancher/v2.x/en/security/rancher-2.2/hardening-2.2/_index.md @@ -1,6 +1,8 @@ --- title: Hardening Guide v2.2 weight: 103 +aliases: + - /rancher/v2.x/en/security/hardening-2.2 --- This document provides prescriptive guidance for hardening a production installation of Rancher v2.2.x. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Information Security (CIS). 
diff --git a/content/rancher/v2.x/en/security/rancher-2.3.x/_index.md b/content/rancher/v2.x/en/security/rancher-2.3.x/_index.md new file mode 100644 index 00000000000..0f3f04da692 --- /dev/null +++ b/content/rancher/v2.x/en/security/rancher-2.3.x/_index.md @@ -0,0 +1,10 @@ +--- +title: Rancher v2.3.x +weight: 3 +--- + +The relevant Hardening Guide and Self Assessment guide depends on your Rancher version: + +- [Rancher v2.3.5](./rancher-v2.3.5) +- [Rancher v2.3.3](./rancher-v2.3.3) +- [Rancher v2.3.0](./rancher-v2.3.0) \ No newline at end of file diff --git a/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.0/_index.md b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.0/_index.md new file mode 100644 index 00000000000..aa31c9c9af7 --- /dev/null +++ b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.0/_index.md @@ -0,0 +1,20 @@ +--- +title: Rancher v2.3.0 +weight: 3 +--- + +### Self Assessment Guide + +This [guide](./benchmark-2.3) corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark: + +Self Assessment Guide Version | Rancher Version | Hardening Guide Version | Kubernetes Version | CIS Benchmark Version +---------------------------|----------|---------|-------|----- +Self Assessment Guide v2.3 | Rancher v2.3.0-2.3.2 | Hardening Guide v2.3 | Kubernetes 1.15 | Benchmark v1.4.1 + +### Hardening Guide + +This hardening [guide](./hardening-2.3) is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher: + +Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version +------------------------|----------------|-----------------------|------------------ +Hardening Guide v2.3 | Rancher v2.3.0-v2.3.2 | Benchmark v1.4.1 | Kubernetes 1.15 \ No newline at end of file diff --git a/content/rancher/v2.x/en/security/benchmark-2.3/_index.md b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.0/benchmark-2.3/_index.md similarity index 99% rename from content/rancher/v2.x/en/security/benchmark-2.3/_index.md rename to content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.0/benchmark-2.3/_index.md index 09b6915dca7..1b705633948 100644 --- a/content/rancher/v2.x/en/security/benchmark-2.3/_index.md +++ b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.0/benchmark-2.3/_index.md @@ -1,6 +1,8 @@ --- title: CIS Benchmark Rancher Self-Assessment Guide v2.3 weight: 207 +aliases: + - /rancher/v2.x/en/security/benchmark-2.3 --- This document is a companion to the Rancher v2.3 security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of Rancher, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the benchmark. 
diff --git a/content/rancher/v2.x/en/security/hardening-2.3/_index.md b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.0/hardening-2.3/_index.md similarity index 99% rename from content/rancher/v2.x/en/security/hardening-2.3/_index.md rename to content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.0/hardening-2.3/_index.md index fb495c04b2a..4c6907e9f64 100644 --- a/content/rancher/v2.x/en/security/hardening-2.3/_index.md +++ b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.0/hardening-2.3/_index.md @@ -1,6 +1,8 @@ --- title: Hardening Guide v2.3 weight: 102 +aliases: + - /rancher/v2.x/en/security/hardening-2.3 --- This document provides prescriptive guidance for hardening a production installation of Rancher v2.3.0-v2.3.2. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS). diff --git a/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.3/_index.md b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.3/_index.md new file mode 100644 index 00000000000..77c1c408ad9 --- /dev/null +++ b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.3/_index.md @@ -0,0 +1,20 @@ +--- +title: Rancher v2.3.3 +weight: 2 +--- + +### Self Assessment Guide + +This [guide](./benchmark-2.3.3) corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark: + +Self Assessment Guide Version | Rancher Version | Hardening Guide Version | Kubernetes Version | CIS Benchmark Version +---------------------------|----------|---------|-------|----- +Self Assessment Guide v2.3.3 | Rancher v2.3.3 | Hardening Guide v2.3.3 | Kubernetes v1.16 | Benchmark v1.4.1 + +### Hardening Guide + +This hardening [guide](./hardening-2.3.3) is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher: + +Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version +------------------------|----------------|-----------------------|------------------ +Hardening Guide v2.3.3 | Rancher v2.3.3 | Benchmark v1.4.1 | Kubernetes 1.14, 1.15, and 1.16 \ No newline at end of file diff --git a/content/rancher/v2.x/en/security/benchmark-2.3.3/_index.md b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.3/benchmark-2.3.3/_index.md similarity index 99% rename from content/rancher/v2.x/en/security/benchmark-2.3.3/_index.md rename to content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.3/benchmark-2.3.3/_index.md index 4f8d2d1b1f6..385d077c025 100644 --- a/content/rancher/v2.x/en/security/benchmark-2.3.3/_index.md +++ b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.3/benchmark-2.3.3/_index.md @@ -1,6 +1,8 @@ --- title: CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.3.3 weight: 206 +aliases: + - /rancher/v2.x/en/security/benchmark-2.3.3 --- This document is a companion to the Rancher v2.3.3 security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of Rancher, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the benchmark.
diff --git a/content/rancher/v2.x/en/security/hardening-2.3.3/_index.md b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.3/hardening-2.3.3/_index.md similarity index 99% rename from content/rancher/v2.x/en/security/hardening-2.3.3/_index.md rename to content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.3/hardening-2.3.3/_index.md index d25489d2e06..90ba7608d1a 100644 --- a/content/rancher/v2.x/en/security/hardening-2.3.3/_index.md +++ b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.3/hardening-2.3.3/_index.md @@ -1,6 +1,8 @@ --- title: Hardening Guide v2.3.3 weight: 101 +aliases: + - /rancher/v2.x/en/security/hardening-2.3.3 --- This document provides prescriptive guidance for hardening a production installation of Rancher v2.3.3. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS). diff --git a/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.5/_index.md b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.5/_index.md new file mode 100644 index 00000000000..d6bbefc794c --- /dev/null +++ b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.5/_index.md @@ -0,0 +1,20 @@ +--- +title: Rancher v2.3.5 +weight: 1 +--- + +### Self Assessment Guide + +This [guide](./benchmark-2.3.5) corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark: + +Self Assessment Guide Version | Rancher Version | Hardening Guide Version | Kubernetes Version | CIS Benchmark Version +---------------------------|----------|---------|-------|----- +Self Assessment Guide v2.3.5 | Rancher v2.3.5 | Hardening Guide v2.3.5 | Kubernetes v1.15 | Benchmark v1.5 + +### Hardening Guide + +This hardening [guide](./hardening-2.3.5) is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher: + +Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version +------------------------|----------------|-----------------------|------------------ +Hardening Guide v2.3.5 | Rancher v2.3.5 | Benchmark v1.5 | Kubernetes 1.15 \ No newline at end of file diff --git a/content/rancher/v2.x/en/security/benchmark-2.3.5/_index.md b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.5/benchmark-2.3.5/_index.md similarity index 99% rename from content/rancher/v2.x/en/security/benchmark-2.3.5/_index.md rename to content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.5/benchmark-2.3.5/_index.md index a67a0c6cbad..6d0734a8bc8 100644 --- a/content/rancher/v2.x/en/security/benchmark-2.3.5/_index.md +++ b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.5/benchmark-2.3.5/_index.md @@ -1,6 +1,8 @@ --- title: CIS Benchmark Rancher Self-Assessment Guide - v2.3.5 weight: 205 +aliases: + - /rancher/v2.x/en/security/benchmark-2.3.5 --- ### CIS Kubernetes Benchmark v1.5 - Rancher v2.3.5 with Kubernetes v1.15 diff --git a/content/rancher/v2.x/en/security/hardening-2.3.5/_index.md b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.5/hardening-2.3.5/_index.md similarity index 99% rename from content/rancher/v2.x/en/security/hardening-2.3.5/_index.md rename to content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.5/hardening-2.3.5/_index.md index 4f9aa6849e8..1701e56ff39 100644 --- a/content/rancher/v2.x/en/security/hardening-2.3.5/_index.md +++ b/content/rancher/v2.x/en/security/rancher-2.3.x/rancher-v2.3.5/hardening-2.3.5/_index.md @@ -1,6 +1,8 @@ --- title: Hardening
Guide v2.3.5 weight: 100 +aliases: + - /rancher/v2.x/en/security/hardening-2.3.5 --- This document provides prescriptive guidance for hardening a production installation of Rancher v2.3.5. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS). diff --git a/content/rancher/v2.x/en/security/rancher-2.4/_index.md b/content/rancher/v2.x/en/security/rancher-2.4/_index.md new file mode 100644 index 00000000000..67cda4137b8 --- /dev/null +++ b/content/rancher/v2.x/en/security/rancher-2.4/_index.md @@ -0,0 +1,20 @@ +--- +title: Rancher v2.4 +weight: 2 +--- + +### Self Assessment Guide + +This [guide](./benchmark-2.4) corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark: + +Self Assessment Guide Version | Rancher Version | Hardening Guide Version | Kubernetes Version | CIS Benchmark Version +---------------------------|----------|---------|-------|----- +Self Assessment Guide v2.4 | Rancher v2.4 | Hardening Guide v2.4 | Kubernetes v1.15 | Benchmark v1.5 + +### Hardening Guide + +This hardening [guide](./hardening-2.4) is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher: + +Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version +------------------------|----------------|-----------------------|------------------ +Hardening Guide v2.4 | Rancher v2.4 | Benchmark v1.5 | Kubernetes 1.15 diff --git a/content/rancher/v2.x/en/security/rancher-2.4/benchmark-2.4/_index.md b/content/rancher/v2.x/en/security/rancher-2.4/benchmark-2.4/_index.md new file mode 100644 index 00000000000..2f6baa62064 --- /dev/null +++ b/content/rancher/v2.x/en/security/rancher-2.4/benchmark-2.4/_index.md @@ -0,0 +1,2268 @@ +--- +title: CIS Benchmark Rancher Self-Assessment Guide - v2.4 +weight: 204 +aliases: + - /rancher/v2.x/en/security/benchmark-2.4 +--- + +### CIS Kubernetes Benchmark v1.5 - Rancher v2.4 with Kubernetes v1.15 + +[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.4/Rancher_Benchmark_Assessment.pdf) + +#### Overview + +This document is a companion to the Rancher v2.4 security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of Rancher, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the benchmark. + +This guide corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark: + +Self Assessment Guide Version | Rancher Version | Hardening Guide Version | Kubernetes Version | CIS Benchmark Version +---------------------------|----------|---------|-------|----- +Self Assessment Guide v2.4 | Rancher v2.4 | Hardening Guide v2.4 | Kubernetes v1.15 | Benchmark v1.5 + +Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply and will have a result of `Not Applicable`. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher-created clusters. + +This document is to be used by Rancher operators, security teams, auditors, and decision makers. + +For more detail about each audit, including rationales and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.5.
You can download the benchmark after logging in to [CISecurity.org](https://www.cisecurity.org/benchmark/kubernetes/). + +#### Testing controls methodology + +Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files. + +Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher Labs are provided for testing. +When performing the tests, you will need access to the Docker command line on the hosts of all three RKE roles. The commands also make use of the [jq](https://stedolan.github.io/jq/) and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) (with a valid config) tools, which are required for testing and evaluating the test results. + +> NOTE: Only scored tests are covered in this guide. + +### Controls + +--- +## 1 Master Node Security Configuration +### 1.1 Master Node Configuration Files + +#### 1.1.1 Ensure that the API server pod specification file permissions are set to `644` or more restrictive (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for the API server. All configuration is passed in as arguments at container run time. + +#### 1.1.2 Ensure that the API server pod specification file ownership is set to `root:root` (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for the API server. All configuration is passed in as arguments at container run time. + +#### 1.1.3 Ensure that the controller manager pod specification file permissions are set to `644` or more restrictive (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for the controller manager. All configuration is passed in as arguments at container run time. + +#### 1.1.4 Ensure that the controller manager pod specification file ownership is set to `root:root` (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for the controller manager. All configuration is passed in as arguments at container run time. + +#### 1.1.5 Ensure that the scheduler pod specification file permissions are set to `644` or more restrictive (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. + +#### 1.1.6 Ensure that the scheduler pod specification file ownership is set to `root:root` (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. + +#### 1.1.7 Ensure that the etcd pod specification file permissions are set to `644` or more restrictive (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. + +#### 1.1.8 Ensure that the etcd pod specification file ownership is set to `root:root` (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time.
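+> NOTE: The following example is an editorial illustration of the methodology above, not part of the benchmark text. It assumes RKE's default container name `kube-apiserver` and that `docker` and `jq` are available on the master node, mirroring the audit scripts used throughout this guide.
+
+``` bash
+# Dump the arguments the kube-apiserver container was started with.
+# Because RKE passes all configuration this way, these arguments are what
+# the audits below inspect; there is no pod specification file to check.
+docker inspect kube-apiserver | jq -r '.[].Args[]'
+```
+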
+ +#### 1.1.11 Ensure that the etcd data directory permissions are set to `700` or more restrictive (Scored) + +**Result:** PASS + +**Remediation:** +On the etcd server node, get the etcd data directory, passed as an argument `--data-dir`, +from the below command: + +``` bash +ps -ef | grep etcd +``` + +Run the below command (based on the etcd data directory found above). For example, + +``` bash +chmod 700 /var/lib/etcd +``` + +**Audit Script:** 1.1.11.sh + +``` +#!/bin/bash -e + +etcd_bin=${1} + +# Extract --data-dir from the running etcd process, then stat the host path +# that is bind-mounted into the etcd container. +test_dir=$(ps -ef | grep ${etcd_bin} | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%') + +docker inspect etcd | jq -r '.[].HostConfig.Binds[]' | grep "${test_dir}" | cut -d ":" -f 1 | xargs stat -c %a +``` + +**Audit Execution:** + +``` +./1.1.11.sh etcd +``` + +**Expected result**: + +``` +'700' is equal to '700' +``` + +#### 1.1.12 Ensure that the etcd data directory ownership is set to `etcd:etcd` (Scored) + +**Result:** PASS + +**Remediation:** +On the etcd server node, get the etcd data directory, passed as an argument `--data-dir`, +from the below command: + +``` bash +ps -ef | grep etcd +``` + +Run the below command (based on the etcd data directory found above). +For example, + +``` bash +chown etcd:etcd /var/lib/etcd +``` + +**Audit Script:** 1.1.12.sh + +``` +#!/bin/bash -e + +etcd_bin=${1} + +# Same lookup as 1.1.11, but report the owner and group of the data directory. +test_dir=$(ps -ef | grep ${etcd_bin} | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%') + +docker inspect etcd | jq -r '.[].HostConfig.Binds[]' | grep "${test_dir}" | cut -d ":" -f 1 | xargs stat -c %U:%G +``` + +**Audit Execution:** + +``` +./1.1.12.sh etcd +``` + +**Expected result**: + +``` +'etcd:etcd' is present +``` + +#### 1.1.13 Ensure that the `admin.conf` file permissions are set to `644` or more restrictive (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE does not store the Kubernetes default kubeconfig credentials file on the nodes. It’s presented to the user where RKE is run. +We recommend that this `kube_config_cluster.yml` file be kept in a secure store. + +#### 1.1.14 Ensure that the admin.conf file ownership is set to `root:root` (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE does not store the Kubernetes default kubeconfig credentials file on the nodes. It’s presented to the user where RKE is run. +We recommend that this `kube_config_cluster.yml` file be kept in a secure store. + +#### 1.1.15 Ensure that the `scheduler.conf` file permissions are set to `644` or more restrictive (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. + +#### 1.1.16 Ensure that the `scheduler.conf` file ownership is set to `root:root` (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. + +#### 1.1.17 Ensure that the `controller-manager.conf` file permissions are set to `644` or more restrictive (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for the controller manager. All configuration is passed in as arguments at container run time. + +#### 1.1.18 Ensure that the `controller-manager.conf` file ownership is set to `root:root` (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for the controller manager.
All configuration is passed in as arguments at container run time. + +#### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to `root:root` (Scored) + +**Result:** PASS + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, + +``` bash +chown -R root:root /etc/kubernetes/ssl +``` + +**Audit:** + +``` +stat -c %U:%G /etc/kubernetes/ssl +``` + +**Expected result**: + +``` +'root:root' is present +``` + +#### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to `644` or more restrictive (Scored) + +**Result:** PASS + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, + +``` bash +chmod -R 644 /etc/kubernetes/ssl +``` + +**Audit Script:** check_files_permissions.sh + +``` +#!/usr/bin/env bash + +# This script is used to ensure the file permissions are set to 644 or +# more restrictive for all files in a given directory or a wildcard +# selection of files +# +# inputs: +# $1 = /full/path/to/directory or /path/to/fileswithpattern +# ex: !(*key).pem +# +# $2 (optional) = permission (ex: 600) +# +# outputs: +# true/false + +# Turn on "extended glob" for use of '!' in wildcard +shopt -s extglob + +# Turn off history expansion to avoid surprises when using '!' +set +H + +USER_INPUT=$1 + +if [[ "${USER_INPUT}" == "" ]]; then + echo "false" + exit +fi + + +if [[ -d ${USER_INPUT} ]]; then + PATTERN="${USER_INPUT}/*" +else + PATTERN="${USER_INPUT}" +fi + +PERMISSION="" +if [[ "$2" != "" ]]; then + PERMISSION=$2 +fi + +FILES_PERMISSIONS=$(stat -c %n\ %a ${PATTERN}) + +while read -r fileInfo; do + p=$(echo ${fileInfo} | cut -d' ' -f2) + + if [[ "${PERMISSION}" != "" ]]; then + if [[ "$p" != "${PERMISSION}" ]]; then + echo "false" + exit + fi + else + if [[ "$p" != "644" && "$p" != "640" && "$p" != "600" ]]; then + echo "false" + exit + fi + fi +done <<< "${FILES_PERMISSIONS}" + + +echo "true" +exit +``` + +**Audit Execution:** + +``` +./check_files_permissions.sh '/etc/kubernetes/ssl/*.pem' +``` + +**Expected result**: + +``` +'true' is present +``` + +#### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to `600` (Scored) + +**Result:** PASS + +**Remediation:** +Run the below command (based on the file location on your system) on the master node. +For example, + +``` bash +chmod -R 600 /etc/kubernetes/ssl/certs/serverca +``` + +**Audit Script:** 1.1.21.sh + +``` +#!/bin/bash -e +check_dir=${1:-/etc/kubernetes/ssl} + +# Fail on the first key file whose mode is anything other than 600. +for file in $(find ${check_dir} -name "*key.pem"); do + file_permission=$(stat -c %a ${file}) + if [[ "${file_permission}" == "600" ]]; then + continue + else + echo "FAIL: ${file} ${file_permission}" + exit 1 + fi +done + +echo "pass" +``` + +**Audit Execution:** + +``` +./1.1.21.sh /etc/kubernetes/ssl +``` + +**Expected result**: + +``` +'pass' is present +``` + +### 1.2 API Server + +#### 1.2.2 Ensure that the `--basic-auth-file` argument is not set (Scored) + +**Result:** PASS + +**Remediation:** +Follow the documentation and configure alternate mechanisms for authentication. Then, +edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and remove the `--basic-auth-file=` parameter.
+ +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--basic-auth-file' is not present +``` + +#### 1.2.3 Ensure that the `--token-auth-file` parameter is not set (Scored) + +**Result:** PASS + +**Remediation:** +Follow the documentation and configure alternate mechanisms for authentication. Then, +edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and remove the `--token-auth-file=` parameter. + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--token-auth-file' is not present +``` + +#### 1.2.4 Ensure that the `--kubelet-https` argument is set to true (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and remove the `--kubelet-https` parameter. + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--kubelet-https' is present OR '--kubelet-https' is not present +``` + +#### 1.2.5 Ensure that the `--kubelet-client-certificate` and `--kubelet-client-key` arguments are set as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the +apiserver and kubelets. Then, edit the API server pod specification file +`/etc/kubernetes/manifests/kube-apiserver.yaml` on the master node and set the +kubelet client certificate and key parameters as below. + +``` bash +--kubelet-client-certificate= +--kubelet-client-key= +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present +``` + +#### 1.2.6 Ensure that the `--kubelet-certificate-authority` argument is set as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between +the apiserver and kubelets. Then, edit the API server pod specification file +`/etc/kubernetes/manifests/kube-apiserver.yaml` on the master node and set the +`--kubelet-certificate-authority` parameter to the path to the cert file for the certificate authority. +`--kubelet-certificate-authority=` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--kubelet-certificate-authority' is present +``` + +#### 1.2.7 Ensure that the `--authorization-mode` argument is not set to `AlwaysAllow` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the `--authorization-mode` parameter to values other than `AlwaysAllow`. +One such example could be as below. + +``` bash +--authorization-mode=RBAC +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'Node,RBAC' not have 'AlwaysAllow' +``` + +#### 1.2.8 Ensure that the `--authorization-mode` argument includes `Node` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the `--authorization-mode` parameter to a value that includes `Node`.
+ +``` bash +--authorization-mode=Node,RBAC +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'Node,RBAC' has 'Node' +``` + +#### 1.2.9 Ensure that the `--authorization-mode` argument includes `RBAC` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the `--authorization-mode` parameter to a value that includes RBAC, +for example: + +``` bash +--authorization-mode=Node,RBAC +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'Node,RBAC' has 'RBAC' +``` + +#### 1.2.11 Ensure that the admission control plugin `AlwaysAdmit` is not set (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and either remove the `--enable-admission-plugins` parameter, or set it to a +value that does not include `AlwaysAdmit`. + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present +``` + +#### 1.2.14 Ensure that the admission control plugin `ServiceAccount` is set (Scored) + +**Result:** PASS + +**Remediation:** +Follow the documentation and create ServiceAccount objects as per your environment. +Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and ensure that the `--disable-admission-plugins` parameter is set to a +value that does not include `ServiceAccount`. + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'ServiceAccount' OR '--enable-admission-plugins' is not present +``` + +#### 1.2.15 Ensure that the admission control plugin `NamespaceLifecycle` is set (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the `--disable-admission-plugins` parameter to +ensure it does not include `NamespaceLifecycle`. + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present +``` + +#### 1.2.16 Ensure that the admission control plugin `PodSecurityPolicy` is set (Scored) + +**Result:** PASS + +**Remediation:** +Follow the documentation and create Pod Security Policy objects as per your environment. +Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the `--enable-admission-plugins` parameter to a +value that includes `PodSecurityPolicy`: + +``` bash +--enable-admission-plugins=...,PodSecurityPolicy,... +``` + +Then restart the API Server. 
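+> NOTE: As an illustrative spot check (editorial, not part of the benchmark), you can read the admission plugin list straight from the running container before relying on the audit below; the container name `kube-apiserver` is RKE's default:
+
+``` bash
+# Print the --enable-admission-plugins argument of the running API server
+# and confirm that PodSecurityPolicy appears in its value.
+docker inspect kube-apiserver \
+  | jq -r '.[].Args[] | select(startswith("--enable-admission-plugins"))' \
+  | grep PodSecurityPolicy
+```
+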
+ +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'PodSecurityPolicy' +``` + +#### 1.2.17 Ensure that the admission control plugin `NodeRestriction` is set (Scored) + +**Result:** PASS + +**Remediation:** +Follow the Kubernetes documentation and configure `NodeRestriction` plug-in on kubelets. +Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the `--enable-admission-plugins` parameter to a +value that includes `NodeRestriction`. + +``` bash +--enable-admission-plugins=...,NodeRestriction,... +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'NodeRestriction' +``` + +#### 1.2.18 Ensure that the `--insecure-bind-address` argument is not set (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and remove the `--insecure-bind-address` parameter. + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--insecure-bind-address' is not present +``` + +#### 1.2.19 Ensure that the `--insecure-port` argument is set to `0` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the below parameter. + +``` bash +--insecure-port=0 +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'0' is equal to '0' +``` + +#### 1.2.20 Ensure that the `--secure-port` argument is not set to `0` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and either remove the `--secure-port` parameter or +set it to a different **(non-zero)** desired port. + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +6443 is greater than 0 OR '--secure-port' is not present +``` + +#### 1.2.21 Ensure that the `--profiling` argument is set to `false` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the below parameter. 
+ +``` bash +--profiling=false +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'false' is equal to 'false' +``` + +#### 1.2.22 Ensure that the `--audit-log-path` argument is set (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the `--audit-log-path` parameter to a suitable path and +file where you would like audit logs to be written, for example: + +``` bash +--audit-log-path=/var/log/apiserver/audit.log +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--audit-log-path' is present +``` + +#### 1.2.23 Ensure that the `--audit-log-maxage` argument is set to `30` or as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the `--audit-log-maxage` parameter to `30` or as an appropriate number of days: + +``` bash +--audit-log-maxage=30 +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +30 is greater or equal to 30 +``` + +#### 1.2.24 Ensure that the `--audit-log-maxbackup` argument is set to `10` or as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the `--audit-log-maxbackup` parameter to `10` or to an appropriate +value. + +``` bash +--audit-log-maxbackup=10 +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +10 is greater or equal to 10 +``` + +#### 1.2.25 Ensure that the `--audit-log-maxsize` argument is set to `100` or as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the `--audit-log-maxsize` parameter to an appropriate size in **MB**. +For example, to set it as `100` **MB**: + +``` bash +--audit-log-maxsize=100 +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +100 is greater or equal to 100 +``` + +#### 1.2.26 Ensure that the `--request-timeout` argument is set as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +and set the below parameter as appropriate and if needed. +For example, + +``` bash +--request-timeout=300s +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--request-timeout' is not present OR '--request-timeout' is present +``` + +#### 1.2.27 Ensure that the `--service-account-lookup` argument is set to `true` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the below parameter. + +``` bash +--service-account-lookup=true +``` + +Alternatively, you can delete the `--service-account-lookup` parameter from this file so +that the default takes effect. 
+ +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--service-account-lookup' is not present OR 'true' is equal to 'true' +``` + +#### 1.2.28 Ensure that the `--service-account-key-file` argument is set as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the `--service-account-key-file` parameter +to the public key file for service accounts: + +``` bash +--service-account-key-file= +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--service-account-key-file' is present +``` + +#### 1.2.29 Ensure that the `--etcd-certfile` and `--etcd-keyfile` arguments are set as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. +Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the **etcd** certificate and **key** file parameters. + +``` bash +--etcd-certfile= +--etcd-keyfile= +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--etcd-certfile' is present AND '--etcd-keyfile' is present +``` + +#### 1.2.30 Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection on the apiserver. +Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the TLS certificate and private key file parameters. + +``` bash +--tls-cert-file= +--tls-private-key-file= +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--tls-cert-file' is present AND '--tls-private-key-file' is present +``` + +#### 1.2.31 Ensure that the `--client-ca-file` argument is set as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection on the apiserver. +Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the client certificate authority file. + +``` bash +--client-ca-file= +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--client-ca-file' is present +``` + +#### 1.2.32 Ensure that the `--etcd-cafile` argument is set as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. +Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the etcd certificate authority file parameter. + +``` bash +--etcd-cafile= +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--etcd-cafile' is present +``` + +#### 1.2.33 Ensure that the `--encryption-provider-config` argument is set as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Follow the Kubernetes documentation and configure an EncryptionConfig file.
+Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` +on the master node and set the `--encryption-provider-config` parameter to the path of that file: + +``` bash +--encryption-provider-config= +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-apiserver | grep -v grep +``` + +**Expected result**: + +``` +'--encryption-provider-config' is present +``` + +#### 1.2.34 Ensure that encryption providers are appropriately configured (Scored) + +**Result:** PASS + +**Remediation:** +Follow the Kubernetes documentation and configure an `EncryptionConfig` file. +In this file, choose **aescbc**, **kms** or **secretbox** as the encryption provider. + +**Audit Script:** 1.2.34.sh + +``` +#!/bin/bash -e + +check_file=${1} + +# Pass if the encryption config names at least one of the strong providers. +grep -q -E 'aescbc|kms|secretbox' ${check_file} +if [ $? -eq 0 ]; then + echo "--pass" + exit 0 +else + echo "fail: no encryption provider found in ${check_file}" + exit 1 +fi +``` + +**Audit Execution:** + +``` +./1.2.34.sh /etc/kubernetes/ssl/encryption.yaml +``` + +**Expected result**: + +``` +'--pass' is present +``` + +### 1.3 Controller Manager + +#### 1.3.1 Ensure that the `--terminated-pod-gc-threshold` argument is set as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml` +on the master node and set the `--terminated-pod-gc-threshold` to an appropriate threshold, +for example: + +``` bash +--terminated-pod-gc-threshold=10 +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected result**: + +``` +'--terminated-pod-gc-threshold' is present +``` + +#### 1.3.2 Ensure that the `--profiling` argument is set to false (Scored) + +**Result:** PASS + +**Remediation:** +Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml` +on the master node and set the below parameter. + +``` bash +--profiling=false +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected result**: + +``` +'false' is equal to 'false' +``` + +#### 1.3.3 Ensure that the `--use-service-account-credentials` argument is set to `true` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml` +on the master node to set the below parameter. + +``` bash +--use-service-account-credentials=true +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected result**: + +``` +'true' is not equal to 'false' +``` + +#### 1.3.4 Ensure that the `--service-account-private-key-file` argument is set as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml` +on the master node and set the `--service-account-private-key-file` parameter +to the private key file for service accounts.
+ +``` bash +--service-account-private-key-file= +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected result**: + +``` +'--service-account-private-key-file' is present +``` + +#### 1.3.5 Ensure that the `--root-ca-file` argument is set as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml` +on the master node and set the `--root-ca-file` parameter to the certificate bundle file. + +``` bash +--root-ca-file= +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected result**: + +``` +'--root-ca-file' is present +``` + +#### 1.3.6 Ensure that the `RotateKubeletServerCertificate` argument is set to `true` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml` +on the master node and set the `--feature-gates` parameter to include `RotateKubeletServerCertificate=true`. + +``` bash +--feature-gates=RotateKubeletServerCertificate=true +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected result**: + +``` +'RotateKubeletServerCertificate=true' is equal to 'RotateKubeletServerCertificate=true' +``` + +#### 1.3.7 Ensure that the `--bind-address` argument is set to `127.0.0.1` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml` +on the master node and ensure the correct value for the `--bind-address` parameter. + +**Audit:** + +``` +/bin/ps -ef | grep kube-controller-manager | grep -v grep +``` + +**Expected result**: + +``` +'--bind-address' is present OR '--bind-address' is not present +``` + +### 1.4 Scheduler + +#### 1.4.1 Ensure that the `--profiling` argument is set to `false` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the Scheduler pod specification file `/etc/kubernetes/manifests/kube-scheduler.yaml` +on the master node and set the below parameter. + +``` bash +--profiling=false +``` + +**Audit:** + +``` +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected result**: + +``` +'false' is equal to 'false' +``` + +#### 1.4.2 Ensure that the `--bind-address` argument is set to `127.0.0.1` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the Scheduler pod specification file `/etc/kubernetes/manifests/kube-scheduler.yaml` +on the master node and ensure the correct value for the `--bind-address` parameter. + +**Audit:** + +``` +/bin/ps -ef | grep kube-scheduler | grep -v grep +``` + +**Expected result**: + +``` +'--bind-address' is present OR '--bind-address' is not present +``` + +## 2 Etcd Node Configuration +### 2 Etcd Node Configuration Files + +#### 2.1 Ensure that the `--cert-file` and `--key-file` arguments are set as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Follow the etcd service documentation and configure TLS encryption. +Then, edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` +on the master node and set the below parameters.
+ +``` bash +--cert-file= +--key-file= +``` + +**Audit:** + +``` +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected result**: + +``` +'--cert-file' is present AND '--key-file' is present +``` + +#### 2.2 Ensure that the `--client-cert-auth` argument is set to `true` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the master +node and set the below parameter. + +``` bash +--client-cert-auth="true" +``` + +**Audit:** + +``` +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected result**: + +``` +'true' is equal to 'true' +``` + +#### 2.3 Ensure that the `--auto-tls` argument is not set to `true` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the master +node and either remove the `--auto-tls` parameter or set it to `false`. + +``` bash +--auto-tls=false +``` + +**Audit:** + +``` +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected result**: + +``` +'--auto-tls' is not present OR '--auto-tls' is not present +``` + +#### 2.4 Ensure that the `--peer-cert-file` and `--peer-key-file` arguments are set as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +Follow the etcd service documentation and configure peer TLS encryption as appropriate +for your etcd cluster. Then, edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the +master node and set the below parameters. + +``` bash +--peer-cert-file= +--peer-key-file= +``` + +**Audit:** + +``` +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected result**: + +``` +'--peer-cert-file' is present AND '--peer-key-file' is present +``` + +#### 2.5 Ensure that the `--peer-client-cert-auth` argument is set to `true` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the master +node and set the below parameter. + +``` bash +--peer-client-cert-auth=true +``` + +**Audit:** + +``` +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected result**: + +``` +'true' is equal to 'true' +``` + +#### 2.6 Ensure that the `--peer-auto-tls` argument is not set to `true` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the master +node and either remove the `--peer-auto-tls` parameter or set it to `false`. + +``` bash +--peer-auto-tls=false +``` + +**Audit:** + +``` +/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep +``` + +**Expected result**: + +``` +'--peer-auto-tls' is not present OR '--peer-auto-tls' is present +``` + +## 3 Control Plane Configuration +### 3.2 Logging + +#### 3.2.1 Ensure that a minimal audit policy is created (Scored) + +**Result:** PASS + +**Remediation:** +Create an audit policy file for your cluster. + +**Audit Script:** 3.2.1.sh + +``` +#!/bin/bash -e + +api_server_bin=${1} + +# Show the kube-apiserver command line so the --audit-policy-file flag can be verified. +/bin/ps -ef | /bin/grep ${api_server_bin} | /bin/grep -v ${0} | /bin/grep -v grep +``` + +**Audit Execution:** + +``` +./3.2.1.sh kube-apiserver +``` + +**Expected result**: + +``` +'--audit-policy-file' is present +``` + +## 4 Worker Node Security Configuration +### 4.1 Worker Node Configuration Files + +#### 4.1.1 Ensure that the kubelet service file permissions are set to `644` or more restrictive (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for the kubelet service.
All configuration is passed in as arguments at container run time. + +#### 4.1.2 Ensure that the kubelet service file ownership is set to `root:root` (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. + +#### 4.1.3 Ensure that the proxy kubeconfig file permissions are set to `644` or more restrictive (Scored) + +**Result:** PASS + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, + +``` bash +chmod 644 /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml +``` + +**Audit:** + +``` +/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %a /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' +``` + +**Expected result**: + +``` +'644' is present OR '640' is present OR '600' is equal to '600' OR '444' is present OR '440' is present OR '400' is present OR '000' is present +``` + +#### 4.1.4 Ensure that the proxy kubeconfig file ownership is set to `root:root` (Scored) + +**Result:** PASS + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, + +``` bash +chown root:root /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml +``` + +**Audit:** + +``` +/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi' +``` + +**Expected result**: + +``` +'root:root' is present +``` + +#### 4.1.5 Ensure that the kubelet.conf file permissions are set to `644` or more restrictive (Scored) + +**Result:** PASS + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, + +``` bash +chmod 644 /etc/kubernetes/ssl/kubecfg-kube-node.yaml +``` + +**Audit:** + +``` +/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %a /etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' +``` + +**Expected result**: + +``` +'644' is present OR '640' is present OR '600' is equal to '600' OR '444' is present OR '440' is present OR '400' is present OR '000' is present +``` + +#### 4.1.6 Ensure that the kubelet.conf file ownership is set to `root:root` (Scored) + +**Result:** PASS + +**Remediation:** +Run the below command (based on the file location on your system) on each worker node. +For example, + +``` bash +chown root:root /etc/kubernetes/ssl/kubecfg-kube-node.yaml +``` + +**Audit:** + +``` +/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi' +``` + +**Expected result**: + +``` +'root:root' is equal to 'root:root' +``` + +#### 4.1.7 Ensure that the certificate authorities file permissions are set to `644` or more restrictive (Scored) + +**Result:** PASS + +**Remediation:** +Run the following command to modify the file permissions of the `--client-ca-file`. For example, + +``` bash +chmod 644 /etc/kubernetes/ssl/kube-ca.pem +``` + +**Audit:** + +``` +stat -c %a /etc/kubernetes/ssl/kube-ca.pem +``` + +**Expected result**: + +``` +'644' is equal to '644' OR '640' is present OR '600' is present +``` + +#### 4.1.8 Ensure that the client certificate authorities file ownership is set to `root:root` (Scored) + +**Result:** PASS + +**Remediation:** +Run the following command to modify the ownership of the `--client-ca-file`. For example,
+ +``` bash +chown root:root /etc/kubernetes/ssl/kube-ca.pem +``` + +**Audit:** + +``` +/bin/sh -c 'if test -e /etc/kubernetes/ssl/kube-ca.pem; then stat -c %U:%G /etc/kubernetes/ssl/kube-ca.pem; fi' +``` + +**Expected result**: + +``` +'root:root' is equal to 'root:root' +``` + +#### 4.1.9 Ensure that the kubelet configuration file has permissions set to `644` or more restrictive (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. + +#### 4.1.10 Ensure that the kubelet configuration file ownership is set to `root:root` (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. + +### 4.2 Kubelet + +#### 4.2.1 Ensure that the `--anonymous-auth` argument is set to false (Scored) + +**Result:** PASS + +**Remediation:** +If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to +`false`. +If using executable arguments, edit the kubelet service file +`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and +set the below parameter in the `KUBELET_SYSTEM_PODS_ARGS` variable. + +``` bash +--anonymous-auth=false +``` + +Based on your system, restart the kubelet service. For example: + +``` bash +systemctl daemon-reload +systemctl restart kubelet.service +``` + +**Audit:** + +``` +/bin/ps -fC kubelet +``` + +**Audit Config:** + +``` +/bin/cat /var/lib/kubelet/config.yaml +``` + +**Expected result**: + +``` +'false' is equal to 'false' +``` + +#### 4.2.2 Ensure that the `--authorization-mode` argument is not set to `AlwaysAllow` (Scored) + +**Result:** PASS + +**Remediation:** +If using a Kubelet config file, edit the file to set `authorization: mode` to `Webhook`. If +using executable arguments, edit the kubelet service file +`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and +set the below parameter in the `KUBELET_AUTHZ_ARGS` variable. + +``` bash +--authorization-mode=Webhook +``` + +Based on your system, restart the kubelet service. For example: + +``` bash +systemctl daemon-reload +systemctl restart kubelet.service +``` + +**Audit:** + +``` +/bin/ps -fC kubelet +``` + +**Audit Config:** + +``` +/bin/cat /var/lib/kubelet/config.yaml +``` + +**Expected result**: + +``` +'Webhook' not have 'AlwaysAllow' +``` + +#### 4.2.3 Ensure that the `--client-ca-file` argument is set as appropriate (Scored) + +**Result:** PASS + +**Remediation:** +If using a Kubelet config file, edit the file to set `authentication: x509: clientCAFile` to +the location of the client CA file. +If using command line arguments, edit the kubelet service file +`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and +set the below parameter in the `KUBELET_AUTHZ_ARGS` variable. + +``` bash +--client-ca-file= +``` + +Based on your system, restart the kubelet service. For example: + +``` bash +systemctl daemon-reload +systemctl restart kubelet.service +``` + +**Audit:** + +``` +/bin/ps -fC kubelet +``` + +**Audit Config:** + +``` +/bin/cat /var/lib/kubelet/config.yaml +``` + +**Expected result**: + +``` +'--client-ca-file' is present +``` + +#### 4.2.4 Ensure that the `--read-only-port` argument is set to `0` (Scored) + +**Result:** PASS + +**Remediation:** +If using a Kubelet config file, edit the file to set `readOnlyPort` to `0`.
+If using command line arguments, edit the kubelet service file +`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and +set the below parameter in `KUBELET_SYSTEM_PODS_ARGS` variable. + +``` bash +--read-only-port=0 +``` + +Based on your system, restart the kubelet service. For example: + +``` bash +systemctl daemon-reload +systemctl restart kubelet.service +``` + +**Audit:** + +``` +/bin/ps -fC kubelet +``` + +**Audit Config:** + +``` +/bin/cat /var/lib/kubelet/config.yaml +``` + +**Expected result**: + +``` +'0' is equal to '0' +``` + +#### 4.2.5 Ensure that the `--streaming-connection-idle-timeout` argument is not set to `0` (Scored) + +**Result:** PASS + +**Remediation:** +If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a +value other than `0`. +If using command line arguments, edit the kubelet service file +`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and +set the below parameter in `KUBELET_SYSTEM_PODS_ARGS` variable. + +``` bash +--streaming-connection-idle-timeout=5m +``` + +Based on your system, restart the kubelet service. For example: + +``` bash +systemctl daemon-reload +systemctl restart kubelet.service +``` + +**Audit:** + +``` +/bin/ps -fC kubelet +``` + +**Audit Config:** + +``` +/bin/cat /var/lib/kubelet/config.yaml +``` + +**Expected result**: + +``` +'30m' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present +``` + +#### 4.2.6 Ensure that the `--protect-kernel-defaults` argument is set to `true` (Scored) + +**Result:** PASS + +**Remediation:** +If using a Kubelet config file, edit the file to set `protectKernelDefaults`: `true`. +If using command line arguments, edit the kubelet service file +`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and +set the below parameter in `KUBELET_SYSTEM_PODS_ARGS` variable. + +``` bash +--protect-kernel-defaults=true +``` + +Based on your system, restart the kubelet service. For example: + +``` bash +systemctl daemon-reload +systemctl restart kubelet.service +``` + +**Audit:** + +``` +/bin/ps -fC kubelet +``` + +**Audit Config:** + +``` +/bin/cat /var/lib/kubelet/config.yaml +``` + +**Expected result**: + +``` +'true' is equal to 'true' +``` + +#### 4.2.7 Ensure that the `--make-iptables-util-chains` argument is set to `true` (Scored) + +**Result:** PASS + +**Remediation:** +If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains`: `true`. +If using command line arguments, edit the kubelet service file +`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and +remove the `--make-iptables-util-chains` argument from the +`KUBELET_SYSTEM_PODS_ARGS` variable. +Based on your system, restart the kubelet service. For example: + +``` bash +systemctl daemon-reload +systemctl restart kubelet.service +``` + +**Audit:** + +``` +/bin/ps -fC kubelet +``` + +**Audit Config:** + +``` +/bin/cat /var/lib/kubelet/config.yaml +``` + +**Expected result**: + +``` +'true' is equal to 'true' OR '--make-iptables-util-chains' is not present +``` + +#### 4.2.10 Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate (Scored) + +**Result:** Not Applicable + +**Remediation:** +RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time.
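+> NOTE: As an editorial convenience (not part of the benchmark), since RKE passes all kubelet configuration as container arguments, the flags checked by the 4.2.x controls can be listed one per line on a worker node:
+
+``` bash
+# Break the kubelet command line into one argument per line to spot-check
+# flags such as --protect-kernel-defaults, --read-only-port and --anonymous-auth.
+ps -C kubelet -o args= | tr ' ' '\n' | grep -E '^--'
+```
+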
+ +#### 4.2.11 Ensure that the `--rotate-certificates` argument is not set to `false` (Scored) + +**Result:** PASS + +**Remediation:** +If using a Kubelet config file, edit the file to add the line `rotateCertificates`: `true` or +remove it altogether to use the default value. +If using command line arguments, edit the kubelet service file +`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and +remove `--rotate-certificates=false` argument from the `KUBELET_CERTIFICATE_ARGS` +variable. +Based on your system, restart the kubelet service. For example: + +``` bash +systemctl daemon-reload +systemctl restart kubelet.service +``` + +**Audit:** + +``` +/bin/ps -fC kubelet +``` + +**Audit Config:** + +``` +/bin/cat /var/lib/kubelet/config.yaml +``` + +**Expected result**: + +``` +'--rotate-certificates' is present OR '--rotate-certificates' is not present +``` + +#### 4.2.12 Ensure that the `RotateKubeletServerCertificate` argument is set to `true` (Scored) + +**Result:** PASS + +**Remediation:** +Edit the kubelet service file `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` +on each worker node and set the below parameter in `KUBELET_CERTIFICATE_ARGS` variable. + +``` bash +--feature-gates=RotateKubeletServerCertificate=true +``` + +Based on your system, restart the kubelet service. For example: + +``` bash +systemctl daemon-reload +systemctl restart kubelet.service +``` + +**Audit:** + +``` +/bin/ps -fC kubelet +``` + +**Audit Config:** + +``` +/bin/cat /var/lib/kubelet/config.yaml +``` + +**Expected result**: + +``` +'true' is equal to 'true' +``` + +## 5 Kubernetes Policies +### 5.1 RBAC and Service Accounts + +#### 5.1.5 Ensure that default service accounts are not actively used. (Scored) + +**Result:** PASS + +**Remediation:** +Create explicit service accounts wherever a Kubernetes workload requires specific access +to the Kubernetes API server. +Modify the configuration of each default service account to include this value + +``` bash +automountServiceAccountToken: false +``` + +**Audit Script:** 5.1.5.sh + +``` +#!/bin/bash + +export KUBECONFIG=${KUBECONFIG:-/root/.kube/config} + +kubectl version > /dev/null +if [ $? -ne 0 ]; then + echo "fail: kubectl failed" + exit 1 +fi + +accounts="$(kubectl --kubeconfig=${KUBECONFIG} get serviceaccounts -A -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true)) | "fail \(.metadata.name) \(.metadata.namespace)"')" + +if [[ "${accounts}" != "" ]]; then + echo "fail: automountServiceAccountToken not false for accounts: ${accounts}" + exit 1 +fi + +default_binding="$(kubectl get rolebindings,clusterrolebindings -A -o json | jq -r '.items[] | select(.subjects[].kind=="ServiceAccount" and .subjects[].name=="default" and .metadata.name=="default").metadata.uid' | wc -l)" + +if [[ "${default_binding}" -gt 0 ]]; then + echo "fail: default service accounts have non default bindings" + exit 1 +fi + +echo "--pass" +exit 0 +``` + +**Audit Execution:** + +``` +./5.1.5.sh +``` + +**Expected result**: + +``` +'--pass' is present +``` + +### 5.2 Pod Security Policies + +#### 5.2.2 Minimize the admission of containers wishing to share the host process ID namespace (Scored) + +**Result:** PASS + +**Remediation:** +Create a PSP as described in the Kubernetes documentation, ensuring that the +`.spec.hostPID` field is omitted or set to `false`. 
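+
+For illustration, a minimal `PodSecurityPolicy` that satisfies this check simply omits `hostPID`, which then defaults to `false`. The policy name `no-host-pid` is hypothetical, and a production policy, such as the `restricted` policy from the hardening guide, would constrain far more than this one field:
+
+``` yaml
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+  name: no-host-pid
+spec:
+  # hostPID is omitted and therefore defaults to false
+  runAsUser:
+    rule: RunAsAny
+  seLinux:
+    rule: RunAsAny
+  supplementalGroups:
+    rule: RunAsAny
+  fsGroup:
+    rule: RunAsAny
+  volumes:
+  - '*'
+```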
+ +**Audit:** + +``` +kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` + +**Expected result**: + +``` +1 is greater than 0 +``` + +#### 5.2.3 Minimize the admission of containers wishing to share the host IPC namespace (Scored) + +**Result:** PASS + +**Remediation:** +Create a PSP as described in the Kubernetes documentation, ensuring that the +`.spec.hostIPC` field is omitted or set to `false`. + +**Audit:** + +``` +kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` + +**Expected result**: + +``` +1 is greater than 0 +``` + +#### 5.2.4 Minimize the admission of containers wishing to share the host network namespace (Scored) + +**Result:** PASS + +**Remediation:** +Create a PSP as described in the Kubernetes documentation, ensuring that the +`.spec.hostNetwork` field is omitted or set to `false`. + +**Audit:** + +``` +kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` + +**Expected result**: + +``` +1 is greater than 0 +``` + +#### 5.2.5 Minimize the admission of containers with `allowPrivilegeEscalation` (Scored) + +**Result:** PASS + +**Remediation:** +Create a PSP as described in the Kubernetes documentation, ensuring that the +`.spec.allowPrivilegeEscalation` field is omitted or set to `false`. + +**Audit:** + +``` +kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` + +**Expected result**: + +``` +1 is greater than 0 +``` + +### 5.3 Network Policies and CNI + +#### 5.3.2 Ensure that all Namespaces have Network Policies defined (Scored) + +**Result:** PASS + +**Remediation:** +Follow the documentation and create `NetworkPolicy` objects as you need them. + +**Audit Script:** 5.3.2.sh + +``` +#!/bin/bash -e + +export KUBECONFIG=${KUBECONFIG:-"/root/.kube/config"} + +kubectl version > /dev/null +if [ $? -ne 0 ]; then + echo "fail: kubectl failed" + exit 1 +fi + +for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do + policy_count=$(kubectl get networkpolicy -n ${namespace} -o json | jq '.items | length') + if [ ${policy_count} -eq 0 ]; then + echo "fail: ${namespace}" + exit 1 + fi +done + +echo "pass" +``` + +**Audit Execution:** + +``` +./5.3.2.sh +``` + +**Expected result**: + +``` +'pass' is present +``` + +### 5.6 General Policies + +#### 5.6.4 The default namespace should not be used (Scored) + +**Result:** PASS + +**Remediation:** +Ensure that namespaces are created to allow for appropriate segregation of Kubernetes +resources and that all new resources are created in a specific namespace. + +**Audit Script:** 5.6.4.sh + +``` +#!/bin/bash -e + +export KUBECONFIG=${KUBECONFIG:-/root/.kube/config} + +kubectl version > /dev/null +if [[ $? 
-gt 0 ]]; then
+  echo "fail: kubectl failed"
+  exit 1
+fi
+
+default_resources=$(kubectl get all -o json | jq --compact-output '.items[] | select((.kind == "Service") and (.metadata.name == "kubernetes") and (.metadata.namespace == "default") | not)' | wc -l)
+
+echo "--count=${default_resources}"
+```
+
+**Audit Execution:**
+
+```
+./5.6.4.sh
+```
+
+**Expected result**:
+
+```
+'0' is equal to '0'
+```
+
diff --git a/content/rancher/v2.x/en/security/rancher-2.4/hardening-2.4/_index.md b/content/rancher/v2.x/en/security/rancher-2.4/hardening-2.4/_index.md
new file mode 100644
index 00000000000..583080c10af
--- /dev/null
+++ b/content/rancher/v2.x/en/security/rancher-2.4/hardening-2.4/_index.md
@@ -0,0 +1,722 @@
+---
+title: Hardening Guide v2.4
+weight: 99
+aliases:
+  - /rancher/v2.x/en/security/hardening-2.4
+---
+
+This document provides prescriptive guidance for hardening a production installation of Rancher v2.4. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).
+
+> This hardening guide describes how to secure the nodes in your cluster, and it is recommended to follow this guide before installing Kubernetes.
+
+This hardening guide is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:
+
+Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version
+------------------------|----------------|-----------------------|------------------
+Hardening Guide v2.4 | Rancher v2.4 | Benchmark v1.5 | Kubernetes 1.15
+
+[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.4/Rancher_Hardening_Guide.pdf)
+
+### Overview
+
+This document provides prescriptive guidance for hardening a production installation of Rancher v2.4 with Kubernetes v1.15. It outlines the configurations required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).
+
+For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.4]({{< baseurl >}}/rancher/v2.x/en/security/benchmark-2.4/).
+
+#### Known Issues
+
+- Rancher **exec shell** and **view logs** for pods are **not** functional in a CIS 1.5 hardened setup when only a public IP is provided when registering custom nodes. This functionality requires a private IP to be provided when registering the custom nodes.
+- When setting the `default_pod_security_policy_template_id:` to `restricted`, Rancher creates **RoleBindings** and **ClusterRoleBindings** on the default service accounts. The CIS 1.5 check 5.1.5 requires that the default service accounts have no roles or cluster roles bound to them apart from the defaults. In addition, the default service accounts should be configured so that they do not provide a service account token and do not have any explicit rights assignments.
+
+### Configure Kernel Runtime Parameters
+
+The following `sysctl` configuration is recommended for all node types in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`:
+
+```
+vm.overcommit_memory=1
+vm.panic_on_oom=0
+kernel.panic=10
+kernel.panic_on_oops=1
+kernel.keys.root_maxbytes=25000000
+```
+
+Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.
+
+### Configure `etcd` user and group
+A user account and group for the **etcd** service must be set up before installing RKE.
The **uid** and **gid** of the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories at installation time.
+
+#### Create `etcd` user and group
+To create the **etcd** user and group, run the following console commands.
+
+The **uid** and **gid** of `52034` in the commands below are for example purposes. Any valid unused **uid** or **gid** could be used in lieu of `52034`.
+
+```
+groupadd --gid 52034 etcd
+useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
+```
+
+Update the RKE **config.yml** with the **uid** and **gid** of the **etcd** user:
+
+``` yaml
+services:
+  etcd:
+    gid: 52034
+    uid: 52034
+```
+
+#### Set `automountServiceAccountToken` to `false` for `default` service accounts
+Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.
+
+For each namespace, including **default** and **kube-system**, on a standard RKE install, the **default** service account must include this value:
+
+```
+automountServiceAccountToken: false
+```
+
+Save the following YAML to a file called `account_update.yaml`:
+
+``` yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: default
+automountServiceAccountToken: false
+```
+
+Create a bash script file called `account_update.sh`. Be sure to `chmod +x account_update.sh` so the script has execute permissions.
+
+```
+#!/bin/bash -e
+
+for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
+  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
+done
+```
+
+### Ensure that all Namespaces have Network Policies defined
+
+Running different applications on the same Kubernetes cluster creates a risk of one
+compromised application attacking a neighboring application. Network segmentation is
+important to ensure that containers can communicate only with those they are supposed
+to. A network policy is a specification of how selections of pods are allowed to
+communicate with each other and with other network endpoints.
+
+Network Policies are namespace scoped. When a network policy is introduced to a given
+namespace, all traffic not allowed by the policy is denied. However, if there are no network
+policies in a namespace, all traffic will be allowed into and out of the pods in that
+namespace. To enforce network policies, a CNI (container network interface) plugin must be enabled.
+This guide uses [canal](https://github.com/projectcalico/canal) to provide the policy enforcement.
+Additional information about CNI providers can be found
+[here.](https://rancher.com/blog/2019/2019-03-21-comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/)
+
+Once a CNI provider is enabled on a cluster, a default network policy can be applied. For reference purposes, a
+**permissive** example is provided below. If you want to allow all traffic to all pods in a namespace
+(even if policies are added that cause some pods to be treated as “isolated”),
+you can create a policy that explicitly allows all traffic in that namespace. Save the following `yaml` as
+`default-allow-all.yaml`.
Additional [documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
+about network policies can be found on the Kubernetes site.
+
+> This `NetworkPolicy` is not recommended for production use.
+
+``` yaml
+---
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-allow-all
+spec:
+  podSelector: {}
+  ingress:
+  - {}
+  egress:
+  - {}
+  policyTypes:
+  - Ingress
+  - Egress
+```
+
+Create a bash script file called `apply_networkPolicy_to_all_ns.sh`. Be sure to
+`chmod +x apply_networkPolicy_to_all_ns.sh` so the script has execute permissions.
+
+```
+#!/bin/bash -e
+
+for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
+  kubectl apply -f default-allow-all.yaml -n ${namespace}
+done
+```
+
+Execute this script to apply the `default-allow-all.yaml` **permissive** `NetworkPolicy` to all namespaces.
+
+### Reference Hardened RKE `cluster.yml` configuration
+The reference `cluster.yml` is used by the RKE CLI and provides the configuration needed to achieve a hardened install
+of Rancher Kubernetes Engine (RKE). Install [documentation](https://rancher.com/docs/rke/latest/en/installation/) is
+provided with additional details about the configuration items. This reference `cluster.yml` does not include the required **nodes** directive, which will vary depending on your environment. Documentation for node configuration can be found here: https://rancher.com/docs/rke/latest/en/config-options/nodes
+
+``` yaml
+# If you intend to deploy Kubernetes in an air-gapped environment,
+# please consult the documentation on how to configure custom RKE images.
+kubernetes_version: "v1.15.9-rancher1-1"
+enable_network_policy: true
+default_pod_security_policy_template_id: "restricted"
+# the nodes directive is required and will vary depending on your environment
+# documentation for node configuration can be found here:
+# https://rancher.com/docs/rke/latest/en/config-options/nodes
+nodes:
+services:
+  etcd:
+    uid: 52034
+    gid: 52034
+  kube-api:
+    pod_security_policy: true
+    secrets_encryption_config:
+      enabled: true
+    audit_log:
+      enabled: true
+    admission_configuration:
+    event_rate_limit:
+      enabled: true
+  kube-controller:
+    extra_args:
+      feature-gates: "RotateKubeletServerCertificate=true"
+  scheduler:
+    image: ""
+    extra_args: {}
+    extra_binds: []
+    extra_env: []
+  kubelet:
+    generate_serving_certificate: true
+    extra_args:
+      feature-gates: "RotateKubeletServerCertificate=true"
+      protect-kernel-defaults: "true"
+      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
+    extra_binds: []
+    extra_env: []
+    cluster_domain: ""
+    infra_container_image: ""
+    cluster_dns_server: ""
+    fail_swap_on: false
+  kubeproxy:
+    image: ""
+    extra_args: {}
+    extra_binds: []
+    extra_env: []
+network:
+  plugin: ""
+  options: {}
+  mtu: 0
+  node_selector: {}
+authentication:
+  strategy: ""
+  sans: []
+  webhook: null
+addons: |
+  ---
+  apiVersion: v1
+  kind: Namespace
+  metadata:
+    name: ingress-nginx
+  ---
+  apiVersion: rbac.authorization.k8s.io/v1
+  kind: Role
+  metadata:
+    name: default-psp-role
+    namespace: ingress-nginx
+  rules:
+  - apiGroups:
+    - extensions
+    resourceNames:
+    - default-psp
+    resources:
+    - podsecuritypolicies
+    verbs:
+    - use
+  ---
+  apiVersion:
rbac.authorization.k8s.io/v1 + kind: RoleBinding + metadata: + name: default-psp-rolebinding + namespace: ingress-nginx + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: default-psp-role + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:serviceaccounts + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:authenticated + --- + apiVersion: v1 + kind: Namespace + metadata: + name: cattle-system + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: Role + metadata: + name: default-psp-role + namespace: cattle-system + rules: + - apiGroups: + - extensions + resourceNames: + - default-psp + resources: + - podsecuritypolicies + verbs: + - use + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: RoleBinding + metadata: + name: default-psp-rolebinding + namespace: cattle-system + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: default-psp-role + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:serviceaccounts + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:authenticated + --- + apiVersion: policy/v1beta1 + kind: PodSecurityPolicy + metadata: + name: restricted + spec: + requiredDropCapabilities: + - NET_RAW + privileged: false + allowPrivilegeEscalation: false + defaultAllowPrivilegeEscalation: false + fsGroup: + rule: RunAsAny + runAsUser: + rule: MustRunAsNonRoot + seLinux: + rule: RunAsAny + supplementalGroups: + rule: RunAsAny + volumes: + - emptyDir + - secret + - persistentVolumeClaim + - downwardAPI + - configMap + - projected + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: psp:restricted + rules: + - apiGroups: + - extensions + resourceNames: + - restricted + resources: + - podsecuritypolicies + verbs: + - use + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: psp:restricted + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: psp:restricted + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:serviceaccounts + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:authenticated + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: tiller + namespace: kube-system + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: tiller + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: cluster-admin + subjects: + - kind: ServiceAccount + name: tiller + namespace: kube-system + +addons_include: [] +system_images: + etcd: "" + alpine: "" + nginx_proxy: "" + cert_downloader: "" + kubernetes_services_sidecar: "" + kubedns: "" + dnsmasq: "" + kubedns_sidecar: "" + kubedns_autoscaler: "" + coredns: "" + coredns_autoscaler: "" + kubernetes: "" + flannel: "" + flannel_cni: "" + calico_node: "" + calico_cni: "" + calico_controllers: "" + calico_ctl: "" + calico_flexvol: "" + canal_node: "" + canal_cni: "" + canal_flannel: "" + canal_flexvol: "" + weave_node: "" + weave_cni: "" + pod_infra_container: "" + ingress: "" + ingress_backend: "" + metrics_server: "" + windows_pod_infra_container: "" +ssh_key_path: "" +ssh_cert_path: "" +ssh_agent_auth: false +authorization: + mode: "" + options: {} +ignore_docker_version: false +private_registries: [] +ingress: + provider: "" + options: {} + node_selector: {} + extra_args: {} + dns_policy: "" + extra_envs: [] + extra_volumes: [] + extra_volume_mounts: [] +cluster_name: "" +prefix_path: "" 
+addon_job_timeout: 0
+bastion_host:
+  address: ""
+  port: ""
+  user: ""
+  ssh_key: ""
+  ssh_key_path: ""
+  ssh_cert: ""
+  ssh_cert_path: ""
+monitoring:
+  provider: ""
+  options: {}
+  node_selector: {}
+restore:
+  restore: false
+  snapshot_name: ""
+dns: null
+```
+
+### Reference Hardened RKE Template configuration
+
+The reference RKE Template provides the configuration needed to achieve a hardened install of Kubernetes.
+RKE Templates are used to provision Kubernetes and define Rancher settings. Follow the Rancher
+[documentation](https://rancher.com/docs/rancher/v2.x/en/installation) for additional installation and RKE Template details.
+
+``` yaml
+#
+# Cluster Config
+#
+default_pod_security_policy_template_id: restricted
+docker_root_dir: /var/lib/docker
+enable_cluster_alerting: false
+enable_cluster_monitoring: false
+enable_network_policy: true
+#
+# Rancher Config
+#
+rancher_kubernetes_engine_config:
+  addon_job_timeout: 30
+  addons: |-
+    ---
+    apiVersion: v1
+    kind: Namespace
+    metadata:
+      name: ingress-nginx
+    ---
+    apiVersion: rbac.authorization.k8s.io/v1
+    kind: Role
+    metadata:
+      name: default-psp-role
+      namespace: ingress-nginx
+    rules:
+    - apiGroups:
+      - extensions
+      resourceNames:
+      - default-psp
+      resources:
+      - podsecuritypolicies
+      verbs:
+      - use
+    ---
+    apiVersion: rbac.authorization.k8s.io/v1
+    kind: RoleBinding
+    metadata:
+      name: default-psp-rolebinding
+      namespace: ingress-nginx
+    roleRef:
+      apiGroup: rbac.authorization.k8s.io
+      kind: Role
+      name: default-psp-role
+    subjects:
+    - apiGroup: rbac.authorization.k8s.io
+      kind: Group
+      name: system:serviceaccounts
+    - apiGroup: rbac.authorization.k8s.io
+      kind: Group
+      name: system:authenticated
+    ---
+    apiVersion: v1
+    kind: Namespace
+    metadata:
+      name: cattle-system
+    ---
+    apiVersion: rbac.authorization.k8s.io/v1
+    kind: Role
+    metadata:
+      name: default-psp-role
+      namespace: cattle-system
+    rules:
+    - apiGroups:
+      - extensions
+      resourceNames:
+      - default-psp
+      resources:
+      - podsecuritypolicies
+      verbs:
+      - use
+    ---
+    apiVersion: rbac.authorization.k8s.io/v1
+    kind: RoleBinding
+    metadata:
+      name: default-psp-rolebinding
+      namespace: cattle-system
+    roleRef:
+      apiGroup: rbac.authorization.k8s.io
+      kind: Role
+      name: default-psp-role
+    subjects:
+    - apiGroup: rbac.authorization.k8s.io
+      kind: Group
+      name: system:serviceaccounts
+    - apiGroup: rbac.authorization.k8s.io
+      kind: Group
+      name: system:authenticated
+    ---
+    apiVersion: policy/v1beta1
+    kind: PodSecurityPolicy
+    metadata:
+      name: restricted
+    spec:
+      requiredDropCapabilities:
+      - NET_RAW
+      privileged: false
+      allowPrivilegeEscalation: false
+      defaultAllowPrivilegeEscalation: false
+      fsGroup:
+        rule: RunAsAny
+      runAsUser:
+        rule: MustRunAsNonRoot
+      seLinux:
+        rule: RunAsAny
+      supplementalGroups:
+        rule: RunAsAny
+      volumes:
+      - emptyDir
+      - secret
+      - persistentVolumeClaim
+      - downwardAPI
+      - configMap
+      - projected
+    ---
+    apiVersion: rbac.authorization.k8s.io/v1
+    kind: ClusterRole
+    metadata:
+      name: psp:restricted
+    rules:
+    - apiGroups:
+      - extensions
+      resourceNames:
+      - restricted
+      resources:
+      - podsecuritypolicies
+      verbs:
+      - use
+    ---
+    apiVersion: rbac.authorization.k8s.io/v1
+    kind: ClusterRoleBinding
+    metadata:
+      name: psp:restricted
+    roleRef:
+      apiGroup: rbac.authorization.k8s.io
+      kind: ClusterRole
+      name: psp:restricted
+    subjects:
+    - apiGroup: rbac.authorization.k8s.io
+      kind: Group
+      name: system:serviceaccounts
+    - apiGroup: rbac.authorization.k8s.io
+      kind: Group
+      name: system:authenticated
+    ---
+    apiVersion: v1
+    kind:
ServiceAccount
+    metadata:
+      name: tiller
+      namespace: kube-system
+    ---
+    apiVersion: rbac.authorization.k8s.io/v1
+    kind: ClusterRoleBinding
+    metadata:
+      name: tiller
+    roleRef:
+      apiGroup: rbac.authorization.k8s.io
+      kind: ClusterRole
+      name: cluster-admin
+    subjects:
+    - kind: ServiceAccount
+      name: tiller
+      namespace: kube-system
+  ignore_docker_version: true
+  kubernetes_version: v1.15.9-rancher1-1
+#
+# If you are using calico on AWS
+#
+#   network:
+#     plugin: calico
+#     calico_network_provider:
+#       cloud_provider: aws
+#
+# # To specify flannel interface
+#
+#   network:
+#     plugin: flannel
+#     flannel_network_provider:
+#       iface: eth1
+#
+# # To specify flannel interface for canal plugin
+#
+#   network:
+#     plugin: canal
+#     canal_network_provider:
+#       iface: eth1
+#
+  network:
+    mtu: 0
+    plugin: canal
+#
+#   services:
+#     kube-api:
+#       service_cluster_ip_range: 10.43.0.0/16
+#     kube-controller:
+#       cluster_cidr: 10.42.0.0/16
+#       service_cluster_ip_range: 10.43.0.0/16
+#     kubelet:
+#       cluster_domain: cluster.local
+#       cluster_dns_server: 10.43.0.10
+#
+  services:
+    etcd:
+      backup_config:
+        enabled: false
+        interval_hours: 12
+        retention: 6
+        safe_timestamp: false
+      creation: 12h
+      extra_args:
+        election-timeout: '5000'
+        heartbeat-interval: '500'
+      gid: 52034
+      retention: 72h
+      snapshot: false
+      uid: 52034
+    kube_api:
+      always_pull_images: false
+      audit_log:
+        enabled: true
+      event_rate_limit:
+        enabled: true
+      pod_security_policy: true
+      secrets_encryption_config:
+        enabled: true
+      service_node_port_range: 30000-32767
+    kube_controller:
+      extra_args:
+        address: 127.0.0.1
+        feature-gates: RotateKubeletServerCertificate=true
+        profiling: 'false'
+        terminated-pod-gc-threshold: '1000'
+    kubelet:
+      extra_args:
+        anonymous-auth: 'false'
+        event-qps: '0'
+        feature-gates: RotateKubeletServerCertificate=true
+        make-iptables-util-chains: 'true'
+        protect-kernel-defaults: 'true'
+        streaming-connection-idle-timeout: 1800s
+        tls-cipher-suites: >-
+          TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
+      fail_swap_on: false
+      generate_serving_certificate: true
+    scheduler:
+      extra_args:
+        address: 127.0.0.1
+        profiling: 'false'
+  ssh_agent_auth: false
+windows_prefered_cluster: false
+```
+
+### Hardened Reference Ubuntu 18.04 LTS **cloud-config**
+
+The reference **cloud-config** is generally used in cloud infrastructure environments to allow for
+configuration management of compute instances. It configures Ubuntu operating system level settings
+needed before installing Kubernetes.
+ +``` yaml +#cloud-config +packages: + - curl + - jq +runcmd: + - sysctl -w vm.overcommit_memory=1 + - sysctl -w kernel.panic=10 + - sysctl -w kernel.panic_on_oops=1 + - curl https://releases.rancher.com/install-docker/18.09.sh | sh + - usermod -aG docker ubuntu + - return=1; while [ $return != 0 ]; do sleep 2; docker ps; return=$?; done + - addgroup --gid 52034 etcd + - useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd +write_files: + - path: /etc/sysctl.d/kubelet.conf + owner: root:root + permissions: "0644" + content: | + vm.overcommit_memory=1 + kernel.panic=10 + kernel.panic_on_oops=1 +``` diff --git a/content/rancher/v2.x/en/security/rancher-2.5/_index.md b/content/rancher/v2.x/en/security/rancher-2.5/_index.md new file mode 100644 index 00000000000..02736328b26 --- /dev/null +++ b/content/rancher/v2.x/en/security/rancher-2.5/_index.md @@ -0,0 +1,6 @@ +--- +title: Rancher v2.5 +weight: 1 +--- + +This section contains the hardening and self-assessment guides for Rancher v2.5. \ No newline at end of file diff --git a/content/rancher/v2.x/en/security/rancher-2.5/benchmark-2.5/_index.md b/content/rancher/v2.x/en/security/rancher-2.5/benchmark-2.5/_index.md new file mode 100644 index 00000000000..4631d4c984f --- /dev/null +++ b/content/rancher/v2.x/en/security/rancher-2.5/benchmark-2.5/_index.md @@ -0,0 +1,4 @@ +--- +title: Self Assessment Guide +weight: 2 +--- \ No newline at end of file diff --git a/content/rancher/v2.x/en/security/rancher-2.5/hardening-2.5/_index.md b/content/rancher/v2.x/en/security/rancher-2.5/hardening-2.5/_index.md new file mode 100644 index 00000000000..cfb41e6e16a --- /dev/null +++ b/content/rancher/v2.x/en/security/rancher-2.5/hardening-2.5/_index.md @@ -0,0 +1,4 @@ +--- +title: Hardening Guide +weight: 1 +--- \ No newline at end of file diff --git a/content/rancher/v2.x/en/security/security-scan/_index.md b/content/rancher/v2.x/en/security/security-scan/_index.md index 3782ebb6a3e..644b7f906af 100644 --- a/content/rancher/v2.x/en/security/security-scan/_index.md +++ b/content/rancher/v2.x/en/security/security-scan/_index.md @@ -1,256 +1,6 @@ --- title: Security Scans -weight: 1 +weight: 299 --- -_Available as of v2.4.0_ - -Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. - -The Center for Internet Security (CIS) is a 501(c)(3) nonprofit organization, formed in October 2000, with a mission is to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace". The organization is headquartered in East Greenbush, New York, with members including large corporations, government agencies, and academic institutions. - -CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team. - -The Benchmark provides recommendations of two types: Scored and Not Scored. We run tests related to only Scored recommendations. 
- -- [About the CIS Benchmark](#about-the-cis-benchmark) -- [About the generated report](#about-the-generated-report) -- [Test profiles](#test-profiles) -- [Skipped and not applicable tests](#skipped-and-not-applicable-tests) - - [CIS Benchmark v1.4 skipped tests](#cis-benchmark-v1-4-skipped-tests) - - [CIS Benchmark v1.4 not applicable tests](#cis-benchmark-v1-4-not-applicable-tests) -- [Prerequisites](#prerequisites) -- [Running a scan](#running-a-scan) -- [Scheduling recurring scans](#scheduling-recurring-scans) -- [Skipping tests](#skipping-tests) -- [Setting alerts](#setting-alerts) -- [Deleting a report](#deleting-a-report) -- [Downloading a report](#downloading-a-report) - -# About the CIS Benchmark - -The Center for Internet Security is a 501(c)(3) nonprofit organization, formed in October 2000, with a mission is to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace". The organization is headquartered in East Greenbush, New York, with members including large corporations, government agencies, and academic institutions. - -CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team. - -The official Benchmark documents are available through the CIS website. The sign-up form to access the documents is [here.](https://learn.cisecurity.org/benchmarks) - -To check clusters for CIS Kubernetes Benchmark compliance, the security scan leverages [kube-bench,](https://github.com/aquasecurity/kube-bench) an open-source tool from Aqua Security. - -# About the Generated Report - -Each scan generates a report can be viewed in the Rancher UI and can be downloaded in CSV format. - -As of Rancher v2.4, the scan will use the CIS Benchmark v1.4. The Benchmark version is included in the generated report. - -The Benchmark provides recommendations of two types: Scored and Not Scored. Recommendations marked as Not Scored in the Benchmark are not included in the generated report. - -Some tests are designated as "Not Applicable." These tests will not be run on any CIS scan because of the way that Rancher provisions RKE clusters. For information on how test results can be audited, and why some tests are designated to be not applicable, refer to Rancher's [self-assessment guide for the corresponding Kubernetes version.]({{}}/rancher/v2.x/en/security/#the-cis-benchmark-and-self-assessment) - -The report contains the following information: - -| Column in Report | Description | -|------------------|-------------| -| ID | The ID number of the CIS Benchmark. | -| Description | The description of the CIS Benchmark test. | -| Remediation | What needs to be fixed in order to pass the test. | -| State of Test | Indicates if the test passed, failed, was skipped, or was not applicable. | -| Node type | The node role, which affects which tests are run on the node. Master tests are run on controlplane nodes, etcd tests are run on etcd nodes, and node tests are run on the worker nodes. | -| Nodes | The name(s) of the node that the test was run on. | -| Passed_Nodes | The name(s) of the nodes that the test passed on. | -| Failed_Nodes | The name(s) of the nodes that the test failed on. 
| - -Refer to [the table in the cluster hardening guide]({{}}/rancher/v2.x/en/security/#rancher-hardening-guide) for information on which versions of Kubernetes, the Benchmark, Rancher, and our cluster hardening guide correspond to each other. Also refer to the hardening guide for configuration files of CIS-compliant clusters and information on remediating failed tests. - -# Test Profiles - -For every CIS benchmark version, Rancher ships with two types of profiles. These profiles are named based on the type of cluster (e.g. `RKE`), the CIS benchmark version (e.g. CIS 1.4) and the profile type (e.g. `Permissive` or `Hardened`). For example, a full profile name would be `RKE-CIS-1.4-Permissive` - -All profiles will have a set of not applicable tests that will be skipped during the CIS scan. These tests are not applicable based on how a RKE cluster manages Kubernetes. - -There are 2 types of profiles: - -- **Permissive:** This profile has a set of tests that have been will be skipped as these tests will fail on a default RKE Kubernetes cluster. Besides the list of skipped tests, the profile will also not run the not applicable tests. -- **Hardened:** This profile will not skip any tests, except for the non-applicable tests. - -In order to pass the "Hardened" profile, you will need to follow the steps on the [hardening guide]({{}}/rancher/v2.x/en/security/#rancher-hardening-guide) and use the `cluster.yml` defined in the hardening guide to provision a hardened cluster. - -# Skipped and Not Applicable Tests - -### CIS Benchmark v1.4 Skipped Tests - -Number | Description | Reason for Skipping ----|---|--- -1.1.11 | "Ensure that the admission control plugin AlwaysPullImages is set (Scored)" | Enabling AlwaysPullImages can use significant bandwidth. -1.1.21 | "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Scored)" | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. -1.1.24 | "Ensure that the admission control plugin PodSecurityPolicy is set (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail. -1.1.34 | "Ensure that the --encryption-provider-config argument is set as appropriate (Scored)" | Enabling encryption changes how data can be recovered as data is encrypted. -1.1.35 | "Ensure that the encryption provider is set to aescbc (Scored)" | Enabling encryption changes how data can be recovered as data is encrypted. -1.1.36 | "Ensure that the admission control plugin EventRateLimit is set (Scored)" | EventRateLimit needs to be tuned depending on the cluster. -1.2.2 | "Ensure that the --address argument is set to 127.0.0.1 (Scored)" | Adding this argument prevents Rancher's monitoring tool to collect metrics on the scheduler. -1.3.7 | "Ensure that the --address argument is set to 127.0.0.1 (Scored)" | Adding this argument prevents Rancher's monitoring tool to collect metrics on the controller manager. -1.4.12 | "Ensure that the etcd data directory ownership is set to etcd:etcd (Scored)" | A system service account is required for etcd data directory ownership. Refer to Rancher's hardening guide for more details on how to configure this ownership. -1.7.2 | "Do not admit containers wishing to share the host process ID namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail. 
-1.7.3 | "Do not admit containers wishing to share the host IPC namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail. -1.7.4 | "Do not admit containers wishing to share the host network namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail. -1.7.5 | " Do not admit containers with allowPrivilegeEscalation (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail. -2.1.6 | "Ensure that the --protect-kernel-defaults argument is set to true (Scored)" | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true. -2.1.10 | "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored)" | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. - -### CIS Benchmark v1.4 Not Applicable Tests - -Number | Description | Reason for being not applicable ----|---|--- -1.1.9 | "Ensure that the --repair-malformed-updates argument is set to false (Scored)" | The argument --repair-malformed-updates has been removed as of Kubernetes version 1.14 -1.3.6 | "Ensure that the RotateKubeletServerCertificate argument is set to true" | Cluster provisioned by RKE handles certificate rotation directly through RKE. -1.4.1 | "Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -1.4.2 | "Ensure that the API server pod specification file ownership is set to root:root (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -1.4.3 | "Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -1.4.4 | "Ensure that the controller manager pod specification file ownership is set to root:root (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -1.4.5 | "Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -1.4.6 | "Ensure that the scheduler pod specification file ownership is set to root:root (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -1.4.7 | "Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd. -1.4.8 | "Ensure that the etcd pod specification file ownership is set to root:root (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd. -1.4.13 | "Ensure that the admin.conf file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. -1.4.14 | "Ensure that the admin.conf file ownership is set to root:root (Scored)" | Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. 
-2.1.8 | "Ensure that the --hostname-override argument is not set (Scored)" | Clusters provisioned by RKE clusters and most cloud providers require hostnames. -2.1.12 | "Ensure that the --rotate-certificates argument is not set to false (Scored)" | Cluster provisioned by RKE handles certificate rotation directly through RKE. -2.1.13 | "Ensure that the RotateKubeletServerCertificate argument is set to true (Scored)" | Cluster provisioned by RKE handles certificate rotation directly through RKE. -2.2.3 | "Ensure that the kubelet service file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE doesn’t require or maintain a configuration file for the kubelet service. -2.2.4 | "Ensure that the kubelet service file ownership is set to root:root (Scored)" | Cluster provisioned by RKE doesn’t require or maintain a configuration file for the kubelet service. -2.2.9 | "Ensure that the kubelet configuration file ownership is set to root:root (Scored)" | RKE doesn’t require or maintain a configuration file for the kubelet. -2.2.10 | "Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Scored)" | RKE doesn’t require or maintain a configuration file for the kubelet. - - -# Prerequisites - -To run security scans on a cluster and access the generated reports, you must be an [Administrator]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [Cluster Owner.]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/) - -Rancher can only run security scans on clusters that were created with RKE, which includes custom clusters and clusters that Rancher created in an infrastructure provider such as Amazon EC2 or GCE. Imported clusters and clusters in hosted Kubernetes providers can't be scanned by Rancher. - -The security scan cannot run in a cluster that has Windows nodes. - -You will only be able to see the CIS scan reports for clusters that you have access to. - -# Running a Scan - -1. From the cluster view in Rancher, click **Tools > CIS Scans.** -1. Click **Run Scan.** -1. Choose a CIS scan profile. - -**Result:** A report is generated and displayed in the **CIS Scans** page. To see details of the report, click the report's name. - -# Scheduling Recurring Scans - -Recurring scans can be scheduled to run on any RKE Kubernetes cluster. - -To enable recurring scans, edit the advanced options in the cluster configuration during cluster creation or after the cluster has been created. - -To schedule scans for an existing cluster: - -1. Go to the cluster view in Rancher. -1. Click **Tools > CIS Scans.** -1. Click **Add Schedule.** This takes you to the section of the cluster editing page that is applicable to configuring a schedule for CIS scans. (This section can also be reached by going to the cluster view, clicking **⋮ > Edit,** and going to the **Advanced Options.**) -1. In the **CIS Scan Enabled** field, click **Yes.** -1. In the **CIS Scan Profile** field, choose a **Permissive** or **Hardened** profile. The corresponding CIS Benchmark version is included in the profile name. Note: Any skipped tests [defined in a separate ConfigMap](#skipping-tests) will be skipped regardless of whether a **Permissive** or **Hardened** profile is selected. When selecting the the permissive profile, you should see which tests were skipped by Rancher (tests that are skipped by default for RKE clusters) and which tests were skipped by a Rancher user. In the hardened test profile, the only skipped tests will be skipped by users. -1. 
In the **CIS Scan Interval (cron)** job, enter a [cron expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) to define how often the cluster will be scanned. -1. In the **CIS Scan Report Retention** field, enter the number of past reports that should be kept. - -**Result:** The security scan will run and generate reports at the scheduled intervals. - -The test schedule can be configured in the `cluster.yml`: - -```yaml -scheduled_cluster_scan: -    enabled: true -    scan_config: -        cis_scan_config: -            override_benchmark_version: rke-cis-1.4 -            profile: permissive -    schedule_config: -        cron_schedule: 0 0 * * * -        retention: 24 -``` - - -# Skipping Tests - -You can define a set of tests that will be skipped by the CIS scan when the next report is generated. - -These tests will be skipped for subsequent CIS scans, including both manually triggered and scheduled scans, and the tests will be skipped with any profile. - -The skipped tests will be listed alongside the test profile name in the cluster configuration options when a test profile is selected for a recurring cluster scan. The skipped tests will also be shown every time a scan is triggered manually from the Rancher UI by clicking **Run Scan.** The display of skipped tests allows you to know ahead of time which tests will be run in each scan. - -To skip tests, you will need to define them in a Kubernetes ConfigMap resource. Each skipped CIS scan test is listed in the ConfigMap alongside the version of the CIS benchmark that the test belongs to. - -To skip tests by editing a ConfigMap resource, - -1. Create a `security-scan` namespace. -1. Create a ConfigMap named `security-scan-cfg`. -1. Enter the skip information under the key `config.json` in the following format: - - ```json - { - "skip": { - "rke-cis-1.4": [ - "1.1.1", - "1.2.2" - ] - } - } - ``` - - In the example above, the CIS benchmark version is specified alongside the tests to be skipped for that version. - -**Result:** These tests will be skipped on subsequent scans that use the defined CIS Benchmark version. - -# Setting Alerts - -Rancher provides a set of alerts for cluster scans. which are not configured to have notifiers by default: - -- A manual cluster scan was completed -- A manual cluster scan has failures -- A scheduled cluster scan was completed -- A scheduled cluster scan has failures - -> **Prerequisite:** You need to configure a [notifier]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) before configuring, sending, or receiving alerts. - -To activate an existing alert for a CIS scan result, - -1. From the cluster view in Rancher, click **Tools > Alerts.** -1. Go to the section called **A set of alerts for cluster scans.** -1. Go to the alert you want to activate and click **⋮ > Activate.** -1. Go to the alert rule group **A set of alerts for cluster scans** and click **⋮ > Edit.** -1. Scroll down to the **Alert** section. In the **To** field, select the notifier that you would like to use for sending alert notifications. -1. Optional: To limit the frequency of the notifications, click on **Show advanced options** and configure the time interval of the alerts. -1. Click **Save.** - -**Result:** The notifications will be triggered when the a scan is run on a cluster and the active alerts have satisfied conditions. - -To create a new alert, - -1. Go to the cluster view and click **Tools > CIS Scans.** -1. Click **Add Alert.** -1. Fill out the form. -1. Enter a name for the alert. -1. 
In the **Is** field, set the alert to be triggered when a scan is completed or when a scan has a failure. -1. In the **Send a** field, set the alert as a **Critical,** **Warning,** or **Info** alert level. -1. Choose a [notifier]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) for the alert. - -**Result:** The alert is created and activated. The notifications will be triggered when the a scan is run on a cluster and the active alerts have satisfied conditions. - -For more information about alerts, refer to [this page.]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/) - -# Deleting a Report - -1. From the cluster view in Rancher, click **Tools > CIS Scans.** -1. Go to the report that should be deleted. -1. Click the **⋮ > Delete.** -1. Click **Delete.** - -# Downloading a Report - -1. From the cluster view in Rancher, click **Tools > CIS Scans.** -1. Go to the report that you want to download. Click **⋮ > Download.** - -**Result:** The report is downloaded in CSV format. For more information on each columns, refer to the [section about the generated report.](#about-the-generated-report) +The documentation about CIS security scans has moved [here.]({{}}/rancher/v2.x/en/cis-scans) diff --git a/content/rancher/v2.x/en/system-tools/_index.md b/content/rancher/v2.x/en/system-tools/_index.md index 257f73cf171..b1dab07c826 100644 --- a/content/rancher/v2.x/en/system-tools/_index.md +++ b/content/rancher/v2.x/en/system-tools/_index.md @@ -1,6 +1,6 @@ --- title: System Tools -weight: 6001 +weight: 22 --- System Tools is a tool to perform operational tasks on [Rancher Launched Kubernetes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters or [installations of Rancher on an RKE cluster.]({{}}/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/) The tasks include: diff --git a/content/rancher/v2.x/en/troubleshooting/_index.md b/content/rancher/v2.x/en/troubleshooting/_index.md index edb5fb4f061..fc5f9154c7a 100644 --- a/content/rancher/v2.x/en/troubleshooting/_index.md +++ b/content/rancher/v2.x/en/troubleshooting/_index.md @@ -1,6 +1,6 @@ --- title: Troubleshooting -weight: 8100 +weight: 26 --- This section contains information to help you troubleshoot issues when using Rancher. diff --git a/content/rancher/v2.x/en/user-settings/_index.md b/content/rancher/v2.x/en/user-settings/_index.md index 6f163c24753..ba1a3bc6d64 100644 --- a/content/rancher/v2.x/en/user-settings/_index.md +++ b/content/rancher/v2.x/en/user-settings/_index.md @@ -1,6 +1,6 @@ --- title: User Settings -weight: 7000 +weight: 23 aliases: - /rancher/v2.x/en/tasks/user-settings/ --- diff --git a/content/rancher/v2.x/en/v1.6-migration/_index.md b/content/rancher/v2.x/en/v1.6-migration/_index.md index 0766c009821..27bb78e7b32 100644 --- a/content/rancher/v2.x/en/v1.6-migration/_index.md +++ b/content/rancher/v2.x/en/v1.6-migration/_index.md @@ -1,6 +1,6 @@ --- title: Migrating from v1.6 to v2.x -weight: 10000 +weight: 28 --- Rancher v2.x has been rearchitected and rewritten with the goal of providing a complete management solution for Kubernetes and Docker. Due to these extensive changes, there is no direct upgrade path from v1.6 to v2.x, but rather a migration of your v1.6 services into v2.x as Kubernetes workloads. In v1.6, the most common orchestration used was Rancher's own engine called Cattle. The following guide explains and educates our Cattle users on running workloads in a Kubernetes environment. 
diff --git a/content/rke/latest/en/tutorials/_index.md b/content/rke/latest/en/tutorials/_index.md
new file mode 100644
index 00000000000..fa01f89e4ff
--- /dev/null
+++ b/content/rke/latest/en/tutorials/_index.md
@@ -0,0 +1,4 @@
+---
+title: Tutorials
+weight: 10000
+---
\ No newline at end of file
diff --git a/content/rke/latest/en/tutorials/infra-for-ha/_index.md b/content/rke/latest/en/tutorials/infra-for-ha/_index.md
new file mode 100644
index 00000000000..3ebc52eacdd
--- /dev/null
+++ b/content/rke/latest/en/tutorials/infra-for-ha/_index.md
@@ -0,0 +1,62 @@
+---
+title: 'Set up Infrastructure for a High Availability RKE Kubernetes Cluster'
+weight: 2
+---
+
+> This page is under construction.
+
+In this section, you will create the infrastructure for a high-availability RKE cluster that can be used to install a Rancher server.
+
+To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure:
+
+- **Three Linux nodes,** typically virtual machines, in an infrastructure provider such as Amazon EC2, Google Compute Engine, or vSphere.
+- **A load balancer** to direct front-end traffic to the three nodes.
+- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
+
+These nodes must be in the same region/data center. You may place these servers in separate availability zones.
+
+### Why three nodes?
+
+In an RKE cluster, Rancher server data is stored on etcd. This etcd database runs on all three nodes.
+
+The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader because they have the majority of the total number of etcd nodes.
+
+### 1. Set up Linux Nodes
+
+Make sure that your nodes fulfill the general installation requirements for [OS, container runtime, hardware, and networking.]({{}}/rancher/v2.x/en/installation/requirements/)
+
+For an example of one way to set up Linux nodes, refer to this [tutorial]({{}}/rancher/v2.x/en/installation/options/ec2-node/) for setting up nodes as instances in Amazon EC2.
+
+### 2. Set up the Load Balancer
+
+You will also need to set up a load balancer to direct traffic to the Rancher replicas on all three nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.
+
+When Kubernetes is set up in a later step, the RKE tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
+
+When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the NGINX Ingress controller to listen for traffic destined for the Rancher hostname. The NGINX Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster.
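+
+Once the load balancer is in place, you can sanity-check this traffic path before DNS exists by sending a request for the Rancher hostname straight through the load balancer. A minimal sketch, where `rancher.example.com` and `203.0.113.10` (the load balancer address) are placeholders; before Rancher is installed, a `404` from the NGINX Ingress controller still confirms the path works:
+
+``` bash
+# Resolve the Rancher hostname to the load balancer address for this request only
+curl -kv --resolve rancher.example.com:443:203.0.113.10 https://rancher.example.com/
+```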
+
+For your implementation, consider whether you want or need to use a layer-4 or layer-7 load balancer:
+
+- **A layer-4 load balancer** is the simpler of the two choices, in which you forward TCP traffic to your nodes. We recommend configuring your load balancer as a layer-4 balancer, forwarding traffic to ports TCP/80 and TCP/443 of the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic to HTTPS and terminate SSL/TLS on port TCP/443. The Ingress controller will then forward traffic from port TCP/80 to the Ingress pod in the Rancher deployment.
+- **A layer-7 load balancer** is a bit more complicated but can offer features that you may want. For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize TLS termination in your infrastructure. Layer-7 load balancing also allows your load balancer to make routing decisions based on HTTP attributes, such as cookies, that a layer-4 load balancer cannot inspect. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. For more information, refer to the [Rancher Helm chart options.]({{}}/rancher/v2.x/en/installation/options/chart-options/#external-tls-termination)
+
+For an example showing how to set up an NGINX load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nginx/)
+
+For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nlb/)
+
+> **Important:**
+> Do not use this load balancer (i.e., the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications.
+
+### 3. Set up the DNS Record
+
+Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer.
+
+Depending on your environment, this may be an A record pointing to the load balancer IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on.
+
+You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one.
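+
+A quick way to verify the record before continuing is to resolve it from a workstation. A minimal sketch, using the hypothetical hostname `rancher.example.com`:
+
+``` bash
+# Expect the load balancer IP (A record) or the load balancer hostname (CNAME) in the answer
+dig rancher.example.com +short
+```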
+ +For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer) + +### [Next: Set up a Kubernetes Cluster]({{}}/rancher/v2.x/en/installation/resources/k8s-tutorials/ka-rke/) \ No newline at end of file diff --git a/static/img/rancher/banzai-cloud-logging-operator.png b/static/img/rancher/banzai-cloud-logging-operator.png new file mode 100644 index 00000000000..816d2406aef Binary files /dev/null and b/static/img/rancher/banzai-cloud-logging-operator.png differ diff --git a/static/img/rancher/fleet-architecture.png b/static/img/rancher/fleet-architecture.png new file mode 100644 index 00000000000..f8584482ca2 Binary files /dev/null and b/static/img/rancher/fleet-architecture.png differ diff --git a/static/img/rancher/longhorn-logo.png b/static/img/rancher/longhorn-logo.png new file mode 100644 index 00000000000..b5112fc6795 Binary files /dev/null and b/static/img/rancher/longhorn-logo.png differ