Merge pull request #3040 from btat/btat-patch-2

Replace 'prior to' with 'before'
Catherine Luse
2021-02-22 22:47:36 -07:00
committed by GitHub
109 changed files with 188 additions and 188 deletions
@@ -356,7 +356,7 @@ The `--disable-selinux` option should not be used. It is deprecated and will be
Using a custom `--data-dir` under SELinux is not supported. To customize it, you would most likely need to write your own custom policy. For guidance, you could refer to the [containers/container-selinux](https://github.com/containers/container-selinux) repository, which contains the SELinux policy files for Container Runtimes, and the [rancher/k3s-selinux](https://github.com/rancher/k3s-selinux) repository, which contains the SELinux policy for K3s.
{{%/tab%}}
{{% tab "K3s prior to v1.19.1+k3s1" %}}
{{% tab "K3s before v1.19.1+k3s1" %}}
SELinux is automatically enabled for the built-in containerd.
@@ -7,7 +7,7 @@ weight: 190
### Pre-Requisites
Prior to launching RancherOS EC2 instances, the [ECS Container Instance IAM Role](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html) will need to have been created. This `ecsInstanceRole` will need to be used when launching EC2 instances. If you have been using ECS, you created this role if you followed the ECS "Get Started" interactive guide.
Before launching RancherOS EC2 instances, you must create the [ECS Container Instance IAM Role](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html). This `ecsInstanceRole` must be used when launching EC2 instances. If you have been using ECS, you already created this role if you followed the ECS "Get Started" interactive guide.
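For illustration, launching an instance with this role attached via the AWS CLI looks roughly like the following (a hedged sketch; the AMI ID, instance type, key pair, and subnet are placeholders):

```
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.medium \
  --key-name my-key \
  --subnet-id subnet-0abc1234 \
  --iam-instance-profile Name=ecsInstanceRole
```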
### Launching an instance with ECS
@@ -21,7 +21,7 @@ If there are specific node drivers that you don't want to show to your users, yo
By default, Rancher only activates drivers for the most popular cloud providers, Amazon EC2, Azure, DigitalOcean and vSphere. If you want to show or hide any node driver, you can change its status.
1. From the **Global** view, choose **Tools > Drivers** in the navigation bar. From the **Drivers** page, select the **Node Drivers** tab. In version prior to v2.2.0, you can select **Node Drivers** directly in the navigation bar.
1. From the **Global** view, choose **Tools > Drivers** in the navigation bar. From the **Drivers** page, select the **Node Drivers** tab. In versions before v2.2.0, you can select **Node Drivers** directly in the navigation bar.
2. Select the driver that you wish to **Activate** or **Deactivate** and select the appropriate icon.
@@ -29,7 +29,7 @@ By default, Rancher only activates drivers for the most popular cloud providers,
If you want to use a node driver that Rancher doesn't support out-of-the-box, you can add that provider's driver in order to start using it to create node templates and eventually node pools for your Kubernetes cluster.
1. From the **Global** view, choose **Tools > Drivers** in the navigation bar. From the **Drivers** page, select the **Node Drivers** tab. In version prior to v2.2.0, you can select **Node Drivers** directly in the navigation bar.
1. From the **Global** view, choose **Tools > Drivers** in the navigation bar. From the **Drivers** page, select the **Node Drivers** tab. In versions before v2.2.0, you can select **Node Drivers** directly in the navigation bar.
2. Click **Add Node Driver**.
@@ -61,7 +61,7 @@ The steps to add custom roles differ depending on the version of Rancher.
1. Click **Create**.
{{% /tab %}}
{{% tab "Rancher prior to v2.0.7" %}}
{{% tab "Rancher before v2.0.7" %}}
1. From the **Global** view, select **Security > Roles** from the main menu.
@@ -55,7 +55,7 @@ The `restricted-admin` permissions are as follows:
### Upgrading from Rancher with a Hidden Local Cluster
Prior to Rancher v2.5, it was possible to run the Rancher server using this flag to hide the local cluster:
Before Rancher v2.5, it was possible to run the Rancher server using this flag to hide the local cluster:
```
--add-local=false
@@ -14,7 +14,7 @@ The Rancher version must be v2.5.0 and up to use this approach of backing up and
- [Changes in Rancher v2.5](#changes-in-rancher-v2-5)
- [Backup and Restore for Rancher v2.5 installed with Docker](#backup-and-restore-for-rancher-v2-5-installed-with-docker)
- [Backup and Restore for Rancher installed on a Kubernetes Cluster Prior to v2.5](#backup-and-restore-for-rancher-installed-on-a-kubernetes-cluster-prior-to-v2-5)
- [Backup and Restore for Rancher installed on a Kubernetes Cluster Before v2.5](#backup-and-restore-for-rancher-installed-on-a-kubernetes-cluster-before-v2-5)
- [How Backups and Restores Work](#how-backups-and-restores-work)
- [Installing the rancher-backup Operator](#installing-the-rancher-backup-operator)
- [Installing rancher-backup with the Rancher UI](#installing-rancher-backup-with-the-rancher-ui)
@@ -40,9 +40,9 @@ In Rancher v2.5, it is now supported to install Rancher hosted Kubernetes cluste
For Rancher installed with Docker, refer to the same steps used up until v2.5 for [backups](./docker-installs/docker-backups) and [restores.](./docker-installs/docker-backups)
### Backup and Restore for Rancher installed on a Kubernetes Cluster Prior to v2.5
### Backup and Restore for Rancher installed on a Kubernetes Cluster Before v2.5
For Rancher prior to v2.5, the way that Rancher is backed up and restored differs based on the way that Rancher was installed. Our legacy backup and restore documentation is here:
For Rancher before v2.5, the way that Rancher is backed up and restored differs based on the way that Rancher was installed. Our legacy backup and restore documentation is here:
- For Rancher installed on an RKE Kubernetes cluster, refer to the legacy [backup]({{<baseurl>}}/rancher/v2.x/en/backups/legacy/backup/ha-backups) and [restore]({{<baseurl>}}/rancher/v2.x/en/backups/legacy/restore/rke-restore) documentation.
- For Rancher installed on a K3s Kubernetes cluster, refer to the legacy [backup]({{<baseurl>}}/rancher/v2.x/en/backups/legacy/backup/k3s-backups) and [restore]({{<baseurl>}}/rancher/v2.x/en/backups/legacy/restore/k3s-restore) documentation.
@@ -11,7 +11,7 @@ In this guide, we recommend best practices for cluster-level logging and applica
# Changes in Logging in Rancher v2.5
Prior to Rancher v2.5, logging in Rancher has historically been a pretty static integration. There were a fixed list of aggregators to choose from (ElasticSearch, Splunk, Kafka, Fluentd and Syslog), and only two configuration points to choose (Cluster-level and Project-level).
Before Rancher v2.5, logging in Rancher was a fairly static integration. There was a fixed list of aggregators to choose from (ElasticSearch, Splunk, Kafka, Fluentd and Syslog), and only two configuration points to choose from (Cluster-level and Project-level).
Logging in 2.5 has been completely overhauled to provide a more flexible experience for log aggregation. With the new logging feature, administrators and users alike can deploy logging that meets fine-grained collection criteria while offering a wider array of destinations and configuration options.
@@ -23,7 +23,7 @@ All the tests that are skipped and not applicable on this page will be counted a
| 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
| 1.2.34 | Ensure that encryption providers are appropriately configured (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true. |
| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System level configurations are required before provisioning the cluster in order for this argument to be set to true. |
| 4.2.10 | Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
| 5.1.5 | Ensure that default service accounts are not actively used. (Scored) | Kubernetes provides default service accounts to be used. |
| 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
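For context on test 4.2.6: the system-level configuration it expects is typically a set of kernel parameters applied on each node before provisioning, for example via a file under `/etc/sysctl.d/`. An illustrative (not authoritative) set of values:

```
vm.overcommit_memory=1
vm.panic_on_oom=0
kernel.panic=10
kernel.panic_on_oops=1
```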
@@ -79,7 +79,7 @@ Number | Description | Reason for Skipping
1.7.3 | "Do not admit containers wishing to share the host IPC namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
1.7.4 | "Do not admit containers wishing to share the host network namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
1.7.5 | " Do not admit containers with allowPrivilegeEscalation (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
2.1.6 | "Ensure that the --protect-kernel-defaults argument is set to true (Scored)" | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true.
2.1.6 | "Ensure that the --protect-kernel-defaults argument is set to true (Scored)" | System level configurations are required before provisioning the cluster in order for this argument to be set to true.
2.1.10 | "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored)" | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.
### CIS Benchmark v1.4 Not Applicable Tests
@@ -20,7 +20,7 @@ This section lists the tests that are skipped in the permissive test profile for
| 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
| 1.2.34 | Ensure that encryption providers are appropriately configured (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true. |
| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System level configurations are required before provisioning the cluster in order for this argument to be set to true. |
| 4.2.10 | Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
| 5.1.5 | Ensure that default service accounts are not actively used. (Scored) | Kubernetes provides default service accounts to be used. |
| 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
@@ -42,7 +42,7 @@ Because the Kubernetes version is now included in the snapshot, it is possible t
The multiple components of the snapshot allow you to select from the following options if you need to restore a cluster from a snapshot:
- **Restore just the etcd contents:** This restore is similar to restoring to snapshots in Rancher prior to v2.4.0.
- **Restore just the etcd contents:** This restore is similar to restoring from snapshots in Rancher before v2.4.0.
- **Restore etcd and Kubernetes version:** This option should be used if a Kubernetes upgrade is the reason that your cluster is failing, and you haven't made any cluster configuration changes.
- **Restore etcd, Kubernetes versions and cluster configuration:** This option should be used if you changed both the Kubernetes version and cluster configuration when upgrading.
@@ -85,7 +85,7 @@ On restore, the following process is used:
5. The cluster is restored and post-restore actions will be done in the cluster.
{{% /tab %}}
{{% tab "Rancher prior to v2.4.0" %}}
{{% tab "Rancher before v2.4.0" %}}
When Rancher creates a snapshot, only the etcd data is included in the snapshot.
Because the Kubernetes version is not included in the snapshot, there is no option to restore a cluster to a different Kubernetes version.
@@ -217,4 +217,4 @@ This option is not available directly in the UI, and is only available through t
# Enabling Snapshot Features for Clusters Created Before Rancher v2.2.0
If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Rancher, you must [edit the cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/restoring-etcd/).
If you have any Rancher-launched Kubernetes clusters that were created before v2.2.0, then after upgrading Rancher, you must [edit the cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it in order to enable the updated snapshot features. Even if you were already creating snapshots before v2.2.0, you must do this step, as the older snapshots will not be available to [back up and restore etcd through the UI]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/restoring-etcd/).
@@ -144,7 +144,7 @@ There are two drain modes: aggressive and safe.
If a node has standalone pods or ephemeral data, it will be cordoned but not drained.
{{% /tab %}}
{{% tab "Rancher prior to v2.2.x" %}}
{{% tab "Rancher before v2.2.x" %}}
The following list describes each drain option:
@@ -170,7 +170,7 @@ The timeout given to each pod for cleaning things up, so they will have chance t
The amount of time drain should continue to wait before giving up.
>**Kubernetes Known Issue:** The [timeout setting](https://github.com/kubernetes/kubernetes/pull/64378) was not enforced while draining a node prior to Kubernetes 1.12.
>**Kubernetes Known Issue:** The [timeout setting](https://github.com/kubernetes/kubernetes/pull/64378) was not enforced while draining a node before Kubernetes 1.12.
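For reference, the grace period and timeout described above correspond to flags on the underlying `kubectl drain` command (a hedged sketch; the node name and values are placeholders):

```
kubectl drain worker-node-1 --ignore-daemonsets --grace-period=120 --timeout=300s
```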
### Drained and Cordoned State
@@ -37,7 +37,7 @@ Restores changed in Rancher v2.4.0.
Snapshots are composed of the cluster data in etcd, the Kubernetes version, and the cluster configuration in the `cluster.yml`. These components allow you to select from the following options when restoring a cluster from a snapshot:
- **Restore just the etcd contents:** This restore is similar to restoring to snapshots in Rancher prior to v2.4.0.
- **Restore just the etcd contents:** This restore is similar to restoring from snapshots in Rancher before v2.4.0.
- **Restore etcd and Kubernetes version:** This option should be used if a Kubernetes upgrade is the reason that your cluster is failing, and you haven't made any cluster configuration changes.
- **Restore etcd, Kubernetes versions and cluster configuration:** This option should be used if you changed both the Kubernetes version and cluster configuration when upgrading.
@@ -58,7 +58,7 @@ When rolling back to a prior Kubernetes version, the [upgrade strategy options](
**Result:** The cluster will go into `updating` state and the process of restoring the `etcd` nodes from the snapshot will start. The cluster is restored when it returns to an `active` state.
{{% /tab %}}
{{% tab "Rancher prior to v2.4.0" %}}
{{% tab "Rancher before v2.4.0" %}}
> **Prerequisites:**
>
@@ -110,4 +110,4 @@ If the group of etcd nodes loses quorum, the Kubernetes cluster will report a fa
# Enabling Snapshot Features for Clusters Created Before Rancher v2.2.0
If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Rancher, you must [edit the cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/restoring-etcd/).
If you have any Rancher-launched Kubernetes clusters that were created before v2.2.0, then after upgrading Rancher, you must [edit the cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it in order to enable the updated snapshot features. Even if you were already creating snapshots before v2.2.0, you must do this step, as the older snapshots will not be available to [back up and restore etcd through the UI]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/restoring-etcd/).
@@ -54,7 +54,7 @@ When upgrading the Kubernetes version of a cluster, we recommend that you:
The restore operation will work on a cluster that is not in a healthy or active state.
{{% /tab %}}
{{% tab "Rancher prior to v2.4" %}}
{{% tab "Rancher before v2.4" %}}
When upgrading the Kubernetes version of a cluster, we recommend that you:
1. Take a snapshot.
@@ -57,7 +57,7 @@ These steps describe how to set up a PVC in the namespace where your stateful wo
1. Go to the project containing a workload that you want to add a persistent volume claim to.
1. Then click the **Volumes** tab and click **Add Volume**. (In versions prior to v2.3.0, click **Workloads** on the main navigation bar, then **Volumes.**)
1. Then click the **Volumes** tab and click **Add Volume**. (In versions before v2.3.0, click **Workloads** on the main navigation bar, then **Volumes.**)
1. Enter a **Name** for the volume claim.
@@ -34,7 +34,7 @@ Persistent volume claims (PVCs) are objects that request storage resources from
To access persistent storage, a pod must have a PVC mounted as a volume. This PVC lets your deployment application store its data in an external location, so that if a pod fails, it can be replaced with a new pod and continue accessing its data stored externally, as though an outage never occurred.
Each Rancher project contains a list of PVCs that you've created, available from **Resources > Workloads > Volumes.** (In versions prior to v2.3.0, the PVCs are in the **Volumes** tab.) You can reuse these PVCs when creating deployments in the future.
Each Rancher project contains a list of PVCs that you've created, available from **Resources > Workloads > Volumes.** (In versions before v2.3.0, the PVCs are in the **Volumes** tab.) You can reuse these PVCs when creating deployments in the future.
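Each of those entries corresponds to a standard Kubernetes PersistentVolumeClaim object; a minimal manifest might look like this (a sketch only; names, class, and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc                # illustrative name
  namespace: my-namespace     # namespace of the stateful workload
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-storage-class   # illustrative storage class
  resources:
    requests:
      storage: 10Gi
```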
### PVCs are Required for Both New and Existing Persistent Storage
@@ -66,7 +66,7 @@ These steps describe how to set up a PVC in the namespace where your stateful wo
1. Go to the **Cluster Manager** to the project containing a workload that you want to add a PVC to.
1. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Then select the **Volumes** tab. Click **Add Volume**.
1. From the main navigation bar, choose **Resources > Workloads.** (In versions before v2.3.0, choose **Workloads** on the main navigation bar.) Then select the **Volumes** tab. Click **Add Volume**.
1. Enter a **Name** for the volume claim.
@@ -218,7 +218,7 @@ Amazon will use the [EKS-optimized AMI](https://docs.aws.amazon.com/eks/latest/u
| Minimum ASG Size | The minimum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. |
{{% /tab %}}
{{% tab "Rancher prior to v2.5" %}}
{{% tab "Rancher before v2.5" %}}
### Account Access
@@ -360,7 +360,7 @@ Service Role | The service role provides Kubernetes the permissions it requires
VPC | Provides isolated network resources utilised by EKS and worker nodes. Rancher can create the VPC resources with the following [VPC Permissions]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/#vpc-permissions).
Resource targeting uses `*` as the ARN of many of the resources created cannot be known prior to creating the EKS cluster in Rancher.
Resource targeting uses `*` because the ARNs of many of the resources created cannot be known before creating the EKS cluster in Rancher.
```json
{
@@ -12,7 +12,7 @@ Follow these steps while creating the vSphere cluster in Rancher:
{{< img "/img/rancher/vsphere-node-driver-cloudprovider.png" "vsphere-node-driver-cloudprovider">}}
1. Click on **Edit as YAML**
1. Insert the following structure to the pre-populated cluster YAML. As of Rancher v2.3+, this structure must be placed under `rancher_kubernetes_engine_config`. In versions prior to v2.3, it has to be defined as a top-level field. Note that the `name` *must* be set to `vsphere`.
1. Insert the following structure into the pre-populated cluster YAML. As of Rancher v2.3+, this structure must be placed under `rancher_kubernetes_engine_config`. In versions before v2.3, it must be defined as a top-level field. Note that the `name` *must* be set to `vsphere`.
```yaml
rancher_kubernetes_engine_config: # Required as of Rancher v2.3+
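  # Illustrative continuation only; field values are omitted and the full schema is in the
  # vSphere cloud provider reference. Only the placement and the provider name are shown here.
  cloud_provider:
    name: vsphere   # the name *must* be set to vsphere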
@@ -88,7 +88,7 @@ You can access your cluster after its state is updated to **Active.**
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
{{% /tab %}}
{{% tab "Rancher prior to v2.2.0" %}}
{{% tab "Rancher before v2.2.0" %}}
Use Rancher to create a Kubernetes cluster in Azure.
@@ -22,7 +22,7 @@ The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-d
- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/)
{{% /tab %}}
{{% tab "Rancher prior to v2.2.0" %}}
{{% tab "Rancher before v2.2.0" %}}
- **Account Access** stores your account information for authenticating with Azure.
- **Placement** sets the geographical region where your cluster is hosted and other location metadata.
@@ -58,7 +58,7 @@ You can access your cluster after its state is updated to **Active.**
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
{{% /tab %}}
{{% tab "Rancher prior to v2.2.0" %}}
{{% tab "Rancher before v2.2.0" %}}
1. From the **Clusters** page, click **Add Cluster**.
1. Choose **DigitalOcean**.
@@ -21,7 +21,7 @@ The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-d
- **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon
- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/)
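For reference, the registry mirror field corresponds to the standard Docker daemon setting shown below (a hedged sketch; the mirror URL is a placeholder):

```json
{
  "registry-mirrors": ["https://mirror.example.internal"]
}
```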
{{% /tab %}}
{{% tab "Rancher prior to v2.2.0" %}}
{{% tab "Rancher before v2.2.0" %}}
### Access Token
@@ -76,7 +76,7 @@ You can access your cluster after its state is updated to **Active.**
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
{{% /tab %}}
{{% tab "Rancher prior to v2.2.0" %}}
{{% tab "Rancher before v2.2.0" %}}
1. From the **Clusters** page, click **Add Cluster**.
1. Choose **Amazon EC2**.
@@ -49,7 +49,7 @@ If you need to pass an **IAM Instance Profile Name** (not ARN), for example, whe
In the **Engine Options** section of the node template, you can configure the Docker daemon. You may want to specify the docker version or a Docker registry mirror.
{{% /tab %}}
{{% tab "Rancher prior to v2.2.0" %}}
{{% tab "Rancher before v2.2.0" %}}
### Account Access
@@ -43,7 +43,7 @@ For the fields to be populated, your setup needs to fulfill the [prerequisites.]
In Rancher v2.3.3+, you can provision VMs with any operating system that supports `cloud-init`. Only YAML format is supported for the [cloud config.](https://cloudinit.readthedocs.io/en/latest/topics/examples.html)
In Rancher prior to v2.3.3, the vSphere node driver included in Rancher only supported the provisioning of VMs with [RancherOS]({{<baseurl>}}/os/v1.x/en/) as the guest operating system.
In Rancher before v2.3.3, the vSphere node driver included in Rancher only supported the provisioning of VMs with [RancherOS]({{<baseurl>}}/os/v1.x/en/) as the guest operating system.
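As an illustration of the YAML cloud config format mentioned above (a sketch only; the key and package contents are placeholders):

```yaml
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAAB3Nza... user@example.com
packages:
  - curl
```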
### Video Walkthrough of v2.3.3 Node Template Features
@@ -33,7 +33,7 @@ Refer to this [how-to guide]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/
Ensure that the hosts running the Rancher server can establish the following network connections:
- To the vSphere API on the vCenter server (usually port 443/TCP).
- To the Host API (port 443/TCP) on all ESXi hosts used to instantiate virtual machines for the clusters (*only required with Rancher prior to v2.3.3 or when using the ISO creation method in later versions*).
- To the Host API (port 443/TCP) on all ESXi hosts used to instantiate virtual machines for the clusters (*only required with Rancher before v2.3.3 or when using the ISO creation method in later versions*).
- To port 22/TCP and 2376/TCP on the created VMs
See [Node Networking Requirements]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/node-requirements/#networking-requirements) for a detailed list of port requirements applicable for creating nodes on an infrastructure provider.
@@ -102,11 +102,11 @@ You can access your cluster after its state is updated to **Active.**
- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
{{% /tab %}}
{{% tab "Rancher prior to v2.2.0" %}}
{{% tab "Rancher before v2.2.0" %}}
Use Rancher to create a Kubernetes cluster in vSphere.
For Rancher versions prior to v2.0.4, when you create the cluster, you will also need to follow the steps in [this section](http://localhost:9001/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vpshere-node-template-config/prior-to-2.0.4/#disk-uuids) to enable disk UUIDs.
For Rancher versions before v2.0.4, when you create the cluster, you will also need to follow the steps in [this section]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4/#disk-uuids) to enable disk UUIDs.
1. From the **Clusters** page, click **Add Cluster**.
1. Choose **vSphere**.
@@ -116,7 +116,7 @@ For Rancher versions prior to v2.0.4, when you create the cluster, you will also
1. If you want to dynamically provision persistent storage or other infrastructure later, you will need to enable the vSphere cloud provider by modifying the cluster YAML file. For details, refer to [this section.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere)
1. Add one or more [node pools]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-pools) to your cluster. Each node pool uses a node template to provision new nodes. To create a node template, click **Add Node Template** and complete the **vSphere Options** form. For help filling out the form, refer to the vSphere node template configuration reference. Refer to the newest version of the configuration reference that is less than or equal to your Rancher version:
- [v2.0.4]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.0.4)
- [prior to v2.0.4]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4)
- [before v2.0.4]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4)
1. Review your options to confirm they're correct. Then click **Create** to start provisioning the VMs and Kubernetes services.
**Result:**
@@ -142,4 +142,4 @@ After creating your cluster, you can access it through the Rancher UI. As a best
- **Access your cluster with the kubectl CLI:** Follow [these steps]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/#accessing-clusters-with-kubectl-on-your-workstation) to access clusters with kubectl on your workstation. In this case, you will be authenticated through the Rancher server's authentication proxy, then Rancher will connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI.
- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can't connect to Rancher, you can still access the cluster.
- **Provision Storage:** For an example of how to provision storage in vSphere using Rancher, refer to [this section.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere) In order to dynamically provision storage in vSphere, the vSphere provider must be [enabled.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere)
@@ -13,4 +13,4 @@ The vSphere node templates in Rancher were updated in the following Rancher vers
- [v2.2.0](./v2.2.0)
- [v2.0.4](./v2.0.4)
For Rancher versions prior to v2.0.4, refer to [this version.](./prior-to-2.0.4)
For Rancher versions before v2.0.4, refer to [this version.](./prior-to-2.0.4)
@@ -1,6 +1,6 @@
---
title: vSphere Node Template Configuration in Rancher prior to v2.0.4
shortTitle: Prior to v2.0.4
title: vSphere Node Template Configuration in Rancher before v2.0.4
shortTitle: Before v2.0.4
weight: 5
---
@@ -267,7 +267,7 @@ windows_prefered_cluster: false
An example cluster config file is included below.
{{% accordion id="prior-to-v2.3.0-cluster-config-file" label="Example Cluster Config File for Rancher v2.0.0-v2.2.x" %}}
{{% accordion id="before-v2.3.0-cluster-config-file" label="Example Cluster Config File for Rancher v2.0.0-v2.2.x" %}}
```yaml
addon_job_timeout: 30
authentication:
@@ -11,7 +11,7 @@ When you create a [custom cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-provisi
You can use Rancher to provision a custom Windows cluster using a mix of Linux and Windows hosts as your cluster nodes.
>**Important:** In versions of Rancher prior to v2.3, support for Windows nodes is experimental. Therefore, it is not recommended to use Windows nodes for production environments if you are using Rancher prior to v2.3.
>**Important:** In versions of Rancher before v2.3, support for Windows nodes is experimental. Therefore, it is not recommended to use Windows nodes for production environments if you are using Rancher before v2.3.
This guide walks you through creating a custom cluster that includes three nodes:
@@ -15,4 +15,4 @@ Fleet is GitOps at scale. For more information, refer to the [Fleet section.](./
### Multi-cluster Apps
In Rancher prior to v2.5, the multi-cluster apps feature was used to deploy applications across clusters. Refer to the documentation [here.](./multi-cluster-apps)
In Rancher before v2.5, the multi-cluster apps feature was used to deploy applications across clusters. Refer to the documentation [here.](./multi-cluster-apps)
@@ -9,4 +9,4 @@ In Rancher v2.5, the [apps and marketplace feature](./apps-marketplace) is used
### Catalogs
In Rancher prior to v2.5, the [catalog system](./legacy-catalogs) was used to manage Helm charts.
In Rancher before v2.5, the [catalog system](./legacy-catalogs) was used to manage Helm charts.
@@ -52,7 +52,7 @@ When you create a custom catalog, you will have to configure the catalog to use
When you launch a new app from a catalog, the app will be managed by the catalog's Helm version. A Helm 2 catalog will use Helm 2 to manage all of the apps, and a Helm 3 catalog will use Helm 3 to manage all apps.
By default, catalogs are assumed to be deployed using Helm 2. If you run an app in Rancher prior to v2.4.0, then upgrade to Rancher v2.4.0+, the app will still be managed by Helm 2. If the app was already using a Helm 3 Chart (API version 2) it will no longer work in v2.4.0+. You must either downgrade the chart's API version or recreate the catalog to use Helm 3.
By default, catalogs are assumed to be deployed using Helm 2. If you run an app in Rancher before v2.4.0, then upgrade to Rancher v2.4.0+, the app will still be managed by Helm 2. If the app was already using a Helm 3 Chart (API version 2) it will no longer work in v2.4.0+. You must either downgrade the chart's API version or recreate the catalog to use Helm 3.
Charts that are specific to Helm 2 should only be added to a Helm 2 catalog, and Helm 3 specific charts should only be added to a Helm 3 catalog.
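As a rough guide, the Helm version a chart targets is visible in its `Chart.yaml`: charts declaring `apiVersion: v1` work with Helm 2, while `apiVersion: v2` charts are Helm 3 only. An illustrative snippet:

```yaml
# Chart.yaml (illustrative)
apiVersion: v2   # v2 = Helm 3 only; v1 = compatible with Helm 2
name: my-app
version: 1.0.0
```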
@@ -43,7 +43,7 @@ Private catalog repositories can be added using credentials like Username and Pa
For more information on private Git/Helm catalogs, refer to the [custom catalog configuration reference.]({{<baseurl>}}/rancher/v2.x/en/catalog/catalog-config)
1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar.
1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions before v2.2.0, you can select **Catalogs** directly in the navigation bar.
2. Click **Add Catalog**.
3. Complete the form and click **Create**.
@@ -56,7 +56,7 @@ For more information on private Git/Helm catalogs, refer to the [custom catalog
>- [Administrator Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/)
>- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Catalogs]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) role assigned.
1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar.
1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions before v2.2.0, you can select **Catalogs** directly in the navigation bar.
2. Click **Add Catalog**.
3. Complete the form. Select the Helm version that will be used to launch all of the apps in the catalog. For more information about the Helm version, refer to [this section.]({{<baseurl>}}/rancher/v2.x/en/helm-charts/legacy-catalogs/#catalog-helm-deployment-versions)
@@ -15,7 +15,7 @@ Within Rancher, there are default catalogs packaged as part of Rancher. These ca
>- [Administrator Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/)
>- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Catalogs]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions-reference) role assigned.
1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar.
1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions before v2.2.0, you can select **Catalogs** directly in the navigation bar.
2. Toggle the default catalogs that you want to be enabled or disabled:
@@ -23,4 +23,4 @@ Within Rancher, there are default catalogs packaged as part of Rancher. These ca
- **Helm Stable:** This catalog, which is maintained by the Kubernetes community, includes native [Helm charts](https://helm.sh/docs/chart_template_guide/). This catalog features the largest pool of apps.
- **Helm Incubator:** Similar in user experience to Helm Stable, but this catalog is filled with applications in **beta**.
**Result**: The chosen catalogs are enabled. Wait a few minutes for Rancher to replicate the catalog charts. When replication completes, you'll be able to see them in any of your projects by selecting **Apps** from the main navigation bar. In versions prior to v2.2.0, within a project, you can select **Catalog Apps** from the main navigation bar.
**Result**: The chosen catalogs are enabled. Wait a few minutes for Rancher to replicate the catalog charts. When replication completes, you'll be able to see them in any of your projects by selecting **Apps** from the main navigation bar. In versions before v2.2.0, within a project, you can select **Catalog Apps** from the main navigation bar.
@@ -26,7 +26,7 @@ Rancher's Global DNS feature provides a way to program an external DNS provider
# Global DNS Providers
Prior to adding in Global DNS entries, you will need to configure access to an external provider.
Before adding Global DNS entries, you will need to configure access to an external provider.
The following table lists the first version of Rancher each provider debuted.
@@ -28,7 +28,7 @@ Before launching an app, you'll need to either [enable a built-in global catalog
1. From the **Global** view, open the project that you want to deploy an app to.
2. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**.
2. From the main navigation bar, choose **Apps**. In versions before v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**.
3. Find the app that you want to launch, and then click **View Now**.
@@ -47,7 +47,7 @@ Before launching an app, you'll need to either [enable a built-in global catalog
7. Review the files in **Preview**. When you're satisfied, click **Launch**.
**Result**: Your application is deployed to your chosen namespace. You can view the application status from the project's **Workloads** view or **Apps** view. In versions prior to v2.2.0, this is the **Catalog Apps** view.
**Result**: Your application is deployed to your chosen namespace. You can view the application status from the project's **Workloads** view or **Apps** view. In versions before v2.2.0, this is the **Catalog Apps** view.
# Configuration Options
@@ -22,7 +22,7 @@ After an application is deployed, you can easily upgrade to a different template
1. From the **Global** view, navigate to the project that contains the catalog application that you want to upgrade.
1. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**.
1. From the main navigation bar, choose **Apps**. In versions before v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**.
3. Find the application that you want to upgrade, and then click the &#8942; to find **Upgrade**.
@@ -39,7 +39,7 @@ After an application is deployed, you can easily upgrade to a different template
**Result**: Your application is updated. You can view the application status from the project's:
- **Workloads** view
- **Apps** view. In versions prior to v2.2.0, this is the **Catalog Apps** view.
- **Apps** view. In versions before v2.2.0, this is the **Catalog Apps** view.
### Rolling Back Catalog Applications
@@ -48,7 +48,7 @@ After an application has been upgraded, you can easily rollback to a different t
1. From the **Global** view, navigate to the project that contains the catalog application that you want to upgrade.
1. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**.
1. From the main navigation bar, choose **Apps**. In versions before v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**.
3. Find the application that you want to rollback, and then click the &#8942; to find **Rollback**.
@@ -63,7 +63,7 @@ After an application has been upgraded, you can easily rollback to a different t
**Result**: Your application is updated. You can view the application status from the project's:
- **Workloads** view
- **Apps** view. In versions prior to v2.2.0, this is the **Catalog Apps** view.
- **Apps** view. In versions before v2.2.0, this is the **Catalog Apps** view.
### Deleting Catalog Application Deployments
@@ -78,7 +78,7 @@ For that reason, we recommend that for a production-grade architecture, you shou
>
> For Rancher v2.5, any Kubernetes cluster can be used.
> For Rancher v2.4.x, either an RKE Kubernetes cluster or K3s Kubernetes cluster can be used.
> For Rancher prior to v2.4, an RKE cluster must be used.
> For Rancher before v2.4, an RKE cluster must be used.
For testing or demonstration purposes, you can install Rancher in a single Docker container. In this Docker install, you can use Rancher to set up Kubernetes clusters out-of-the-box. The Docker install allows you to explore the Rancher server functionality, but it is intended to be used for development and testing purposes only.
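As a rough sketch, such a single-container install is typically started as follows (the image tag is a placeholder; `--privileged` is required as of Rancher v2.5):

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:<TAG>
```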
@@ -19,7 +19,7 @@ The cluster requirements depend on the Rancher version:
- **As of Rancher v2.5,** Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS.
> **Note:** To deploy Rancher v2.5 on a hosted Kubernetes cluster such as EKS, GKE, or AKS, you should deploy a compatible Ingress controller first to configure [SSL termination on Rancher.]({{<baseurl>}}/rancher/v2.x/en/installation/install-rancher-on-k8s/#4-choose-your-ssl-configuration).
- **In Rancher v2.4.x,** Rancher needs to be installed on a K3s Kubernetes cluster or an RKE Kubernetes cluster.
- **In Rancher prior to v2.4,** Rancher needs to be installed on an RKE Kubernetes cluster.
- **In Rancher before v2.4,** Rancher needs to be installed on an RKE Kubernetes cluster.
For the tutorial to install an RKE Kubernetes cluster, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-rke/) For help setting up the infrastructure for a high-availability RKE cluster, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-ha)
@@ -80,7 +80,7 @@ Rancher can be rolled back using the Rancher UI.
# Rolling Back to Rancher v2.2-v2.4+
To roll back to Rancher prior to v2.5, follow the procedure detailed here: [Restoring Backups — Kubernetes installs]({{<baseurl>}}/rancher/v2.x/en/backups/v2.0.x-v2.4.x/restore/rke-restore/) Restoring a snapshot of the Rancher server cluster will revert Rancher to the version and state at the time of the snapshot.
To roll back to Rancher before v2.5, follow the procedure detailed here: [Restoring Backups — Kubernetes installs]({{<baseurl>}}/rancher/v2.x/en/backups/v2.0.x-v2.4.x/restore/rke-restore/). Restoring a snapshot of the Rancher server cluster will revert Rancher to the version and state at the time of the snapshot.
For information on how to roll back Rancher installed with Docker, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-rollbacks)
@@ -35,7 +35,7 @@ During upgrades from Rancher v2.0.6- to Rancher v2.0.7+, all system namespaces a
You can prevent cluster networking issues from occurring during your upgrade to v2.0.7+ by unassigning system namespaces from all of your Rancher projects. Complete this task if you've assigned any of a cluster's system namespaces into a Rancher project.
1. Log into the Rancher UI prior to upgrade.
1. Log into the Rancher UI before upgrading.
1. From the context menu, open the **local** cluster (or any of your other clusters).
@@ -19,6 +19,6 @@ Since there is only one node and a single Docker container, if the node goes dow
The ability to migrate Rancher to a high-availability cluster depends on the Rancher version:
- For Rancher v2.0-v2.4, there was no migration path from a Docker installation to a high-availability installation. Therefore, if you are using Rancher prior to v2.5, you may want to use a Kubernetes installation from the start.
- For Rancher v2.0-v2.4, there was no migration path from a Docker installation to a high-availability installation. Therefore, if you are using Rancher before v2.5, you may want to use a Kubernetes installation from the start.
- For Rancher v2.5+, the Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.]({{<baseurl>}}/rancher/v2.x/en/backups/v2.5/migrating-rancher/)
@@ -25,7 +25,7 @@ This section describes installing Rancher in five parts:
- [2. Choose your SSL Configuration](#2-choose-your-ssl-configuration)
- [3. Render the Rancher Helm Template](#3-render-the-rancher-helm-template)
- [4. Install Rancher](#4-install-rancher)
- [5. For Rancher versions prior to v2.3.0, Configure System Charts](#5-for-rancher-versions-prior-to-v2-3-0-configure-system-charts)
- [5. For Rancher versions before v2.3.0, Configure System Charts](#5-for-rancher-versions-before-v2-3-0-configure-system-charts)
# 1. Add the Helm Chart Repository
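A sketch of this step (run from a machine with internet access when preparing an air-gapped install; the `stable` channel and the version placeholder are illustrative):

```
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm fetch rancher-stable/rancher --version=<CHART_VERSION>
```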
@@ -220,9 +220,9 @@ kubectl -n cattle-system apply -R -f ./rancher
> **Note:** If you don't intend to send telemetry data, opt out of [telemetry]({{<baseurl>}}/rancher/v2.x/en/faq/telemetry/) during the initial login. Leaving this active in an air-gapped environment can cause issues if the sockets cannot be opened successfully.
# 5. For Rancher versions prior to v2.3.0, Configure System Charts
# 5. For Rancher versions before v2.3.0, Configure System Charts
If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.x/en/installation/resources/local-system-charts/).
If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted on GitHub, an air-gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.x/en/installation/resources/local-system-charts/).
# Additional Resources
@@ -255,7 +255,7 @@ For security purposes, SSL (Secure Sockets Layer) is required when using Rancher
> - Configure custom CA root certificate to access your services? See [Custom CA root certificate]({{<baseurl>}}/rancher/v2.x/en/installation/options/custom-ca-root-certificate/).
> - Record all transactions with the Rancher API? See [API Auditing]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/advanced/#api-audit-log).
- For Rancher prior to v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher prior to v2.3.0.]({{<baseurl>}}/rancher/v2.x/en/installation/resources/local-system-charts/)
- For Rancher before v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher before v2.3.0.]({{<baseurl>}}/rancher/v2.x/en/installation/resources/local-system-charts/)
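As a sketch of the mirroring step (the internal Git URL is a placeholder):

```
git clone --mirror https://github.com/rancher/system-charts.git
cd system-charts.git
git push --mirror https://git.example.internal/rancher/system-charts.git
```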
Choose from the following options:
@@ -364,7 +364,7 @@ If you are installing Rancher v2.3.0+, the installation is complete.
> **Note:** If you don't intend to send telemetry data, opt out of [telemetry]({{<baseurl>}}/rancher/v2.x/en/faq/telemetry/) during the initial login.
If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.x/en/installation/resources/local-system-charts/).
If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted on GitHub, an air-gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.x/en/installation/resources/local-system-charts/).
{{% /tab %}}
{{% /tabs %}}
@@ -9,7 +9,7 @@ aliases:
This section describes how to install a Kubernetes cluster according to our [best practices for the Rancher server environment.]({{<baseurl>}}/rancher/v2.x/en/overview/architecture-recommendations/#environment-for-kubernetes-installations) This cluster should be dedicated to run only the Rancher server.
For Rancher prior to v2.4, Rancher should be installed on an [RKE]({{<baseurl>}}/rke/latest/en/) (Rancher Kubernetes Engine) Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.
For Rancher before v2.4, Rancher should be installed on an [RKE]({{<baseurl>}}/rke/latest/en/) (Rancher Kubernetes Engine) Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.
In Rancher v2.4, the Rancher management server can be installed on either an RKE cluster or a K3s Kubernetes cluster. K3s is also a fully certified Kubernetes distribution released by Rancher, but is newer than RKE. We recommend installing Rancher on K3s because K3s is easier to use, and more lightweight, with a binary size of less than 100 MB. The Rancher management server can only be run on a Kubernetes cluster in an infrastructure provider where Kubernetes is installed using RKE or K3s. Use of Rancher on hosted Kubernetes providers, such as EKS, is not supported. Note: After Rancher is installed on an RKE cluster, there is no migration path to a K3s setup at this time.
@@ -17,7 +17,7 @@ In this installation scenario, you'll install Docker on a single Linux host, and
A Docker installation of Rancher is recommended only for development and testing purposes. The ability to migrate Rancher to a high-availability cluster depends on the Rancher version:
- For Rancher v2.0-v2.4, there was no migration path from a Docker installation to a high-availability installation. Therefore, if you are using Rancher prior to v2.5, you may want to use a Kubernetes installation from the start.
- For Rancher v2.0-v2.4, there was no migration path from a Docker installation to a high-availability installation. Therefore, if you are using Rancher before v2.5, you may want to use a Kubernetes installation from the start.
- For Rancher v2.5+, the Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.]({{<baseurl>}}/rancher/v2.x/en/backups/v2.5/migrating-rancher/)
@@ -44,7 +44,7 @@ If you have issues upgrading Rancher, roll it back to its latest known healthy s
1. Using a remote Terminal connection, log into the node running your Rancher Server.
1. Pull the version of Rancher that you were running prior to upgrade. Replace the `<PRIOR_RANCHER_VERSION>` with that version.
1. Pull the version of Rancher that you were running before the upgrade. Replace `<PRIOR_RANCHER_VERSION>` with that version.
For example, if you were running Rancher v2.0.5 before the upgrade, pull v2.0.5.
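In command form, that step is essentially:

```
docker pull rancher/rancher:<PRIOR_RANCHER_VERSION>
```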
@@ -85,4 +85,4 @@ If you have issues upgrading Rancher, roll it back to its latest known healthy s
1. Wait a few moments and then open Rancher in a web browser. Confirm that the rollback succeeded and that your data is restored.
**Result:** Rancher is rolled back to its version and data state prior to upgrade.
**Result:** Rancher is rolled back to its version and data state before the upgrade.
@@ -252,7 +252,7 @@ As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.x/
For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, like when you login or interact with a cluster.
> For Rancher versions from v2.2.0 to v2.2.x, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher prior to v2.3.0.]({{<baseurl>}}/rancher/v2.x/en/installation/resources/local-system-charts/)
> For Rancher versions from v2.2.0 to v2.2.x, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher before v2.3.0.]({{<baseurl>}}/rancher/v2.x/en/installation/resources/local-system-charts/)
When starting the new Rancher server container, choose from the following options:
@@ -16,7 +16,7 @@ Make sure the node(s) for the Rancher server fulfill the following requirements:
- [RKE and Hosted Kubernetes](#rke-and-hosted-kubernetes)
- [K3s Kubernetes](#k3s-kubernetes)
- [RancherD](#rancherd)
- [CPU and Memory for Rancher prior to v2.4.0](#cpu-and-memory-for-rancher-prior-to-v2-4-0)
- [CPU and Memory for Rancher before v2.4.0](#cpu-and-memory-for-rancher-before-v2-4-0)
- [Disks](#disks)
- [Networking Requirements](#networking-requirements)
- [Node IP Addresses](#node-ip-addresses)
@@ -87,7 +87,7 @@ These CPU and memory requirements apply to each host in the Kubernetes cluster w
These requirements apply to RKE Kubernetes clusters, as well as to hosted Kubernetes clusters such as EKS.
Performance increased in Rancher v2.4.0. For the requirements of Rancher prior to v2.4.0, refer to [this section.](#cpu-and-memory-for-rancher-prior-to-v2-4-0)
Performance increased in Rancher v2.4.0. For the requirements of Rancher before v2.4.0, refer to [this section.](#cpu-and-memory-for-rancher-before-v2-4-0)
| Deployment Size | Clusters | Nodes | vCPUs | RAM |
| --------------- | ---------- | ------------ | -------| ------- |
@@ -133,10 +133,10 @@ These CPU and memory requirements apply to a host with a [single-node]({{<baseur
| Small | Up to 5 | Up to 50 | 1 | 4 GB |
| Medium | Up to 15 | Up to 200 | 2 | 8 GB |
### CPU and Memory for Rancher prior to v2.4.0
### CPU and Memory for Rancher before v2.4.0
{{% accordion label="Click to expand" %}}
These CPU and memory requirements apply to installing Rancher on an RKE Kubernetes cluster prior to Rancher v2.4.0:
These CPU and memory requirements apply to installing Rancher on an RKE Kubernetes cluster before Rancher v2.4.0:
| Deployment Size | Clusters | Nodes | vCPUs | RAM |
| --------------- | --------- | ---------- | ----------------------------------------------- | ----------------------------------------------- |
@@ -227,7 +227,7 @@ The following table depicts the port requirements for [hosted clusters]({{<baseu
### Ports for Registered Clusters
Note: Registered clusters were called imported clusters prior to Rancher v2.5.
Note: Registered clusters were called imported clusters before Rancher v2.5.
{{% accordion label="Click to expand" %}}
@@ -23,7 +23,7 @@ This section describes installing Rancher in five parts:
- [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration)
- [C. Render the Rancher Helm Template](#c-render-the-rancher-helm-template)
- [D. Install Rancher](#d-install-rancher)
- [E. For Rancher versions prior to v2.3.0, Configure System Charts](#e-for-rancher-versions-prior-to-v2-3-0-configure-system-charts)
- [E. For Rancher versions before v2.3.0, Configure System Charts](#e-for-rancher-versions-before-v2-3-0-configure-system-charts)
### A. Add the Helm Chart Repository
@@ -209,9 +209,9 @@ kubectl -n cattle-system apply -R -f ./rancher
**Step Result:** If you are installing Rancher v2.3.0+, the installation is complete.
### E. For Rancher versions prior to v2.3.0, Configure System Charts
### E. For Rancher versions before v2.3.0, Configure System Charts
If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.x/en/installation/options/local-system-charts/).
If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted on GitHub, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.x/en/installation/options/local-system-charts/).
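The mirroring itself is ordinary Git plumbing. As a minimal sketch, assuming an internal Git server at a placeholder address that the air gapped Rancher server can reach:

```bash
# On a machine with internet access, mirror the upstream system charts repository
git clone --mirror https://github.com/rancher/system-charts.git

# Push the mirror to a Git server inside the air gapped network
# (git.internal.example.com is a placeholder for your own server)
cd system-charts.git
git push --mirror https://git.internal.example.com/rancher/system-charts.git
```

After Rancher is up, point its system charts catalog at the internal mirror, as described in the linked local system charts documentation.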
### Additional Resources
@@ -238,7 +238,7 @@ For security purposes, SSL (Secure Sockets Layer) is required when using Rancher
> - Configure custom CA root certificate to access your services? See [Custom CA root certificate]({{<baseurl>}}/rancher/v2.x/en/installation/options/chart-options/#additional-trusted-cas).
> - Record all transactions with the Rancher API? See [API Auditing]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/advanced/#api-audit-log).
- For Rancher prior to v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher prior to v2.3.0.]({{<baseurl>}}/rancher/v2.x/en/installation/options/local-system-charts/)
- For Rancher before v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher before v2.3.0.]({{<baseurl>}}/rancher/v2.x/en/installation/options/local-system-charts/)
Choose from the following options:
@@ -328,7 +328,7 @@ docker run -d --restart=unless-stopped \
If you are installing Rancher v2.3.0+, the installation is complete.
If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.x/en/installation/options/local-system-charts/).
If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted on GitHub, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.x/en/installation/options/local-system-charts/).
{{% /tab %}}
{{% /tabs %}}
@@ -64,7 +64,7 @@ kubectl -n cattle-system logs -f rancher-84d886bdbb-s4s69 rancher-audit-log
#### Rancher Web GUI
1. From the context menu, select **Cluster: local > System**.
1. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Find the `cattle-system` namespace. Open the `rancher` workload by clicking its link.
1. From the main navigation bar, choose **Resources > Workloads.** (In versions before v2.3.0, choose **Workloads** on the main navigation bar.) Find the `cattle-system` namespace. Open the `rancher` workload by clicking its link.
1. Pick one of the `rancher` pods and select **&#8942; > View Logs**.
1. From the **Logs** drop-down, select `rancher-audit-log`.
@@ -33,7 +33,7 @@ Rancher provides several different Helm chart repositories to choose from. We al
<br/>
Instructions on when to select these repos are available below in [Switching to a Different Helm Chart Repository](#switching-to-a-different-helm-chart-repository).
> **Note:** The introduction of the `rancher-latest` and `rancher-stable` Helm Chart repositories was introduced after Rancher v2.1.0, so the `rancher-stable` repository contains some Rancher versions that were never marked as `rancher/rancher:stable`. The versions of Rancher that were tagged as `rancher/rancher:stable` prior to v2.1.0 are v2.0.4, v2.0.6, v2.0.8. Post v2.1.0, all charts in the `rancher-stable` repository will correspond with any Rancher version tagged as `stable`.
> **Note:** The `rancher-latest` and `rancher-stable` Helm Chart repositories were introduced after Rancher v2.1.0, so the `rancher-stable` repository contains some Rancher versions that were never marked as `rancher/rancher:stable`. The versions of Rancher that were tagged as `rancher/rancher:stable` before v2.1.0 are v2.0.4, v2.0.6, and v2.0.8. After v2.1.0, all charts in the `rancher-stable` repository will correspond with any Rancher version tagged as `stable`.
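For reference, the two repositories are added with standard Helm commands; the URLs below are the public Rancher chart endpoints:

```bash
# Charts marked stable, recommended for production
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

# Latest charts, including newer releases that are still being validated
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest

helm repo update
```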
### Helm Chart Versions
@@ -5,7 +5,7 @@ weight: 4
This section contains information on how to install a Kubernetes cluster that the Rancher server can be installed on.
In Rancher prior to v2.4, the Rancher server needed to run on an RKE Kubernetes cluster.
In Rancher before v2.4, the Rancher server needed to run on an RKE Kubernetes cluster.
In Rancher v2.4.x, Rancher needs to run on either an RKE Kubernetes cluster or a K3s Kubernetes cluster.
@@ -9,7 +9,7 @@ aliases:
This section describes how to install a Kubernetes cluster. This cluster should be dedicated to run only the Rancher server.
For Rancher prior to v2.4, Rancher should be installed on an RKE Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.
For Rancher before v2.4, Rancher should be installed on an RKE Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.
As of Rancher v2.4, the Rancher management server can be installed on either an RKE cluster or a K3s Kubernetes cluster. K3s is also a fully certified Kubernetes distribution released by Rancher, but is newer than RKE. We recommend installing Rancher on K3s because K3s is easier to use, and more lightweight, with a binary size of less than 100 MB. Note: After Rancher is installed on an RKE cluster, there is no migration path to a K3s setup at this time.
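For a sense of how lightweight K3s is, a single-node K3s server that could later host Rancher is installed with the standard one-line installer (a sketch; pin a specific K3s version for production use):

```bash
# Installs K3s and starts the server as a systemd service
curl -sfL https://get.k3s.io | sh -

# The kubeconfig is written to /etc/rancher/k3s/k3s.yaml
sudo k3s kubectl get nodes
```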
@@ -9,7 +9,7 @@ aliases:
The [System Charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS.
In an air gapped installation of Rancher, you will need to configure Rancher to use a local copy of the system charts. This section describes how to use local system charts using a CLI flag in Rancher v2.3.0, and using a Git mirror for Rancher versions prior to v2.3.0.
In an air gapped installation of Rancher, you will need to configure Rancher to use a local copy of the system charts. This section describes how to use local system charts using a CLI flag in Rancher v2.3.0, and using a Git mirror for Rancher versions before v2.3.0.
# Using Local System Charts in Rancher v2.3.0
@@ -17,7 +17,7 @@ In Rancher v2.3.0, a local copy of `system-charts` has been packaged into the `r
Example commands for a Rancher installation with a bundled `system-charts` are included in the [air gap Docker installation]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap-single-node/install-rancher) instructions and the [air gap Kubernetes installation]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap-high-availability/install-rancher/) instructions.
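As a rough sketch of what those commands enable (the linked instructions have the exact flags for your version), the bundled charts are typically switched on with a Helm value for Kubernetes installs and an environment variable for Docker installs; the hostname, registry, and tag below are placeholders:

```bash
# Kubernetes (Helm) install: use the system charts packaged in the rancher image
helm upgrade --install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set useBundledSystemChart=true

# Docker (single-node) install: the equivalent environment variable
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 \
  -e CATTLE_SYSTEM_CATALOG=bundled \
  <private-registry>/rancher/rancher:<tag>
```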
# Setting Up System Charts for Rancher Prior to v2.3.0
# Setting Up System Charts for Rancher Before v2.3.0
### A. Prepare System Charts
@@ -88,7 +88,7 @@ spec:
This enables monitoring across namespaces by giving Prometheus additional scrape configurations.
The usability tradeoff is that all of Prometheus' `additionalScrapeConfigs` are maintained in a single Secret. This could make upgrading difficult if monitoring is already deployed with additionalScrapeConfigs prior to installing Istio.
The usability tradeoff is that all of Prometheus' `additionalScrapeConfigs` are maintained in a single Secret. This could make upgrading difficult if monitoring is already deployed with `additionalScrapeConfigs` before installing Istio.
1. If starting a new install, click the **rancher-monitoring** chart, then in **Chart Options** click **Edit as Yaml**.
1. If updating an existing installation, click on **Upgrade**, then in **Chart Options** click **Edit as Yaml**.
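As an illustration of the kind of YAML involved (the keys follow the upstream kube-prometheus-stack layout, and the scrape job below is a hypothetical example rather than part of the Istio setup):

```yaml
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      # Any valid Prometheus scrape_config entry can be listed here
      - job_name: my-extra-service
        static_configs:
          - targets:
              - my-extra-service.my-namespace.svc:9090
```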
@@ -15,7 +15,7 @@ Add SSL certificates to either projects, namespaces, or both. A project scoped c
1. From the **Global** view, select the project where you want to deploy your ingress.
1. From the main menu, select **Resources > Secrets > Certificates**. Click **Add Certificate**. (For Rancher prior to v2.3, click **Resources > Certificates.**)
1. From the main menu, select **Resources > Secrets > Certificates**. Click **Add Certificate**. (For Rancher before v2.3, click **Resources > Certificates.**)
1. Enter a **Name** for the certificate.
@@ -39,7 +39,7 @@ Add SSL certificates to either projects, namespaces, or both. A project scoped c
- If you added an SSL certificate to the project, the certificate is available for deployments created in any project namespace.
- If you added an SSL certificate to a namespace, the certificate is available only for deployments in that namespace.
- Your certificate is added to the **Resources > Secrets > Certificates** view. (For Rancher prior to v2.3, it is added to **Resources > Certificates.**)
- Your certificate is added to the **Resources > Secrets > Certificates** view. (For Rancher before v2.3, it is added to **Resources > Certificates.**)
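Outside the UI, a namespace-scoped certificate ends up as an ordinary Kubernetes TLS secret, so a roughly equivalent kubectl command (namespace, name, and file paths are placeholders) is:

```bash
# Create a TLS secret from an existing certificate and private key
kubectl -n my-namespace create secret tls my-certificate \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key
```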
## What's Next?
@@ -22,12 +22,12 @@ The way that you manage HPAs is different based on your version of the Kubernete
HPAs are also managed differently based on your version of Rancher:
- **For Rancher v2.3.0+**: You can create, manage, and delete HPAs using the Rancher UI. From the Rancher UI you can configure the HPA to scale based on CPU and memory utilization. For more information, refer to [Managing HPAs with the Rancher UI]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). To scale the HPA based on custom metrics, you still need to use `kubectl`. For more information, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus).
- **For Rancher Prior to v2.3.0:** To manage and configure HPAs, you need to use `kubectl`. For instructions on how to create, manage, and scale HPAs, refer to [Managing HPAs with kubectl]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl).
- **For Rancher Before v2.3.0:** To manage and configure HPAs, you need to use `kubectl`. For instructions on how to create, manage, and scale HPAs, refer to [Managing HPAs with kubectl]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl).
You might have additional HPA installation steps if you are using an older version of Rancher:
- **For Rancher v2.0.7+:** Clusters created in Rancher v2.0.7 and higher automatically have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use HPA.
- **For Rancher Prior to v2.0.7:** Clusters created in Rancher prior to v2.0.7 don't automatically have the requirements needed to use HPA. For instructions on installing HPA for these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7).
- **For Rancher Before v2.0.7:** Clusters created in Rancher before v2.0.7 don't automatically have the requirements needed to use HPA. For instructions on installing HPA for these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7).
## Testing HPAs with a Service Deployment
@@ -5,7 +5,7 @@ aliases:
- /rancher/v2.x/en/k8s-in-rancher/horizontal-pod-autoscaler/hpa-for-rancher-before-2_0_7
---
This section describes how to manually install HPAs for clusters created with Rancher prior to v2.0.7. This section also describes how to configure your HPA to scale up or down, and how to assign roles to your HPA.
This section describes how to manually install HPAs for clusters created with Rancher before v2.0.7. This section also describes how to configure your HPA to scale up or down, and how to assign roles to your HPA.
Before you can use HPA in your Kubernetes cluster, you must fulfill some requirements.
@@ -17,7 +17,7 @@ This section describes HPA management with `kubectl`. This document has instruct
In Rancher v2.3.x, you can create, view, and delete HPAs from the Rancher UI. You can also configure them to scale based on CPU or memory usage from the Rancher UI. For more information, refer to [Managing HPAs with the Rancher UI]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). To scale HPAs based on metrics other than CPU or memory, you still need `kubectl`.
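For example, a simple CPU-based HPA can be created with `kubectl` against an existing deployment (names and thresholds are placeholders):

```bash
# Scale the "web" deployment between 2 and 5 replicas, targeting 80% CPU
kubectl -n my-namespace autoscale deployment web --min=2 --max=5 --cpu-percent=80

# Inspect the resulting HorizontalPodAutoscaler
kubectl -n my-namespace get hpa
```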
### Note For Rancher Prior to v2.0.7
### Note For Rancher Before v2.0.7
Clusters created with older versions of Rancher don't automatically have all the requirements to create an HPA. To install an HPA on these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7).
@@ -10,7 +10,7 @@ aliases:
Ingress can be added for workloads to provide load balancing, SSL termination and host/path based routing. When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a [Global DNS entry]({{<baseurl>}}/rancher/v2.x/en/catalog/globaldns/).
1. From the **Global** view, open the project that you want to add ingress to.
1. Click **Resources** in the main navigation bar. Click the **Load Balancing** tab. (In versions prior to v2.3.0, just click the **Load Balancing** tab.) Then click **Add Ingress**.
1. Click **Resources** in the main navigation bar. Click the **Load Balancing** tab. (In versions before v2.3.0, just click the **Load Balancing** tab.) Then click **Add Ingress**.
1. Enter a **Name** for the ingress.
1. Select an existing **Namespace** from the drop-down list. Alternatively, you can create a new namespace on the fly by clicking **Add to a new namespace**.
1. Create ingress forwarding **Rules**. For help configuring the rules, refer to [this section.](#ingress-rule-configuration) If any of your ingress rules handle requests for encrypted ports, add a certificate to encrypt/decrypt communications.
@@ -24,7 +24,7 @@ Currently, deployments pull the private registry credentials automatically only
1. From the **Global** view, select the project containing the namespace(s) where you want to add a registry.
1. From the main menu, click **Resources > Secrets > Registry Credentials.** (For Rancher prior to v2.3, click **Resources > Registries.)**
1. From the main menu, click **Resources > Secrets > Registry Credentials.** (For Rancher before v2.3, click **Resources > Registries.)**
1. Click **Add Registry.**
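Under the hood this creates a standard Kubernetes image pull secret; a roughly equivalent kubectl command (registry URL, namespace, and credentials are placeholders) is:

```bash
kubectl -n my-namespace create secret docker-registry my-registry-credentials \
  --docker-server=registry.example.com \
  --docker-username=<username> \
  --docker-password=<password>
```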
@@ -53,7 +53,7 @@ You can deploy a workload with an image from a private registry through the Ranc
To deploy a workload with an image from your private registry,
1. Go to the project view.
1. Click **Resources > Workloads.** In versions prior to v2.3.0, go to the **Workloads** tab.
1. Click **Resources > Workloads.** In versions before v2.3.0, go to the **Workloads** tab.
1. Click **Deploy.**
1. Enter a unique name for the workload and choose a namespace.
1. In the **Docker Image** field, enter the URL of the path to the Docker image in your private registry. For example, if your private registry is on Quay.io, you could use `quay.io/<Quay profile name>/<Image name>`.
@@ -13,7 +13,7 @@ However, you also have the option of creating additional Service Discovery recor
1. From the **Global** view, open the project that you want to add a DNS record to.
1. Click **Resources** in the main navigation bar. Click the **Service Discovery** tab. (In versions prior to v2.3.0, just click the **Service Discovery** tab.) Then click **Add Record**.
1. Click **Resources** in the main navigation bar. Click the **Service Discovery** tab. (In versions before v2.3.0, just click the **Service Discovery** tab.) Then click **Add Record**.
1. Enter a **Name** for the DNS record. This name is used for DNS resolution.
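A record that points at an external hostname corresponds roughly to a Kubernetes `ExternalName` Service; a minimal sketch with placeholder names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-database        # the name used for DNS resolution
  namespace: my-namespace
spec:
  type: ExternalName
  externalName: db.example.com   # the external hostname this record resolves to
```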
@@ -9,7 +9,7 @@ A _sidecar_ is a container that extends or enhances the main container in a pod.
1. From the **Global** view, open the project running the workload you want to add a sidecar to.
1. Click **Resources > Workloads.** In versions prior to v2.3.0, select the **Workloads** tab.
1. Click **Resources > Workloads.** In versions before v2.3.0, select the **Workloads** tab.
1. Find the workload that you want to extend. Select **&#8942; icon (...) > Add a Sidecar**.
@@ -11,7 +11,7 @@ Deploy a workload to run an application in one or more containers.
1. From the **Global** view, open the project that you want to deploy a workload to.
1. 1. Click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) From the **Workloads** view, click **Deploy**.
1. Click **Resources > Workloads.** (In versions before v2.3.0, click the **Workloads** tab.) From the **Workloads** view, click **Deploy**.
1. Enter a **Name** for the workload.
@@ -5,7 +5,7 @@ weight: 2
---
This section contains documentation for the logging features that were available in Rancher prior to v2.5.
This section contains documentation for the logging features that were available in Rancher before v2.5.
- [Cluster logging](./cluster-logging)
- [Project logging](./project-logging)
@@ -59,7 +59,7 @@ Logs that are sent to your logging service are from the following locations:
1. From the **Global** view, navigate to the project that you want to configure project logging.
1. Select **Tools > Logging** in the navigation bar. In versions prior to v2.2.0, you can choose **Resources > Logging**.
1. Select **Tools > Logging** in the navigation bar. In versions before v2.2.0, you can choose **Resources > Logging**.
1. Select a logging service and enter the configuration. Refer to the specific service for detailed configuration. Rancher supports the following services:
@@ -24,7 +24,7 @@ With Longhorn, you can:
### New in Rancher v2.5
Prior to Rancher v2.5, Longhorn could be installed as a Rancher catalog app. In Rancher v2.5, the catalog system was replaced by the **Apps & Marketplace,** and it became possible to install Longhorn as an app from that page.
Before Rancher v2.5, Longhorn could be installed as a Rancher catalog app. In Rancher v2.5, the catalog system was replaced by the **Apps & Marketplace,** and it became possible to install Longhorn as an app from that page.
The **Cluster Explorer** now allows you to manipulate Longhorn's Kubernetes resources from the Rancher UI. You can control Longhorn functionality with the Longhorn UI, with kubectl, or by manipulating Longhorn's Kubernetes custom resources in the Rancher UI.
@@ -4,7 +4,7 @@ shortTitle: Rancher v2.0-v2.4
weight: 2
---
This section contains documentation related to the monitoring features available in Rancher prior to v2.5.
This section contains documentation related to the monitoring features available in Rancher before v2.5.
@@ -53,7 +53,7 @@ For information on other default alerts, refer to the section on [cluster-level
>**Prerequisite:** Before you can receive project alerts, you must add a notifier.
1. From the **Global** view, navigate to the project that you want to configure project alerts for. Select **Tools > Alerts**. In versions prior to v2.2.0, you can choose **Resources > Alerts**.
1. From the **Global** view, navigate to the project that you want to configure project alerts for. Select **Tools > Alerts**. In versions before v2.2.0, you can choose **Resources > Alerts**.
1. Click **Add Alert Group**.
@@ -75,7 +75,7 @@ For information on other default alerts, refer to the section on [cluster-level
# Managing Project Alerts
To manage project alerts, browse to the project that alerts you want to manage. Then select **Tools > Alerts**. In versions prior to v2.2.0, you can choose **Resources > Alerts**. You can:
To manage project alerts, browse to the project containing the alerts that you want to manage. Then select **Tools > Alerts**. In versions before v2.2.0, you can choose **Resources > Alerts**. You can:
- Deactivate/Reactivate alerts
- Edit alert settings
@@ -105,7 +105,7 @@ Workload metrics display the hardware utilization for a Kubernetes workload. You
1. From the **Global** view, navigate to the project that you want to view workload metrics.
1. From the main navigation bar, choose **Resources > Workloads.** In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.
1. From the main navigation bar, choose **Resources > Workloads.** In versions before v2.3.0, choose **Workloads** on the main navigation bar.
1. Select a specific workload and click on its name.
@@ -72,7 +72,7 @@ To access a project-level Grafana instance,
1. Go to a project that has monitoring enabled.
1. From the project view, click **Apps.** In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar.
1. From the project view, click **Apps.** In versions before v2.2.0, choose **Catalog Apps** on the main navigation bar.
1. Go to the `project-monitoring` application.
@@ -19,9 +19,9 @@ Rancher's dashboards are available at multiple locations:
- **Cluster Dashboard**: From the **Global** view, navigate to the cluster.
- **Node Metrics**: From the **Global** view, navigate to the cluster. Select **Nodes**. Find the individual node and click on its name. Click **Node Metrics.**
- **Workload Metrics**: From the **Global** view, navigate to the project. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Find the individual workload and click on its name. Click **Workload Metrics.**
- **Workload Metrics**: From the **Global** view, navigate to the project. From the main navigation bar, choose **Resources > Workloads.** (In versions before v2.3.0, choose **Workloads** on the main navigation bar.) Find the individual workload and click on its name. Click **Workload Metrics.**
- **Pod Metrics**: From the **Global** view, navigate to the project. Select **Workloads > Workloads**. Find the individual workload and click on its name. Find the individual pod and click on its name. Click **Pod Metrics.**
- **Container Metrics**: From the **Global** view, navigate to the project. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Find the individual workload and click on its name. Find the individual pod and click on its name. Find the individual container and click on its name. Click **Container Metrics.**
- **Container Metrics**: From the **Global** view, navigate to the project. From the main navigation bar, choose **Resources > Workloads.** (In versions before v2.3.0, choose **Workloads** on the main navigation bar.) Find the individual workload and click on its name. Find the individual pod and click on its name. Find the individual container and click on its name. Click **Container Metrics.**
Prometheus metrics are displayed and are denoted with the Grafana icon. If you click on the icon, the metrics will open a new tab in Grafana.
@@ -53,7 +53,7 @@ When you go to the Grafana instance, you will be logged in with the username `ad
1. Go to the **System** project view. This project is where the cluster-level Grafana instance runs.
1. Click **Apps.** In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar.
1. Click **Apps.** In versions before v2.2.0, choose **Catalog Apps** on the main navigation bar.
1. Go to the `cluster-monitoring` application.
@@ -19,7 +19,7 @@ Rancher's solution allows users to:
More information about the resources that get deployed onto your cluster to support this solution can be found in the [`rancher-monitoring`](https://github.com/rancher/charts/tree/main/charts/rancher-monitoring) Helm chart, which closely tracks the upstream [kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) Helm chart maintained by the Prometheus community with certain changes tracked in the [CHANGELOG.md](https://github.com/rancher/charts/blob/main/charts/rancher-monitoring/CHANGELOG.md).
> If you previously enabled Monitoring, Alerting, or Notifiers in Rancher prior to v2.5, there is no upgrade path for switching to the new monitoring/ alerting solution. You will need to disable monitoring/ alerting/notifiers in Cluster Manager before deploying the new monitoring solution via Cluster Explorer.
> If you previously enabled Monitoring, Alerting, or Notifiers in Rancher before v2.5, there is no upgrade path for switching to the new monitoring/alerting solution. You will need to disable monitoring/alerting/notifiers in Cluster Manager before deploying the new monitoring solution via Cluster Explorer.
For more information about upgrading the Monitoring app in Rancher 2.5, please refer to the [migration docs](./migrating).
@@ -5,11 +5,11 @@ aliases:
- /rancher/v2.x/en/monitoring-alerting/migrating
---
If you previously enabled Monitoring, Alerting, or Notifiers in Rancher prior to v2.5, there is no automatic upgrade path for switching to the new monitoring/alerting solution. Before deploying the new monitoring solution via Cluster Explore, you will need to disable and remove all existing custom alerts, notifiers and monitoring installations for the whole cluster and in all projects.
If you previously enabled Monitoring, Alerting, or Notifiers in Rancher before v2.5, there is no automatic upgrade path for switching to the new monitoring/alerting solution. Before deploying the new monitoring solution via Cluster Explorer, you will need to disable and remove all existing custom alerts, notifiers and monitoring installations for the whole cluster and in all projects.
### Monitoring Prior to Rancher v2.5
### Monitoring Before Rancher v2.5
As of v2.2.0, Rancher's Cluster Manager allowed users to enable Monitoring & Alerting V1 (both powered by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator)) independently within a cluster. For more information on how to configure Monitoring & Alerting V1, see the [docs about monitoring prior to Rancher v2.5]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x).
As of v2.2.0, Rancher's Cluster Manager allowed users to enable Monitoring & Alerting V1 (both powered by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator)) independently within a cluster. For more information on how to configure Monitoring & Alerting V1, see the [docs about monitoring before Rancher v2.5]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x).
When Monitoring is enabled, Monitoring V1 deploys [Prometheus](https://prometheus.io/) and [Grafana](https://grafana.com/docs/grafana/latest/getting-started/what-is-grafana/) onto a cluster to monitor the state of processes of your cluster nodes, Kubernetes components, and software deployments and create custom dashboards to make it easy to visualize collected metrics.
@@ -43,7 +43,7 @@ The option to install Rancher on a K3s cluster is a feature introduced in Ranche
### RKE Kubernetes Cluster Installations
If you are installing Rancher prior to v2.4, you will need to install Rancher on an RKE cluster, in which the cluster data is stored on each node with the etcd role. As of Rancher v2.4, there is no migration path to transition the Rancher server from an RKE cluster to a K3s cluster. All versions of the Rancher server, including v2.4+, can be installed on an RKE cluster.
If you are installing Rancher before v2.4, you will need to install Rancher on an RKE cluster, in which the cluster data is stored on each node with the etcd role. As of Rancher v2.4, there is no migration path to transition the Rancher server from an RKE cluster to a K3s cluster. All versions of the Rancher server, including v2.4+, can be installed on an RKE cluster.
In an RKE installation, the cluster data is replicated on each of three etcd nodes in the cluster, providing redundancy and data duplication in case one of the nodes fails.
@@ -45,7 +45,7 @@ A high-availability Kubernetes installation is recommended for production.
A Docker installation of Rancher is recommended only for development and testing purposes. The ability to migrate Rancher to a high-availability cluster depends on the Rancher version:
- For Rancher v2.0-v2.4, there was no migration path from a Docker installation to a high-availability installation. Therefore, if you are using Rancher prior to v2.5, you may want to use a Kubernetes installation from the start.
- For Rancher v2.0-v2.4, there was no migration path from a Docker installation to a high-availability installation. Therefore, if you are using Rancher before v2.5, you may want to use a Kubernetes installation from the start.
- For Rancher v2.5+, the Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.]({{<baseurl>}}/rancher/v2.x/en/backups/v2.5/migrating-rancher/)
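For reference, a Docker installation of Rancher is a single container; a minimal sketch (Rancher v2.5+ additionally requires the `--privileged` flag):

```bash
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:latest
```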
@@ -101,7 +101,7 @@ Select your provider's tab below and follow the directions.
{{% tab "GitHub" %}}
1. From the **Global** view, navigate to the project that you want to configure pipelines.
1. Select **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**.
1. Select **Tools > Pipelines** in the navigation bar. In versions before v2.2.0, you can select **Resources > Pipelines**.
1. Follow the directions displayed to **Setup a Github application**. Rancher redirects you to Github to setup an OAuth App in Github.
@@ -118,7 +118,7 @@ _Available as of v2.1.0_
1. From the **Global** view, navigate to the project that you want to configure pipelines.
1. Select **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**.
1. Select **Tools > Pipelines** in the navigation bar. In versions before v2.2.0, you can select **Resources > Pipelines**.
1. Follow the directions displayed to **Setup a GitLab application**. Rancher redirects you to GitLab.
@@ -182,7 +182,7 @@ After the version control provider is authorized, you are automatically re-direc
1. From the **Global** view, navigate to the project that you want to configure pipelines.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**
1. Click on **Configure Repositories**.
@@ -200,7 +200,7 @@ Now that repositories are added to your project, you can start configuring the p
1. From the **Global** view, navigate to the project that you want to configure pipelines.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**
1. Find the repository that you want to set up a pipeline for.
@@ -243,7 +243,7 @@ The configuration reference also covers how to configure:
# Running your Pipelines
Run your pipeline for the first time. From the project view in Rancher, go to **Resources > Pipelines.** (In versions prior to v2.3.0, go to the **Pipelines** tab.) Find your pipeline and select the vertical **&#8942; > Run**.
Run your pipeline for the first time. From the project view in Rancher, go to **Resources > Pipelines.** (In versions before v2.3.0, go to the **Pipelines** tab.) Find your pipeline and select the vertical **&#8942; > Run**.
During this initial run, your pipeline is tested, and the following pipeline components are deployed to your project as workloads in a new namespace dedicated to the pipeline:
@@ -269,7 +269,7 @@ Available Events:
1. From the **Global** view, navigate to the project that you want to modify the event trigger for the pipeline.
1. 1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**
1. Find the repository for which you want to modify the event triggers. Select the vertical **&#8942; > Setting**.
@@ -393,7 +393,7 @@ This section covers the following topics:
1. From the **Global** view, navigate to the project that you want to configure a pipeline trigger rule.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**
1. From the repository for which you want to manage trigger rules, select the vertical **&#8942; > Edit Config**.
@@ -411,7 +411,7 @@ This section covers the following topics:
1. From the **Global** view, navigate to the project that you want to configure a stage trigger rule.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**
1. From the repository for which you want to manage trigger rules, select the vertical **&#8942; > Edit Config**.
@@ -436,7 +436,7 @@ This section covers the following topics:
1. From the **Global** view, navigate to the project that you want to configure a stage trigger rule.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**
1. From the repository for which you want to manage trigger rules, select the vertical **&#8942; > Edit Config**.
@@ -491,7 +491,7 @@ When configuring a pipeline, certain [step types](#step-types) allow you to use
1. From the **Global** view, navigate to the project that you want to configure pipelines.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**
1. From the pipeline for which you want to edit build triggers, select **&#8942; > Edit Config**.
@@ -534,7 +534,7 @@ Create a secret in the same project as your pipeline, or explicitly in the names
1. From the **Global** view, navigate to the project that you want to configure pipelines.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**
1. From the pipeline for which you want to edit build triggers, select **&#8942; > Edit Config**.
@@ -584,7 +584,7 @@ Variable Name | Description
# Global Pipeline Execution Settings
After configuring a version control provider, there are several options that can be configured globally on how pipelines are executed in Rancher. These settings can be edited by selecting **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**.
After configuring a version control provider, there are several options that can be configured globally on how pipelines are executed in Rancher. These settings can be edited by selecting **Tools > Pipelines** in the navigation bar. In versions before v2.2.0, you can select **Resources > Pipelines**.
- [Executor Quota](#executor-quota)
- [Resource Quota for Executors](#resource-quota-for-executors)
@@ -37,7 +37,7 @@ You can set up your pipeline to run a series of stages and steps to test your co
1. Go to the project you want this pipeline to run in.
2. Click **Resources > Pipelines.** In versions prior to v2.3.0,click **Workloads > Pipelines.**
2. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**
4. Click the **Add Pipeline** button.
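Whether assembled in the UI or committed to the repository, the pipeline definition boils down to stages and steps. A minimal sketch of a `.rancher-pipeline.yml` (the image and commands are placeholders; see the pipeline configuration reference for the full schema):

```yaml
stages:
  - name: Build and test
    steps:
      - runScriptConfig:
          image: golang:1.13          # placeholder build image
          shellScript: |
            go build ./...
            go test ./...
```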
@@ -26,7 +26,7 @@ By default, the example pipeline repositories are disabled. Enable one (or more)
1. From the **Global** view, navigate to the project that you want to test out pipelines.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**
1. Click **Configure Repositories**.
@@ -52,7 +52,7 @@ After enabling an example repository, review the pipeline to see how it is set u
1. From the **Global** view, navigate to the project that you want to test out pipelines.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**
1. Find the example repository, select the vertical **&#8942;**. There are two ways to view the pipeline:
* **Rancher UI**: Click on **Edit Config** to view the stages and steps of the pipeline.
@@ -64,7 +64,7 @@ After enabling an example repository, run the pipeline to see how it works.
1. From the **Global** view, navigate to the project that you want to test out pipelines.
1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**
1. Find the example repository, select the vertical **&#8942; > Run**.
@@ -15,7 +15,7 @@ This section assumes that you understand how persistent storage works in Kuberne
### A. Configuring Persistent Data for Docker Registry
1. From the project that you're configuring a pipeline for, and click **Resources > Workloads.** In versions prior to v2.3.0, select the **Workloads** tab.
1. From the project that you're configuring a pipeline for, click **Resources > Workloads.** In versions before v2.3.0, select the **Workloads** tab.
1. Find the `docker-registry` workload and select **&#8942; > Edit**.
@@ -61,7 +61,7 @@ This section assumes that you understand how persistent storage works in Kuberne
### B. Configuring Persistent Data for Minio
1. From the project view, click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) Find the `minio` workload and select **&#8942; > Edit**.
1. From the project view, click **Resources > Workloads.** (In versions before v2.3.0, click the **Workloads** tab.) Find the `minio` workload and select **&#8942; > Edit**.
1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:
@@ -27,7 +27,7 @@ Edit [container default resource limit]({{<baseurl>}}/rancher/v2.x/en/k8s-in-ran
When the default container resource limit is set at a project level, the parameter will be propagated to any namespace created in the project after the limit has been set. For any existing namespace in a project, this limit will not be automatically propagated. You will need to manually set the default container resource limit for any existing namespaces in the project in order for it to be used when creating any containers.
> **Note:** Prior to v2.2.0, you could not launch catalog applications that did not have any limits set. With v2.2.0, you can set a default container resource limit on a project and launch any catalog applications.
> **Note:** Before v2.2.0, you could not launch catalog applications that did not have any limits set. With v2.2.0, you can set a default container resource limit on a project and launch any catalog applications.
Once a container default resource limit is configured on a namespace, the default will be pre-populated for any containers created in that namespace. These limits/reservations can always be overridden during workload creation.
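On the Kubernetes side, a namespace-level default container resource limit corresponds to a `LimitRange` object, sketched here with placeholder values:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: my-namespace
spec:
  limits:
    - type: Container
      default:            # default limits for containers that set none
        cpu: 500m
        memory: 256Mi
      defaultRequest:     # default requests for containers that set none
        cpu: 100m
        memory: 128Mi
```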
@@ -19,7 +19,7 @@ For this workload, you'll be deploying the application Rancher Hello-World.
3. Open the **Project: Default** project.
4. Click **Resources > Workloads.** In versions prior to v2.3.0, click **Workloads > Workloads.**
4. Click **Resources > Workloads.** In versions before v2.3.0, click **Workloads > Workloads.**
5. Click **Deploy**.
@@ -49,7 +49,7 @@ Now that the application is up and running it needs to be exposed so that other
3. Open the **Default** project.
4. Click **Resources > Workloads > Load Balancing.** In versions prior to v2.3.0, click the **Workloads** tab. Click on the **Load Balancing** tab.
4. Click **Resources > Workloads > Load Balancing.** In versions before v2.3.0, click the **Workloads** tab. Click on the **Load Balancing** tab.
5. Click **Add Ingress**.
@@ -19,7 +19,7 @@ For this workload, you'll be deploying the application Rancher Hello-World.
3. Open the **Project: Default** project.
4. Click **Resources > Workloads.** In versions prior to v2.3.0, click **Workloads > Workloads.**
4. Click **Resources > Workloads.** In versions before v2.3.0, click **Workloads > Workloads.**
5. Click **Deploy**.
@@ -44,7 +44,7 @@ kernel.keys.root_maxbytes=25000000
Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.
### Configure `etcd` user and group
A user account and group for the **etcd** service is required to be setup prior to installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.
A user account and group for the **etcd** service must be set up before installing RKE. The **uid** and **gid** of the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories at installation time.
#### Create `etcd` user and group
To create the **etcd** group, run the following console commands.
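As a rough illustration (the uid and gid values are placeholders, not values mandated by this guide), creating a dedicated service account and group typically looks like:

```bash
# Create the etcd group and a matching system user with no login shell
groupadd --gid 52034 etcd
useradd --comment "etcd service account" --uid 52034 --gid 52034 \
  --shell /usr/sbin/nologin etcd
```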
@@ -44,7 +44,7 @@ kernel.keys.root_maxbytes=25000000
Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.
### Configure `etcd` user and group
A user account and group for the **etcd** service is required to be setup prior to installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.
A user account and group for the **etcd** service must be set up before installing RKE. The **uid** and **gid** of the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories at installation time.
#### Create `etcd` user and group
To create the **etcd** group, run the following console commands.
@@ -41,7 +41,7 @@ kernel.keys.root_maxbytes=25000000
Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.
### Configure `etcd` user and group
A user account and group for the **etcd** service is required to be setup prior to installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.
A user account and group for the **etcd** service must be set up before installing RKE. The **uid** and **gid** of the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories at installation time.
#### Create `etcd` user and group
To create the **etcd** group, run the following console commands.
@@ -41,7 +41,7 @@ kernel.keys.root_maxbytes=25000000
Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.
### Configure `etcd` user and group
A user account and group for the **etcd** service is required to be setup prior to installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.
A user account and group for the **etcd** service must be set up before installing RKE. The **uid** and **gid** of the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories at installation time.
#### Create `etcd` user and group
To create the **etcd** group, run the following console commands.
@@ -71,7 +71,7 @@ In the image below, the `web-deployment.yml` and `web-service.yml` files [create
Just as you can create an alias for Rancher v1.6 services, you can do the same for Rancher v2.x workloads. Similarly, you can also create DNS records pointing to services running externally, using either their hostname or IP address. These DNS records are Kubernetes service objects.
Using the v2.x UI, use the context menu to navigate to the `Project` view. Then click **Resources > Workloads > Service Discovery.** (In versions prior to v2.3.0, click the **Workloads > Service Discovery** tab.) All existing DNS records created for your workloads are listed under each namespace.
In the v2.x UI, use the context menu to navigate to the `Project` view. Then click **Resources > Workloads > Service Discovery.** (In versions before v2.3.0, click the **Workloads > Service Discovery** tab.) All existing DNS records created for your workloads are listed under each namespace.
Click **Add Record** to create new DNS records. Then view the various options supported to link to external services or to create aliases for another workload, DNS record, or set of pods.
@@ -74,14 +74,14 @@ Although Rancher v2.x supports HTTP and HTTPS hostname and path-based load balan
## Deploying Ingress
You can launch a new load balancer to replace your load balancer from v1.6. Using the Rancher v2.x UI, browse to the applicable project and choose **Resources > Workloads > Load Balancing.** (In versions prior to v2.3.0, click **Workloads > Load Balancing.**) Then click **Deploy**. During deployment, you can choose a target project or namespace.
You can launch a new load balancer to replace your load balancer from v1.6. Using the Rancher v2.x UI, browse to the applicable project and choose **Resources > Workloads > Load Balancing.** (In versions before v2.3.0, click **Workloads > Load Balancing.**) Then click **Deploy**. During deployment, you can choose a target project or namespace.
>**Prerequisite:** Before deploying Ingress, you must have a workload deployed that's running a scale of two or more pods.
>
![Workload Scale]({{<baseurl>}}/img/rancher/workload-scale.png)
For balancing between these two pods, you must create a Kubernetes Ingress rule. To create this rule, navigate to your cluster and project, and click **Resources > Workloads > Load Balancing.** (In versions prior to v2.3.0, click **Workloads > Load Balancing.**) Then click **Add Ingress**. This GIF below depicts how to add Ingress to one of your projects.
For balancing between these two pods, you must create a Kubernetes Ingress rule. To create this rule, navigate to your cluster and project, and click **Resources > Workloads > Load Balancing.** (In versions before v2.3.0, click **Workloads > Load Balancing.**) Then click **Add Ingress**. The GIF below depicts how to add Ingress to one of your projects.
<figcaption>Browsing to Load Balancer Tab and Adding Ingress</figcaption>
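The rule the UI creates is an ordinary Kubernetes Ingress object. A minimal sketch (hostname, service name, and namespace are placeholders, and the exact API version depends on your Kubernetes release):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
  namespace: default
spec:
  rules:
    - host: hello.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world    # the service backing the two pods
                port:
                  number: 80
```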
@@ -263,7 +263,7 @@ Use the following Rancher CLI commands to deploy your application using Rancher
{{% /tab %}}
{{% /tabs %}}
Following importation, you can view your v1.6 services in the v2.x UI as Kubernetes manifests by using the context menu to select `<CLUSTER> > <PROJECT>` that contains your services. The imported manifests will display on the **Resources > Workloads** and on the tab at **Resources > Workloads > Service Discovery.** (In Rancher v2.x prior to v2.3.0, these are on the **Workloads** and **Service Discovery** tabs in the top navigation bar.)
After importing, you can view your v1.6 services in the v2.x UI as Kubernetes manifests by using the context menu to select the `<CLUSTER> > <PROJECT>` that contains your services. The imported manifests are displayed under **Resources > Workloads** and under **Resources > Workloads > Service Discovery.** (In Rancher v2.x before v2.3.0, these are on the **Workloads** and **Service Discovery** tabs in the top navigation bar.)
<figcaption>Imported Services</figcaption>
@@ -87,7 +87,7 @@ Rancher schedules pods to the node you select if 1) there are compute resource a
If you expose the workload using a NodePort that conflicts with another workload, the deployment gets created successfully, but no NodePort service is created. Therefore, the workload isn't exposed outside of the cluster.
After the workload is created, you can confirm that the pods are scheduled to your chosen node. From the project view, click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) Click the **Group by Node** icon to sort your workloads by node. Note that both Nginx pods are scheduled to the same node.
After the workload is created, you can confirm that the pods are scheduled to your chosen node. From the project view, click **Resources > Workloads.** (In versions before v2.3.0, click the **Workloads** tab.) Click the **Group by Node** icon to sort your workloads by node. Note that both Nginx pods are scheduled to the same node.
![Pods Scheduled to Same Node]({{<baseurl>}}/img/rancher/scheduled-nodes.png)
@@ -16,7 +16,7 @@ There are a few things worth noting:
* In addition to these pluggable add-ons, you can specify an add-on that you want deployed after the cluster deployment is complete.
* As of v0.1.8, RKE will update an add-on if it has the same name.
* Prior to v0.1.8, update any add-ons by using `kubectl edit`.
* Before v0.1.8, update any add-ons by using `kubectl edit`.
## Critical and Non-Critical Add-ons
@@ -6,7 +6,7 @@ weight: 262
By default, RKE deploys the NGINX ingress controller on all schedulable nodes.
> **Note:** As of v0.1.8, only workers are considered schedulable nodes, but prior to v0.1.8, worker and controlplane nodes were considered schedulable nodes.
> **Note:** As of v0.1.8, only workers are considered schedulable nodes, but before v0.1.8, worker and controlplane nodes were considered schedulable nodes.
RKE will deploy the ingress controller as a DaemonSet with `hostnetwork: true`, so ports `80`, and `443` will be opened on each node where the controller is deployed.
@@ -18,7 +18,7 @@ RKE only adds additional add-ons when using `rke up` multiple times. RKE does **
As of v0.1.8, RKE will update an add-on if it has the same name.
Prior to v0.1.8, update any add-ons by using `kubectl edit`.
Before v0.1.8, update any add-ons by using `kubectl edit`.
## In-line Add-ons
@@ -32,4 +32,4 @@ $ govc vm.change -vm <vm-path> -e disk.enableUUID=TRUE
In Rancher v2.0.4+, disk UUIDs are enabled in vSphere node templates by default.
If you are using Rancher prior to v2.0.4, refer to the [vSphere node template documentation.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4/#disk-uuids) for details on how to enable a UUID with a Rancher node template.
If you are using Rancher before v2.0.4, refer to the [vSphere node template documentation]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4/#disk-uuids) for details on how to enable a UUID with a Rancher node template.
@@ -78,7 +78,7 @@ nodes:
You can specify the list of roles that you want the node to have in the Kubernetes cluster. Three roles are supported: `controlplane`, `etcd` and `worker`. Node roles are not mutually exclusive. It's possible to assign any combination of roles to any node. It's also possible to change a node's role using the upgrade process.
> **Note:** Prior to v0.1.8, workloads/pods might have run on any nodes with `worker` or `controlplane` roles, but as of v0.1.8, they will only be deployed to any `worker` nodes.
> **Note:** Before v0.1.8, workloads/pods might have run on any nodes with `worker` or `controlplane` roles, but as of v0.1.8, they will only be deployed to any `worker` nodes.
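In `cluster.yml`, the roles are listed per node, for example (the address and user are placeholders):

```yaml
nodes:
  - address: 10.0.0.10
    user: ubuntu
    role:
      - controlplane
      - etcd
      - worker
```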
### etcd
@@ -35,5 +35,5 @@ By default, all system images are being pulled from DockerHub. If you are on a s
As of v0.1.10, you have to configure your private registry credentials, but you can specify this registry as a default registry so that all [system images]({{<baseurl>}}/rke/latest/en/config-options/system-images/) are pulled from the designated private registry. You can use the command `rke config --system-images` to get the list of default system images to populate your private registry.
Prior to v0.1.10, you had to configure your private registry credentials **and** update the names of all the [system images]({{<baseurl>}}/rke/latest/en/config-options/system-images/) in the `cluster.yml` so that the image names would have the private registry URL appended before each image name.
Before v0.1.10, you had to configure your private registry credentials **and** update the names of all the [system images]({{<baseurl>}}/rke/latest/en/config-options/system-images/) in the `cluster.yml` so that each image name had the private registry URL prepended to it.
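As of v0.1.10, that configuration is a short block in `cluster.yml`; a sketch with placeholder values:

```yaml
private_registries:
  - url: registry.example.com
    user: <username>
    password: <password>
    is_default: true   # pull all system images from this registry
```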