diff --git a/docs/en/admin-settings/authentication/keycloak-saml/keycloak-saml.md b/docs/en/admin-settings/authentication/keycloak-saml/keycloak-saml.md
index 75a3c5cf97c..19cb7a727a6 100644
--- a/docs/en/admin-settings/authentication/keycloak-saml/keycloak-saml.md
+++ b/docs/en/admin-settings/authentication/keycloak-saml/keycloak-saml.md
@@ -34,12 +34,12 @@ If your organization uses Keycloak Identity Provider (IdP) for user authenticati
## Getting the IDP Metadata
-{{% tabs %}}
-{{% tab "Keycloak 5 and earlier" %}}
+
+
To get the IDP metadata, export a `metadata.xml` file from your Keycloak client.
From the **Installation** tab, choose the **SAML Metadata IDPSSODescriptor** format option and download your file.
-{{% /tab %}}
-{{% tab "Keycloak 6-13" %}}
+
+
1. From the **Configure** section, click the **Realm Settings** tab.
1. Click the **General** tab.
@@ -77,8 +77,8 @@ You are left with something similar as the example below:
```
-{{% /tab %}}
-{{% tab "Keycloak 14+" %}}
+
+
1. From the **Configure** section, click the **Realm Settings** tab.
1. Click the **General** tab.
@@ -102,8 +102,8 @@ The following is an example process for Firefox, but will vary slightly for othe
1. From the details pane, click the **Response** tab.
1. Copy the raw response data.
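A toy sketch (not from the Rancher docs) of what to do with the copied data: the raw SAML response is base64-encoded XML, so decoding it lets you inspect the payload. The string below is a stand-in for your copied data; it decodes to the placeholder XML `<x/>`.

```shell
# Placeholder value — substitute the raw response data you copied.
SAML_RESPONSE_B64="PHgvPg=="
# Decode to inspect the XML (prints "<x/>" for this placeholder):
printf '%s' "$SAML_RESPONSE_B64" | base64 -d
```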
-{{% /tab %}}
-{{% /tabs %}}
+
+
## Configuring Keycloak in Rancher
diff --git a/docs/en/admin-settings/branding/branding.md b/docs/en/admin-settings/branding/branding.md
index 667db8ecde6..29f9bb1e57b 100644
--- a/docs/en/admin-settings/branding/branding.md
+++ b/docs/en/admin-settings/branding/branding.md
@@ -44,11 +44,11 @@ You can override the primary color used throughout the UI with a custom color of
### Fixed Banners
-{{% tabs %}}
-{{% tab "Rancher before v2.6.4" %}}
+
+
Display a custom fixed banner in the header, footer, or both.
-{{% /tab %}}
-{{% tab "Rancher v2.6.4+" %}}
+
+
Display a custom fixed banner in the header, footer, or both.
As of Rancher v2.6.4, configuration of fixed banners has moved from the **Branding** tab to the **Banners** tab.
@@ -57,8 +57,8 @@ To configure banner settings,
1. Click **☰ > Global settings**.
2. Click **Banners**.
-{{% /tab %}}
-{{% /tabs %}}
+
+
# Custom Navigation Links
diff --git a/docs/en/admin-settings/rbac/cluster-project-roles/cluster-project-roles.md b/docs/en/admin-settings/rbac/cluster-project-roles/cluster-project-roles.md
index a34d1c4653c..e58cea8b5a7 100644
--- a/docs/en/admin-settings/rbac/cluster-project-roles/cluster-project-roles.md
+++ b/docs/en/admin-settings/rbac/cluster-project-roles/cluster-project-roles.md
@@ -88,24 +88,24 @@ To assign a custom role to a new cluster member, you can use the Rancher UI. To
To assign the role to a new cluster member,
-{{% tabs %}}
-{{% tab "Rancher before v2.6.4" %}}
+
+
1. Click **☰ > Cluster Management**.
1. Go to the cluster where you want to assign a role to a member and click **Explore**.
1. Click **RBAC > Cluster Members**.
1. Click **Add**.
1. In the **Cluster Permissions** section, choose the custom cluster role that should be assigned to the member.
1. Click **Create**.
-{{% /tab %}}
-{{% tab "Rancher v2.6.4+" %}}
+
+
1. Click **☰ > Cluster Management**.
1. Go to the cluster where you want to assign a role to a member and click **Explore**.
1. Click **Cluster > Cluster Members**.
1. Click **Add**.
1. In the **Cluster Permissions** section, choose the custom cluster role that should be assigned to the member.
1. Click **Create**.
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:** The member has the assigned role.
diff --git a/docs/en/api/api.md b/docs/en/api/api.md
index d1cc9cc4454..d4abcc7a836 100644
--- a/docs/en/api/api.md
+++ b/docs/en/api/api.md
@@ -7,20 +7,20 @@ weight: 24
The API has its own user interface accessible from a web browser. This is an easy way to see resources, perform actions, and see the equivalent cURL or HTTP request & response. To access it:
-{{% tabs %}}
-{{% tab "Rancher v2.6.4+" %}}
+
+
1. Click on your user avatar in the upper right corner.
1. Click **Account & API Keys**.
1. Under the **API Keys** section, find the **API Endpoint** field and click the link. The link will look something like `https://<RANCHER_SERVER>/v3`, where `<RANCHER_SERVER>` is the fully qualified domain name of your Rancher deployment.
-{{% /tab %}}
-{{% tab "Rancher before v2.6.4" %}}
+
+
Go to the URL endpoint at `https://<RANCHER_SERVER>/v3`, where `<RANCHER_SERVER>` is the fully qualified domain name of your Rancher deployment.
-{{% /tab %}}
-{{% /tabs %}}
+
+
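As a sketch of what command-line access to that endpoint looks like (the server name and API key below are placeholders, not values from this page):

```shell
# Hypothetical values — substitute your Rancher FQDN and an API key.
RANCHER_SERVER="rancher.example.com"
TOKEN="token-abcde:secret"
API_URL="https://${RANCHER_SERVER}/v3"
# The request you would run (printed here rather than executed):
echo "curl -s -u ${TOKEN} ${API_URL}/clusters"
```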
## Authentication
diff --git a/docs/en/cluster-admin/certificate-rotation/certificate-rotation.md b/docs/en/cluster-admin/certificate-rotation/certificate-rotation.md
index c0a67823fe9..1f886ae9304 100644
--- a/docs/en/cluster-admin/certificate-rotation/certificate-rotation.md
+++ b/docs/en/cluster-admin/certificate-rotation/certificate-rotation.md
@@ -13,8 +13,8 @@ By default, Kubernetes clusters require certificates and Rancher launched Kubern
Certificates can be rotated for the following services:
-{{% tabs %}}
-{{% tab "RKE" %}}
+
+
- etcd
- kubelet (node certificate)
@@ -24,8 +24,8 @@ Certificates can be rotated for the following services:
- kube-scheduler
- kube-controller-manager
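When managing a cluster directly with the RKE CLI, the services above can also be rotated individually with `rke cert rotate`; a sketch, assuming the cluster config file is `cluster.yml` in the working directory:

```shell
# Sketch: rotate the certificate of a single service with the RKE CLI.
CONFIG="cluster.yml"   # assumed path to your RKE cluster config
SERVICE="etcd"         # any service from the list above
# Printed rather than executed; run the printed command against your cluster:
echo "rke cert rotate --config ${CONFIG} --service ${SERVICE}"
```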
-{{% /tab %}}
-{{% tab "RKE2" %}}
+
+
- admin
- api-server
@@ -39,8 +39,8 @@ Certificates can be rotated for the following services:
- kubelet
- kube-proxy
-{{% /tab %}}
-{{% /tabs %}}
+
+
:::note
@@ -65,15 +65,15 @@ Rancher launched Kubernetes clusters have the ability to rotate the auto-generat
### Additional Notes
-{{% tabs %}}
-{{% tab "RKE" %}}
+
+
Even though the RKE CLI can use custom certificates for the Kubernetes cluster components, Rancher currently does not allow you to upload these certificates for Rancher-launched Kubernetes clusters.
-{{% /tab %}}
-{{% tab "RKE2" %}}
+
+
In RKE2, both etcd and control plane nodes are treated as the same `server` concept. As such, rotating the certificates of services specific to either of these components will result in certificates being rotated on both. The certificates will only change for the specified service, but you will see nodes for both components go into an updating state. You may also see worker-only nodes go into an updating state; the workers are restarted after a certificate change to ensure they pick up the latest client certificates.
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/docs/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md b/docs/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md
index e09577fb57c..cdd272d5eb7 100644
--- a/docs/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md
+++ b/docs/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md
@@ -59,8 +59,8 @@ For registered clusters, the process for removing Rancher is a little different.
After the registered cluster is detached from Rancher, the cluster's workloads will be unaffected and you can access the cluster using the same methods that you did before the cluster was registered into Rancher.
-{{% tabs %}}
-{{% tab "By UI / API" %}}
+
+
:::danger
This process will remove data from your cluster. Make sure you have created a backup of files you want to keep before executing the command, as data will be lost.
@@ -77,8 +77,8 @@ After you initiate the removal of a registered cluster using the Rancher UI (or
**Result:** All components listed for registered clusters in [What Gets Removed?](#what-gets-removed) are deleted.
-{{% /tab %}}
-{{% tab "By Script" %}}
+
+
Rather than cleaning registered cluster nodes using the Rancher UI, you can run a script instead.
:::note Prerequisite:
@@ -112,8 +112,8 @@ Install [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
**Result:** The script runs. All components listed for registered clusters in [What Gets Removed?](#what-gets-removed) are deleted.
-{{% /tab %}}
-{{% /tabs %}}
+
+
### Windows Nodes
diff --git a/docs/en/cluster-provisioning/node-requirements/node-requirements.md b/docs/en/cluster-provisioning/node-requirements/node-requirements.md
index a6d2924de98..c68bebec58d 100644
--- a/docs/en/cluster-provisioning/node-requirements/node-requirements.md
+++ b/docs/en/cluster-provisioning/node-requirements/node-requirements.md
@@ -53,8 +53,8 @@ SUSE Linux may have a firewall that blocks all ports by default. In that situati
When [Launching Kubernetes with Rancher]({{}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/) using Flatcar Container Linux nodes, you must use the following configuration in the [Cluster Config File]({{}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/options/#cluster-config-file):
-{{% tabs %}}
-{{% tab "Canal"%}}
+
+
```yaml
rancher_kubernetes_engine_config:
@@ -69,9 +69,9 @@ rancher_kubernetes_engine_config:
extra_args:
flex-volume-plugin-dir: /opt/kubernetes/kubelet-plugins/volume/exec/
```
-{{% /tab %}}
+
-{{% tab "Calico"%}}
+
```yaml
rancher_kubernetes_engine_config:
@@ -86,8 +86,8 @@ rancher_kubernetes_engine_config:
extra_args:
flex-volume-plugin-dir: /opt/kubernetes/kubelet-plugins/volume/exec/
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
You must also enable the Docker service. You can enable the Docker service using the following command:
diff --git a/docs/en/cluster-provisioning/rke-clusters/node-pools/azure/azure.md b/docs/en/cluster-provisioning/rke-clusters/node-pools/azure/azure.md
index 731dd8c755d..425b0c21200 100644
--- a/docs/en/cluster-provisioning/rke-clusters/node-pools/azure/azure.md
+++ b/docs/en/cluster-provisioning/rke-clusters/node-pools/azure/azure.md
@@ -47,8 +47,8 @@ The creation of this service principal returns three pieces of identification in
# Creating an Azure Cluster
-{{% tabs %}}
-{{% tab "RKE" %}}
+
+
1. [Create your cloud credentials](#1-create-your-cloud-credentials)
2. [Create a node template with your cloud credentials](#2-create-a-node-template-with-your-cloud-credentials)
@@ -88,8 +88,8 @@ Use Rancher to create a Kubernetes cluster in Azure.
1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user.
1. Click **Create**.
-{{% /tab %}}
-{{% tab "RKE2" %}}
+
+
### 1. Create your cloud credentials
@@ -120,8 +120,8 @@ Use Rancher to create a Kubernetes cluster in Azure.
1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user.
1. Click **Create**.
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:**
diff --git a/docs/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/digital-ocean.md b/docs/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/digital-ocean.md
index 7d3e103cd1e..8f28ce92772 100644
--- a/docs/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/digital-ocean.md
+++ b/docs/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/digital-ocean.md
@@ -9,8 +9,8 @@ First, you will set up your DigitalOcean cloud credentials in Rancher. Then you
Then you will create a DigitalOcean cluster in Rancher, and when configuring the new cluster, you will define node pools for it. Each node pool will have a Kubernetes role of etcd, controlplane, or worker. Rancher will install RKE Kubernetes on the new nodes, and it will set up each node with the Kubernetes role defined by the node pool.
-{{% tabs %}}
-{{% tab "RKE" %}}
+
+
1. [Create your cloud credentials](#1-create-your-cloud-credentials)
2. [Create a node template with your cloud credentials](#2-create-a-node-template-with-your-cloud-credentials)
@@ -48,8 +48,8 @@ Creating a [node template]({{}}/rancher/v2.6/en/cluster-provisioning/rk
1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user.
1. Click **Create**.
-{{% /tab %}}
-{{% tab "RKE2" %}}
+
+
### 1. Create your cloud credentials
@@ -78,8 +78,8 @@ Use Rancher to create a Kubernetes cluster in DigitalOcean.
1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user.
1. Click **Create**.
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:**
diff --git a/docs/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2.md b/docs/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2.md
index 5cafd243f31..a198070622b 100644
--- a/docs/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2.md
+++ b/docs/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2.md
@@ -23,8 +23,8 @@ Then you will create an EC2 cluster in Rancher, and when configuring the new clu
The steps to create a cluster differ based on your Rancher version.
-{{% tabs %}}
-{{% tab "RKE" %}}
+
+
1. [Create your cloud credentials](#1-create-your-cloud-credentials)
2. [Create a node template with your cloud credentials and information from EC2](#2-create-a-node-template-with-your-cloud-credentials-and-information-from-ec2)
@@ -78,8 +78,8 @@ Add one or more node pools to your cluster. For more information about node pool
1. Click **Create**.
-{{% /tab %}}
-{{% tab "RKE2" %}}
+
+
### 1. Create your cloud credentials
@@ -110,8 +110,8 @@ If you already have a set of cloud credentials to use, skip this section.
1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user.
1. Click **Create**.
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:**
diff --git a/docs/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md b/docs/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md
index 7c85b2d3bf0..24414f08f27 100644
--- a/docs/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md
+++ b/docs/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md
@@ -178,8 +178,8 @@ This final command to install Rancher requires a domain name that forwards traff
:::
-{{% tabs %}}
-{{% tab "Rancher-generated Certificates" %}}
+
+
By default, Rancher generates a CA and uses `cert-manager` to issue the certificate for access to the Rancher server interface.
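A minimal install command for this default option might look like the following; the chart repo name and hostname are placeholders, and the full command for your version is in the Rancher install docs:

```shell
REPO="rancher-stable"            # whichever chart repo you added earlier
HOSTNAME="rancher.example.com"   # must resolve to your load balancer
# Printed rather than executed; run the printed command with Helm:
echo "helm install rancher ${REPO}/rancher --namespace cattle-system --set hostname=${HOSTNAME}"
```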
@@ -206,8 +206,8 @@ Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are
deployment "rancher" successfully rolled out
```
-{{% /tab %}}
-{{% tab "Let's Encrypt" %}}
+
+
This option uses `cert-manager` to automatically request and renew [Let's Encrypt](https://letsencrypt.org/) certificates. This is a free service that provides you with a valid certificate as Let's Encrypt is a trusted CA.
@@ -244,8 +244,8 @@ Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are
deployment "rancher" successfully rolled out
```
-{{% /tab %}}
-{{% tab "Certificates from Files" %}}
+
+
In this option, Kubernetes secrets are created from your own certificates for Rancher to use.
When you run this command, the `hostname` option must match the `Common Name` or a `Subject Alternative Names` entry in the server certificate or the Ingress controller will fail to configure correctly.
@@ -283,8 +283,8 @@ helm install rancher rancher-/rancher \
```
Now that Rancher is deployed, see [Adding TLS Secrets]({{}}/rancher/v2.6/en/installation/resources/tls-secrets/) to publish the certificate files so Rancher and the Ingress controller can use them.
-{{% /tab %}}
-{{% /tabs %}}
+
+
The Rancher chart configuration has many options for customizing the installation to suit your specific environment. Here are some common advanced scenarios.
diff --git a/docs/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md b/docs/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md
index e98f0801e20..6ae7aacc5f9 100644
--- a/docs/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md
+++ b/docs/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md
@@ -15,8 +15,8 @@ Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes
The steps to set up an air-gapped Kubernetes cluster on RKE, RKE2, or K3s are shown below.
-{{% tabs %}}
-{{% tab "K3s" %}}
+
+
In this guide, we are assuming you have created your nodes in your air-gapped environment and have a secure Docker private registry on your bastion server.
@@ -143,8 +143,8 @@ Upgrading an air-gap environment can be accomplished in the following manner:
1. Download the new air-gap images (tar file) from the [releases](https://github.com/k3s-io/k3s/releases) page for the version of K3s you will be upgrading to. Place the tar in the `/var/lib/rancher/k3s/agent/images/` directory on each node. Delete the old tar file.
2. Copy and replace the old K3s binary in `/usr/local/bin` on each node. Copy over the install script at https://get.k3s.io (as it is possible it has changed since the last release). Run the script again just as you had done in the past with the same environment variables.
3. Restart the K3s service (if not restarted automatically by installer).
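The three steps above can be sketched as shell commands; the file names are illustrative, so take them from the actual K3s release artifacts you download:

```shell
# Illustrative file names — use the artifacts from the release you downloaded.
IMAGES_DIR="/var/lib/rancher/k3s/agent/images"
NEW_TAR="k3s-airgap-images-amd64.tar"
# Printed rather than executed; run these on each node:
echo "cp ${NEW_TAR} ${IMAGES_DIR}/"           # step 1: stage the new image tarball
echo "install -m 755 k3s /usr/local/bin/k3s"  # step 2: replace the k3s binary
echo "systemctl restart k3s"                  # step 3: restart the service
```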
-{{% /tab %}}
-{{% tab "RKE2" %}}
+
+
In this guide, we are assuming you have created your nodes in your air-gapped environment and have a secure Docker private registry on your bastion server.
@@ -279,8 +279,8 @@ Upgrading an air-gap environment can be accomplished in the following manner:
1. Download the new air-gap artifacts and install script from the [releases](https://github.com/rancher/rke2/releases) page for the version of RKE2 you will be upgrading to.
2. Run the script again just as you had done in the past with the same environment variables.
3. Restart the RKE2 service.
-{{% /tab %}}
-{{% tab "RKE" %}}
+
+
We will create a Kubernetes cluster using Rancher Kubernetes Engine (RKE). Before being able to start your Kubernetes cluster, you’ll need to install RKE and create an RKE config file.
### 1. Install RKE
@@ -359,8 +359,8 @@ Save a copy of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_cluster.yml`: The [Kubeconfig file]({{}}/rke/latest/en/kubeconfig/) for the cluster, this file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file]({{}}/rke/latest/en/installation/#kubernetes-cluster-state), this file contains the current state of the cluster including the RKE configuration and the certificates.
_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._
-{{% /tab %}}
-{{% /tabs %}}
+
+
:::note
diff --git a/docs/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md b/docs/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md
index 354ffb0bfdc..7d5dcc3ecf0 100644
--- a/docs/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md
+++ b/docs/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md
@@ -19,8 +19,8 @@ If the registry has certs, follow [this K3s documentation](https://rancher.com/d
:::
-{{% tabs %}}
-{{% tab "Linux Only Clusters" %}}
+
+
For Rancher servers that will only provision Linux clusters, these are the steps to populate your private registry.
@@ -119,8 +119,8 @@ The `rancher-images.txt` is expected to be on the workstation in the same direct
```plain
./rancher-load-images.sh --image-list ./rancher-images.txt --registry
```
-{{% /tab %}}
-{{% tab "Linux and Windows Clusters" %}}
+
+
For Rancher servers that will provision Linux and Windows clusters, there are separate steps to populate your private registry for the Windows images and the Linux images. Since a Windows cluster is a mix of Linux and Windows nodes, the Linux images pushed to the private registry are manifests.
@@ -305,8 +305,8 @@ The image list, `rancher-images.txt` or `rancher-windows-images.txt`, is expecte
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
### [Next step for Kubernetes Installs - Launch a Kubernetes Cluster]({{}}/rancher/v2.6/en/installation/other-installation-methods/air-gap/launch-kubernetes/)
diff --git a/docs/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md b/docs/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md
index 396d5f8c346..11b88c75ce3 100644
--- a/docs/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md
+++ b/docs/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md
@@ -11,8 +11,8 @@ The infrastructure depends on whether you are installing Rancher on a K3s Kubern
Rancher can be installed on any Kubernetes cluster. The RKE and K3s Kubernetes infrastructure tutorials below are still included for convenience.
-{{% tabs %}}
-{{% tab "K3s" %}}
+
+
We recommend setting up the following infrastructure for a high-availability installation:
- **Two Linux nodes,** typically virtual machines, in the infrastructure provider of your choice.
@@ -85,8 +85,8 @@ Rancher supports air gap installs using a private registry. You must have your o
In a later step, when you set up your K3s Kubernetes cluster, you will create a [private registries configuration file]({{}}/k3s/latest/en/installation/private-registry/) with details from this registry.
If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry)
-{{% /tab %}}
-{{% tab "RKE" %}}
+
+
To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure:
@@ -152,8 +152,8 @@ In a later step, when you set up your RKE Kubernetes cluster, you will create a
If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry)
-{{% /tab %}}
-{{% tab "Docker" %}}
+
+
:::note Notes:
@@ -177,7 +177,7 @@ Rancher supports air gap installs using a Docker private registry on your bastio
If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/)
-{{% /tab %}}
-{{% /tabs %}}
+
+
### [Next: Collect and Publish Images to your Private Registry]({{}}/rancher/v2.6/en/installation/other-installation-methods/air-gap/populate-private-registry/)
diff --git a/docs/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md b/docs/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md
index bc3cd99c0d9..b645833ffad 100644
--- a/docs/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md
+++ b/docs/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md
@@ -133,8 +133,8 @@ To see the command to use when starting the new Rancher server container, choose
- Docker Upgrade
- Docker Upgrade for Air Gap Installs
-{{% tabs %}}
-{{% tab "Docker Upgrade" %}}
+
+
Select the option you used when you originally installed Rancher server:
@@ -265,8 +265,8 @@ Privileged access is [required.]({{}}/rancher/v2.6/en/installation/othe
{{% /accordion %}}
-{{% /tab %}}
-{{% tab "Docker Air Gap Upgrade" %}}
+
+
For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.
@@ -371,8 +371,8 @@ docker run -d --volumes-from rancher-data \
```
Privileged access is [required.]({{}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)
{{% /accordion %}}
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:** You have upgraded Rancher. Data from your upgraded server is now saved to the `rancher-data` container for use in future upgrades.
diff --git a/docs/en/installation/requirements/ports/ports.md b/docs/en/installation/requirements/ports/ports.md
index f503899549a..bca1b98dc84 100644
--- a/docs/en/installation/requirements/ports/ports.md
+++ b/docs/en/installation/requirements/ports/ports.md
@@ -300,8 +300,8 @@ When using the [AWS EC2 node driver]({{}}/rancher/v2.6/en/cluster-provi
SUSE Linux may have a firewall that blocks all ports by default. To open the ports needed for adding the host to a custom cluster,
-{{% tabs %}}
-{{% tab "SLES 15 / openSUSE Leap 15" %}}
+
+
1. SSH into the instance.
1. Start YaST in text mode:
```
@@ -319,8 +319,8 @@ UDP Ports
1. When all required ports are entered, select **Accept**.
-{{% /tab %}}
-{{% tab "SLES 12 / openSUSE Leap 42" %}}
+
+
1. SSH into the instance.
1. Edit `/etc/sysconfig/SuSEfirewall2` and open the required ports. In this example, ports 9796 and 10250 are also opened for monitoring:
```
@@ -332,7 +332,7 @@ UDP Ports
```
SuSEfirewall2
```
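A sketch of what the edited lines in `/etc/sysconfig/SuSEfirewall2` might look like; the variable names follow SuSEfirewall2's config format, and the port list is just this example's, not a complete requirements list:

```shell
# Illustrative SuSEfirewall2 settings — append all ports your cluster requires.
FW_SERVICES_EXT_TCP="9796 10250"
FW_SERVICES_EXT_UDP="8472"
echo "TCP: ${FW_SERVICES_EXT_TCP}  UDP: ${FW_SERVICES_EXT_UDP}"
```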
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:** The node has the open ports required to be added to a custom cluster.
diff --git a/docs/en/installation/resources/choosing-version/choosing-version.md b/docs/en/installation/resources/choosing-version/choosing-version.md
index 6186082cd08..2fa4cfe9f12 100644
--- a/docs/en/installation/resources/choosing-version/choosing-version.md
+++ b/docs/en/installation/resources/choosing-version/choosing-version.md
@@ -9,8 +9,8 @@ For a high-availability installation of Rancher, which is recommended for produc
For Docker installations of Rancher, which is used for development and testing, you will install Rancher as a **Docker image**.
-{{% tabs %}}
-{{% tab "Helm Charts" %}}
+
+
When Rancher server is [installed on a Kubernetes cluster]({{}}/rancher/v2.6/en/installation/install-rancher-on-k8s/), it is installed, upgraded, and rolled back using a Helm chart. Therefore, as you prepare to install or upgrade a high-availability Rancher configuration, you must add a Helm chart repository that contains the charts for installing Rancher.
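Adding a chart repository can be sketched as follows; the stable-channel URL shown is the commonly documented one, but verify it against the Rancher docs for your version:

```shell
CHANNEL="stable"   # also available: latest, alpha
REPO_URL="https://releases.rancher.com/server-charts/${CHANNEL}"
# Printed rather than executed; run the printed commands with Helm:
echo "helm repo add rancher-${CHANNEL} ${REPO_URL}"
echo "helm repo update"
```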
@@ -81,8 +81,8 @@ Because the rancher-alpha repository contains only alpha charts, switching betwe
```
4. Continue to follow the steps to [upgrade Rancher]({{}}/rancher/v2.6/en/installation/install-rancher-on-k8s/upgrades) from the new Helm chart repository.
-{{% /tab %}}
-{{% tab "Docker Images" %}}
+
+
When performing [Docker installs]({{}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker), upgrades, or rollbacks, you can use _tags_ to install a specific version of Rancher.
### Server Tags
Rancher Server is distributed as a Docker image, which has tags attached to it.
:::
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/docs/en/installation/resources/k8s-tutorials/gke/gke.md b/docs/en/installation/resources/k8s-tutorials/gke/gke.md
index 0516e4aea69..15748bfee03 100644
--- a/docs/en/installation/resources/k8s-tutorials/gke/gke.md
+++ b/docs/en/installation/resources/k8s-tutorials/gke/gke.md
@@ -69,8 +69,8 @@ To install `gcloud` and `kubectl`, perform the following steps:
- Using gcloud init, if you want to be walked through setting defaults.
- Using gcloud config, to individually set your project ID, zone, and region.
-{{% tabs %}}
-{{% tab "Using gcloud init" %}}
+
+
1. Run gcloud init and follow the directions:
@@ -84,10 +84,10 @@ To install `gcloud` and `kubectl`, perform the following steps:
```
2. Follow the instructions to authorize gcloud to use your Google Cloud account and select the new project that you created.
-{{% /tab %}}
-{{% tab "Using gcloud config" %}}
-{{% /tab %}}
-{{% /tabs %}}
+
+
+
+
# 4. Confirm that gcloud is configured correctly
diff --git a/docs/en/monitoring-alerting/configuration/receiver/receiver.md b/docs/en/monitoring-alerting/configuration/receiver/receiver.md
index c005fffdb0d..aafb15d7c3d 100644
--- a/docs/en/monitoring-alerting/configuration/receiver/receiver.md
+++ b/docs/en/monitoring-alerting/configuration/receiver/receiver.md
@@ -38,8 +38,8 @@ This section assumes familiarity with how monitoring components work together. F
To create notification receivers in the Rancher UI,
-{{% tabs %}}
-{{% tab "Rancher v2.6.5+" %}}
+
+
1. Go to the cluster where you want to create receivers. Click **Monitoring -> Alerting -> AlertManagerConfigs**.
1. Click **Create**.
@@ -48,16 +48,16 @@ To create notification receivers in the Rancher UI,
1. Configure one or more providers for the receiver. For help filling out the forms, refer to the configuration options below.
1. Click **Create**.
-{{% /tab %}}
-{{% tab "Rancher before v2.6.5" %}}
+
+
1. Go to the cluster where you want to create receivers. Click **Monitoring** and click **Receiver**.
2. Enter a name for the receiver.
3. Configure one or more providers for the receiver. For help filling out the forms, refer to the configuration options below.
4. Click **Create**.
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:** Alerts can be configured to send notifications to the receiver(s).
diff --git a/docs/en/monitoring-alerting/configuration/route/route.md b/docs/en/monitoring-alerting/configuration/route/route.md
index a80315dddbb..dded885835d 100644
--- a/docs/en/monitoring-alerting/configuration/route/route.md
+++ b/docs/en/monitoring-alerting/configuration/route/route.md
@@ -46,8 +46,8 @@ The route needs to refer to a [receiver](#receiver-configuration) that has alrea
### Grouping
-{{% tabs %}}
-{{% tab "Rancher v2.6.5+" %}}
+
+
:::note
@@ -62,8 +62,8 @@ As of Rancher v2.6.5, `Group By` now accepts a list of strings instead of key-va
| Group Interval | 5m | How long to wait before sending an alert that has been added to a group of alerts for which an initial notification has already been sent. |
| Repeat Interval | 4h | How long to wait before re-sending a given alert that has already been sent. |
-{{% /tab %}}
-{{% tab "Rancher before v2.6.5" %}}
+
+
| Field | Default | Description |
|-------|--------------|---------|
@@ -72,8 +72,8 @@ As of Rancher v2.6.5, `Group By` now accepts a list of strings instead of key-va
| Group Interval | 5m | How long to wait before sending an alert that has been added to a group of alerts for which an initial notification has already been sent. |
| Repeat Interval | 4h | How long to wait before re-sending a given alert that has already been sent. |
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/docs/en/monitoring-alerting/guides/persist-grafana/persist-grafana.md b/docs/en/monitoring-alerting/guides/persist-grafana/persist-grafana.md
index d9d3fecea3d..ab4f5661026 100644
--- a/docs/en/monitoring-alerting/guides/persist-grafana/persist-grafana.md
+++ b/docs/en/monitoring-alerting/guides/persist-grafana/persist-grafana.md
@@ -10,8 +10,8 @@ To allow the Grafana dashboard to persist after the Grafana instance restarts, a
# Creating a Persistent Grafana Dashboard
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
:::note Prerequisites:
@@ -89,8 +89,8 @@ grafana.sidecar.dashboards.searchNamespace=ALL
Note that the RBAC roles exposed by the Monitoring chart to add Grafana Dashboards are still restricted to giving permissions for users to add dashboards in the namespace defined in `grafana.dashboards.namespace`, which defaults to `cattle-dashboards`.
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
:::note Prerequisites:
@@ -139,8 +139,8 @@ To prevent the persistent dashboard from being deleted when Monitoring v2 is uni
helm.sh/resource-policy: "keep"
```
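Applied from the command line, the same annotation might be added like this; the ConfigMap name is a hypothetical placeholder, and the namespace is the default dashboard namespace mentioned above:

```shell
NS="cattle-dashboards"   # default dashboard namespace
CM="my-dashboard"        # hypothetical dashboard ConfigMap name
# Printed rather than executed; run the printed command with kubectl:
echo "kubectl -n ${NS} annotate configmap ${CM} helm.sh/resource-policy=keep"
```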
-{{% /tab %}}
-{{% /tabs %}}
+
+
# Known Issues
diff --git a/docs/en/pipelines/pipelines.md b/docs/en/pipelines/pipelines.md
index 8252a6bb7fc..aba0f2dea83 100644
--- a/docs/en/pipelines/pipelines.md
+++ b/docs/en/pipelines/pipelines.md
@@ -119,8 +119,8 @@ Before you can start configuring a pipeline for your repository, you must config
Select your provider's tab below and follow the directions.
-{{% tabs %}}
-{{% tab "GitHub" %}}
+
+
1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
@@ -132,8 +132,8 @@ Select your provider's tab below and follow the directions.
1. If you're using GitHub Enterprise, select **Use a private github enterprise installation**. Enter the host address of your GitHub installation.
1. Click **Authenticate**.
-{{% /tab %}}
-{{% tab "GitLab" %}}
+
+
1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
@@ -153,8 +153,8 @@ Select your provider's tab below and follow the directions.
:::
-{{% /tab %}}
-{{% tab "Bitbucket Cloud" %}}
+
+
1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
@@ -166,8 +166,8 @@ Select your provider's tab below and follow the directions.
1. From Bitbucket, copy the consumer **Key** and **Secret**. Paste them into Rancher.
1. Click **Authenticate**.
-{{% /tab %}}
-{{% tab "Bitbucket Server" %}}
+
+
1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
@@ -188,8 +188,8 @@ Bitbucket server needs to do SSL verification when sending webhooks to Rancher.
:::
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:** After the version control provider is authenticated, you will be automatically redirected to start configuring which repositories you want to use with a pipeline.
diff --git a/docs/en/pipelines/storage/storage.md b/docs/en/pipelines/storage/storage.md
index 6e052bb5146..30f101648f8 100644
--- a/docs/en/pipelines/storage/storage.md
+++ b/docs/en/pipelines/storage/storage.md
@@ -27,8 +27,8 @@ This section assumes that you understand how persistent storage works in Kuberne
- **Add Volume > Use an existing persistent volume (claim)**
1. Complete the form that displays to choose a persistent volume for the internal Docker registry.
-{{% tabs %}}
-{{% tab "Add a new persistent volume" %}}
+
+
1. Enter a **Name** for the volume claim.
@@ -41,9 +41,9 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
+
-{{% tab "Use an existing persistent volume" %}}
+
1. Enter a **Name** for the volume claim.
@@ -53,9 +53,9 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
+
-{{% /tabs %}}
+
1. From the **Mount Point** field, enter `/var/lib/registry`, which is the data storage path inside the Docker registry container.
@@ -74,9 +74,9 @@ This section assumes that you understand how persistent storage works in Kuberne
- **Add Volume > Use an existing persistent volume (claim)**
1. Complete the form that displays to choose a persistent volume for the internal Docker registry.
-{{% tabs %}}
+
-{{% tab "Add a new persistent volume" %}}
+
1. Enter a **Name** for the volume claim.
@@ -89,8 +89,8 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
-{{% tab "Use an existing persistent volume" %}}
+
+
1. Enter a **Name** for the volume claim.
@@ -100,8 +100,8 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
-{{% /tabs %}}
+
+
1. From the **Mount Point** field, enter `/data`, which is the data storage path inside the Minio container.
diff --git a/docs/en/quick-start-guide/deployment/quickstart-manual-setup/quickstart-manual-setup.md b/docs/en/quick-start-guide/deployment/quickstart-manual-setup/quickstart-manual-setup.md
index 28315797069..418f9584310 100644
--- a/docs/en/quick-start-guide/deployment/quickstart-manual-setup/quickstart-manual-setup.md
+++ b/docs/en/quick-start-guide/deployment/quickstart-manual-setup/quickstart-manual-setup.md
@@ -28,15 +28,15 @@ Save the IP of the Linux machine.
The kubeconfig file is important for accessing the Kubernetes cluster. Copy the file at `/etc/rancher/k3s/k3s.yaml` from the Linux machine and save it to your local workstation as `~/.kube/config`. One way to do this is to use the `scp` tool and run this command on your local machine:
-{{% tabs %}}
-{{% tab "Mac and Linux" %}}
+
+
```
scp root@:/etc/rancher/k3s/k3s.yaml ~/.kube/config
```
-{{% /tab %}}
-{{% tab "Windows" %}}
+
+
By default, `scp` is not a recognized command, so we need to install a module first.
@@ -50,15 +50,15 @@ Install-Module Posh-SSH
scp root@:/etc/rancher/k3s/k3s.yaml $env:USERPROFILE\.kube\config
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
## Edit the Rancher server URL in the kubeconfig
In the kubeconfig file, you will need to change the value of the `server` field to `:6443`. The Kubernetes API server will be reached at port 6443, while the Rancher server will be reached at ports 80 and 443. This edit is needed so that when you run Helm or kubectl commands from your local workstation, you will be able to communicate with the Kubernetes cluster that Rancher will be installed on.
-{{% tabs %}}
-{{% tab "Mac and Linux" %}}
+
+
One way to open the kubeconfig file for editing is to use Vim:
@@ -68,8 +68,8 @@ vi ~/.kube/config
Press `i` to put Vim in insert mode. When you are finished editing, press `Esc` to exit insert mode, then type `:wq` and press `Enter` to save your work and quit.
-{{% /tab %}}
-{{% tab "Windows" %}}
+
+
In Windows Powershell, you can use `notepad.exe` for editing the kubeconfig file:
@@ -80,8 +80,8 @@ notepad.exe $env:USERPROFILE\.kube\config
Once edited, either press `ctrl+s` or go to `File > Save` to save your work.
-{{% /tab %}}
-{{% /tabs %}}
+
+
## Install Rancher with Helm
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/admin-settings/k8s-metadata/k8s-metadata.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/admin-settings/k8s-metadata/k8s-metadata.md
index 40233b35676..2f4b8f5d39e 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/admin-settings/k8s-metadata/k8s-metadata.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/admin-settings/k8s-metadata/k8s-metadata.md
@@ -43,8 +43,8 @@ The RKE metadata config controls how often Rancher syncs metadata and where it d
The way that the metadata is configured depends on the Rancher version.
-{{% tabs %}}
-{{% tab "Rancher v2.4+" %}}
+
+
To edit the metadata config in Rancher,
1. Go to the **Global** view and click the **Settings** tab.
@@ -57,8 +57,8 @@ To edit the metadata config in Rancher,
If you don't have an air gap setup, you don't need to specify the URL where Rancher gets the metadata, because the default setting is to pull from [Rancher's metadata Git repository.](https://github.com/rancher/kontainer-driver-metadata/blob/dev-v2.5/data/data.json)
However, if you have an [air gap setup,](#air-gap-setups) you will need to mirror the Kubernetes metadata repository in a location available to Rancher. Then you need to change the URL to point to the new location of the JSON file.
-{{% /tab %}}
-{{% tab "Rancher v2.3" %}}
+
+
To edit the metadata config in Rancher,
1. Go to the **Global** view and click the **Settings** tab.
@@ -72,8 +72,8 @@ To edit the metadata config in Rancher,
If you don't have an air gap setup, you don't need to specify the URL or Git branch where Rancher gets the metadata, because the default setting is to pull from [Rancher's metadata Git repository.](https://github.com/rancher/kontainer-driver-metadata.git)
However, if you have an [air gap setup,](#air-gap-setups) you will need to mirror the Kubernetes metadata repository in a location available to Rancher. Then you need to change the URL and Git branch in the `rke-metadata-config` settings to point to the new location of the repository.
-{{% /tab %}}
-{{% /tabs %}}
+
+
### Air Gap Setups
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/admin-settings/rbac/default-custom-roles/default-custom-roles.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/admin-settings/rbac/default-custom-roles/default-custom-roles.md
index 3496070b024..d6e35f51708 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/admin-settings/rbac/default-custom-roles/default-custom-roles.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/admin-settings/rbac/default-custom-roles/default-custom-roles.md
@@ -30,8 +30,8 @@ While Rancher comes out-of-the-box with a set of default user roles, you can als
The steps to add custom roles differ depending on the version of Rancher.
-{{% tabs %}}
-{{% tab "Rancher v2.0.7+" %}}
+
+
1. From the **Global** view, select **Security > Roles** from the main menu.
@@ -60,8 +60,8 @@ The steps to add custom roles differ depending on the version of Rancher.
1. Click **Create**.
-{{% /tab %}}
-{{% tab "Rancher before v2.0.7" %}}
+
+
1. From the **Global** view, select **Security > Roles** from the main menu.
@@ -93,8 +93,8 @@ The steps to add custom roles differ depending on the version of Rancher.
1. Click **Create**.
-{{% /tab %}}
-{{% /tabs %}}
+
+
## Creating a Custom Global Role
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/backups/backup/rke-backups/rke-backups.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/backups/backup/rke-backups/rke-backups.md
index a85625de79b..491c86bd227 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/backups/backup/rke-backups/rke-backups.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/backups/backup/rke-backups/rke-backups.md
@@ -67,8 +67,8 @@ To take recurring snapshots, enable the `etcd-snapshot` service, which is a serv
The steps to enable recurring snapshots differ based on the version of RKE.
-{{% tabs %}}
-{{% tab "RKE v0.2.0+" %}}
+
+
1. Open `rancher-cluster.yml` with your favorite text editor.
2. Edit the code for the `etcd` service to enable recurring snapshots. Snapshots can be saved in a S3 compatible backend.
@@ -101,8 +101,8 @@ The steps to enable recurring snapshots differ based on the version of RKE.
```
**Result:** RKE is configured to take recurring snapshots of `etcd` on all nodes running the `etcd` role. Snapshots are saved locally to the following directory: `/opt/rke/etcd-snapshots/`. If configured, the snapshots are also uploaded to your S3 compatible backend.
-{{% /tab %}}
-{{% tab "RKE v0.1.x" %}}
+
+
1. Open `rancher-cluster.yml` with your favorite text editor.
2. Edit the code for the `etcd` service to enable recurring snapshots.
@@ -122,8 +122,8 @@ The steps to enable recurring snapshots differ based on the version of RKE.
```
**Result:** RKE is configured to take recurring snapshots of `etcd` on all nodes running the `etcd` role. Snapshots are saved locally to the following directory: `/opt/rke/etcd-snapshots/`.
-{{% /tab %}}
-{{% /tabs %}}
+
+
### Option B: One-Time Snapshots
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/backing-up-etcd/backing-up-etcd.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/backing-up-etcd/backing-up-etcd.md
index 5cac3d1bab0..3afb8ff728b 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/backing-up-etcd/backing-up-etcd.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/backing-up-etcd/backing-up-etcd.md
@@ -27,8 +27,8 @@ This section covers the following topics:
# How Snapshots Work
-{{% tabs %}}
-{{% tab "Rancher v2.4.0+" %}}
+
+
### Snapshot Components
@@ -84,8 +84,8 @@ On restore, the following process is used:
4. The other etcd nodes download the snapshot and validate the checksum so that they all use the same snapshot for the restore.
5. The cluster is restored and post-restore actions will be done in the cluster.
-{{% /tab %}}
-{{% tab "Rancher before v2.4.0" %}}
+
+
When Rancher creates a snapshot, only the etcd data is included in the snapshot.
Because the Kubernetes version is not included in the snapshot, there is no option to restore a cluster to a different Kubernetes version.
@@ -128,8 +128,8 @@ On restore, the following process is used:
4. The other etcd nodes download the snapshot and validate the checksum so that they all use the same snapshot for the restore.
5. The cluster is restored and post-restore actions will be done in the cluster.
-{{% /tab %}}
-{{% /tabs %}}
+
+
# Configuring Recurring Snapshots
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md
index 3fe9ad5797f..4b23ea80248 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md
@@ -55,8 +55,8 @@ For imported clusters, the process for removing Rancher is a little different. Y
After the imported cluster is detached from Rancher, the cluster's workloads will be unaffected and you can access the cluster using the same methods that you did before the cluster was imported into Rancher.
-{{% tabs %}}
-{{% tab "By UI / API" %}}
+
+
>**Warning:** This process will remove data from your cluster. Make sure you have created a backup of files you want to keep before executing the command, as data will be lost.
After you initiate the removal of an imported cluster using the Rancher UI (or API), the following events occur.
@@ -69,8 +69,8 @@ After you initiate the removal of an imported cluster using the Rancher UI (or A
**Result:** All components listed for imported clusters in [What Gets Removed?](#what-gets-removed) are deleted.
-{{% /tab %}}
-{{% tab "By Script" %}}
+
+
Rather than cleaning imported cluster nodes using the Rancher UI, you can run a script. This functionality has been available since `v2.1.0`.
>**Prerequisite:**
@@ -100,8 +100,8 @@ Rather than cleaning imported cluster nodes using the Rancher UI, you can run a
**Result:** The script runs. All components listed for imported clusters in [What Gets Removed?](#what-gets-removed) are deleted.
-{{% /tab %}}
-{{% /tabs %}}
+
+
### Windows Nodes
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/nodes/nodes.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/nodes/nodes.md
index 7311650ae0d..4d2b148cc1c 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/nodes/nodes.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/nodes/nodes.md
@@ -130,8 +130,8 @@ However, you can override the conditions draining when you initiate the drain. Y
The node draining options are different based on your version of Rancher.
-{{% tabs %}}
-{{% tab "Rancher v2.2.x+" %}}
+
+
There are two drain modes: aggressive and safe.
- **Aggressive Mode**
@@ -143,8 +143,8 @@ There are two drain modes: aggressive and safe.
- **Safe Mode**
If a node has standalone pods or ephemeral data, it will be cordoned but not drained.
-{{% /tab %}}
-{{% tab "Rancher before v2.2.x" %}}
+
+
The following list describes each drain option:
@@ -159,8 +159,8 @@ The following list describes each drain option:
- **Even if there are pods using emptyDir**
If a pod uses emptyDir to store local data, you might not be able to safely delete it, since the data in the emptyDir will be deleted once the pod is removed from the node. Similar to the first option, Kubernetes expects the implementation to decide what to do with these pods. Choosing this option will delete these pods.
-{{% /tab %}}
-{{% /tabs %}}
+
+
### Grace Period
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/restoring-etcd/restoring-etcd.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/restoring-etcd/restoring-etcd.md
index 2d795e8bb09..8da98cbcb77 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/restoring-etcd/restoring-etcd.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/restoring-etcd/restoring-etcd.md
@@ -32,8 +32,8 @@ If your Kubernetes cluster is broken, you can restore the cluster from a snapsho
The restore process changed in Rancher v2.4.0.
-{{% tabs %}}
-{{% tab "Rancher v2.4.0+" %}}
+
+
Snapshots are composed of the cluster data in etcd, the Kubernetes version, and the cluster configuration in the `cluster.yml`. These components allow you to select from the following options when restoring a cluster from a snapshot:
@@ -57,8 +57,8 @@ When rolling back to a prior Kubernetes version, the [upgrade strategy options](
**Result:** The cluster will go into `updating` state and the process of restoring the `etcd` nodes from the snapshot will start. The cluster is restored when it returns to an `active` state.
-{{% /tab %}}
-{{% tab "Rancher before v2.4.0" %}}
+
+
> **Prerequisites:**
>
@@ -75,8 +75,8 @@ When rolling back to a prior Kubernetes version, the [upgrade strategy options](
**Result:** The cluster will go into `updating` state and the process of restoring the `etcd` nodes from the snapshot will start. The cluster is restored when it returns to an `active` state.
-{{% /tab %}}
-{{% /tabs %}}
+
+
## Recovering etcd without a Snapshot
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/upgrading-kubernetes/upgrading-kubernetes.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/upgrading-kubernetes/upgrading-kubernetes.md
index 51e7fa4b58a..6638dadbb8d 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/upgrading-kubernetes/upgrading-kubernetes.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-admin/upgrading-kubernetes/upgrading-kubernetes.md
@@ -44,8 +44,8 @@ In this section of the [RKE documentation,]({{}}/rke/latest/en/upgrades
# Recommended Best Practice for Upgrades
-{{% tabs %}}
-{{% tab "Rancher v2.4+" %}}
+
+
When upgrading the Kubernetes version of a cluster, we recommend that you:
1. Take a snapshot.
@@ -53,8 +53,8 @@ When upgrading the Kubernetes version of a cluster, we recommend that you:
1. If the upgrade fails, revert the cluster to the pre-upgrade Kubernetes version. This is achieved by selecting the **Restore etcd and Kubernetes version** option. This will return your cluster to the pre-upgrade Kubernetes version before restoring the etcd snapshot.
The restore operation will work on a cluster that is not in a healthy or active state.
-{{% /tab %}}
-{{% tab "Rancher before v2.4" %}}
+
+
When upgrading the Kubernetes version of a cluster, we recommend that you:
1. Take a snapshot.
@@ -62,8 +62,8 @@ When upgrading the Kubernetes version of a cluster, we recommend that you:
1. If the upgrade fails, restore the cluster from the etcd snapshot.
The cluster cannot be downgraded to a previous Kubernetes version.
-{{% /tab %}}
-{{% /tabs %}}
+
+
# Upgrading the Kubernetes Version
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/node-requirements/node-requirements.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/node-requirements/node-requirements.md
index 81b0972581d..92da41e7e50 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/node-requirements/node-requirements.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/node-requirements/node-requirements.md
@@ -38,8 +38,8 @@ SUSE Linux may have a firewall that blocks all ports by default. In that situati
When [Launching Kubernetes with Rancher]({{}}/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/) using Flatcar Container Linux nodes, it is required to use the following configuration in the [Cluster Config File]({{}}/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/options/#cluster-config-file)
-{{% tabs %}}
-{{% tab "Canal"%}}
+
+
```yaml
rancher_kubernetes_engine_config:
@@ -54,9 +54,9 @@ rancher_kubernetes_engine_config:
extra_args:
flex-volume-plugin-dir: /opt/kubernetes/kubelet-plugins/volume/exec/
```
-{{% /tab %}}
+
-{{% tab "Calico"%}}
+
```yaml
rancher_kubernetes_engine_config:
@@ -71,8 +71,8 @@ rancher_kubernetes_engine_config:
extra_args:
flex-volume-plugin-dir: /opt/kubernetes/kubelet-plugins/volume/exec/
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
You must also enable the Docker service. You can enable the Docker service using the following command:
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/azure/azure-node-template-config/azure-node-template-config.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/azure/azure-node-template-config/azure-node-template-config.md
index 1c2db8c79cf..dc62e511f4e 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/azure/azure-node-template-config/azure-node-template-config.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/azure/azure-node-template-config/azure-node-template-config.md
@@ -5,8 +5,8 @@ weight: 1
For more information about Azure, refer to the official [Azure documentation.](https://docs.microsoft.com/en-us/azure/?product=featured)
-{{% tabs %}}
-{{% tab "Rancher v2.2.0+" %}}
+
+
Account access information is stored as a cloud credential. Cloud credentials are stored as Kubernetes secrets. Multiple node templates can use the same cloud credential. You can use an existing cloud credential or create a new one.
@@ -21,8 +21,8 @@ The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-d
- **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon
- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/)
-{{% /tab %}}
-{{% tab "Rancher before v2.2.0" %}}
+
+
- **Account Access** stores your account information for authenticating with Azure.
- **Placement** sets the geographical region where your cluster is hosted and other location metadata.
@@ -35,5 +35,5 @@ The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-d
- **Docker Engine Install URL:** Determines what Docker version will be installed on the instance.
- **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon
- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/)
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/azure/azure.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/azure/azure.md
index 263db5c0d39..92c3839a66c 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/azure/azure.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/azure/azure.md
@@ -45,8 +45,8 @@ The creation of this service principal returns three pieces of identification in
# Creating an Azure Cluster
-{{%tabs %}}
-{{% tab "Rancher v2.2.0+" %}}
+
+
1. [Create your cloud credentials](#1-create-your-cloud-credentials)
2. [Create a node template with your cloud credentials](#2-create-a-node-template-with-your-cloud-credentials)
@@ -94,8 +94,8 @@ You can access your cluster after its state is updated to **Active.**
- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
-{{% /tab %}}
-{{% tab "Rancher before v2.2.0" %}}
+
+
Use Rancher to create a Kubernetes cluster in Azure.
@@ -118,8 +118,8 @@ You can access your cluster after its state is updated to **Active.**
- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
-{{% /tab %}}
-{{% /tabs %}}
+
+
### Optional Next Steps
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/digital-ocean.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/digital-ocean.md
index c22ef453172..ba3dd4d6cd3 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/digital-ocean.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/digital-ocean.md
@@ -11,8 +11,8 @@ First, you will set up your DigitalOcean cloud credentials in Rancher. Then you
Then you will create a DigitalOcean cluster in Rancher, and when configuring the new cluster, you will define node pools for it. Each node pool will have a Kubernetes role of etcd, controlplane, or worker. Rancher will install RKE Kubernetes on the new nodes, and it will set up each node with the Kubernetes role defined by the node pool.
-{{% tabs %}}
-{{% tab "Rancher v2.2.0+" %}}
+
+
1. [Create your cloud credentials](#1-create-your-cloud-credentials)
2. [Create a node template with your cloud credentials](#2-create-a-node-template-with-your-cloud-credentials)
3. [Create a cluster with node pools using the node template](#3-create-a-cluster-with-node-pools-using-the-node-template)
@@ -57,8 +57,8 @@ You can access your cluster after its state is updated to **Active.**
- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
-{{% /tab %}}
-{{% tab "Rancher before v2.2.0" %}}
+
+
1. From the **Clusters** page, click **Add Cluster**.
1. Choose **DigitalOcean**.
@@ -78,8 +78,8 @@ You can access your cluster after its state is updated to **Active.**
- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
-{{% /tab %}}
-{{% /tabs %}}
+
+
# Optional Next Steps
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/do-node-template-config/do-node-template-config.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/do-node-template-config/do-node-template-config.md
index 4d9a0066f42..46396fa4aa3 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/do-node-template-config/do-node-template-config.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/do-node-template-config/do-node-template-config.md
@@ -3,8 +3,8 @@ title: DigitalOcean Node Template Configuration
weight: 1
----
-{{% tabs %}}
-{{% tab "Rancher v2.2.0+" %}}
+
+
Account access information is stored as a cloud credential. Cloud credentials are stored as Kubernetes secrets. Multiple node templates can use the same cloud credential. You can use an existing cloud credential or create a new one.
@@ -20,8 +20,8 @@ The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-d
- **Docker Engine Install URL:** Determines what Docker version will be installed on the instance.
- **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon
- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/)
-{{% /tab %}}
-{{% tab "Rancher before v2.2.0" %}}
+
+
### Access Token
@@ -39,5 +39,5 @@ The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-d
- **Docker Engine Install URL:** Determines what Docker version will be installed on the instance.
- **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon
- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/)
-{{% /tab %}}
-{{% /tabs %}}
\ No newline at end of file
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/ec2-node-template-config.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/ec2-node-template-config.md
index 4b7110fe78f..9b2e356973e 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/ec2-node-template-config.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/ec2-node-template-config.md
@@ -5,8 +5,8 @@ weight: 1
For more details about EC2 nodes, refer to the official documentation for the [EC2 Management Console](https://aws.amazon.com/ec2).
-{{% tabs %}}
-{{% tab "Rancher v2.2.0+" %}}
+
+
### Region
@@ -48,8 +48,8 @@ If you need to pass an **IAM Instance Profile Name** (not ARN), for example, whe
In the **Engine Options** section of the node template, you can configure the Docker daemon. You may want to specify the docker version or a Docker registry mirror.
-{{% /tab %}}
-{{% tab "Rancher before v2.2.0" %}}
+
+
### Account Access
@@ -95,5 +95,5 @@ The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-d
- **Docker Engine Install URL:** Determines what Docker version will be installed on the instance.
- **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon
- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/)
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2.md
index b320cfc9f4a..d20de3383df 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2.md
@@ -25,8 +25,8 @@ Then you will create an EC2 cluster in Rancher, and when configuring the new clu
The steps to create a cluster differ based on your Rancher version.
-{{% tabs %}}
-{{% tab "Rancher v2.2.0+" %}}
+
+
1. [Create your cloud credentials](#1-create-your-cloud-credentials)
2. [Create a node template with your cloud credentials and information from EC2](#2-create-a-node-template-with-your-cloud-credentials-and-information-from-ec2)
@@ -75,8 +75,8 @@ You can access your cluster after its state is updated to **Active.**
- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
-{{% /tab %}}
-{{% tab "Rancher before v2.2.0" %}}
+
+
1. From the **Clusters** page, click **Add Cluster**.
1. Choose **Amazon EC2**.
@@ -99,8 +99,8 @@ You can access your cluster after its state is updated to **Active.**
- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
-{{% /tab %}}
-{{% /tabs %}}
+
+
### Optional Next Steps
After creating your cluster, you can access it through the Rancher UI. As a best practice, we recommend setting up these alternate ways of accessing your cluster:
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/provisioning-vsphere-clusters.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/provisioning-vsphere-clusters.md
index d299c958c4c..94af7a252be 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/provisioning-vsphere-clusters.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/provisioning-vsphere-clusters.md
@@ -50,8 +50,8 @@ If you have a cluster with DRS enabled, setting up [VM-VM Affinity Rules](https:
The way a vSphere cluster is created in Rancher depends on the Rancher version.
-{{% tabs %}}
-{{% tab "Rancher v2.2.0+" %}}
+
+
1. [Create your cloud credentials](#1-create-your-cloud-credentials)
2. [Create a node template with your cloud credentials](#2-create-a-node-template-with-your-cloud-credentials)
3. [Create a cluster with node pools using the node template](#3-create-a-cluster-with-node-pools-using-the-node-template)
@@ -101,8 +101,8 @@ You can access your cluster after its state is updated to **Active.**
- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
-{{% /tab %}}
-{{% tab "Rancher before v2.2.0" %}}
+
+
Use Rancher to create a Kubernetes cluster in vSphere.
@@ -130,8 +130,8 @@ You can access your cluster after its state is updated to **Active.**
- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/helm-charts/launching-apps/launching-apps.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/helm-charts/launching-apps/launching-apps.md
index e3af01f5d48..b269c0c432e 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/helm-charts/launching-apps/launching-apps.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/helm-charts/launching-apps/launching-apps.md
@@ -56,8 +56,8 @@ For each Helm chart, there are a list of desired answers that must be entered in
> For example, when entering an answer that includes two values separated by a comma (e.g., `abc, bcd`), you must wrap the values in double quotes (i.e., ``"abc, bcd"``).
-{{% tabs %}}
-{{% tab "UI" %}}
+
+
### Using a questions.yml file
@@ -67,8 +67,8 @@ If the Helm chart that you are deploying contains a `questions.yml` file, Ranche
For native Helm charts (i.e., charts from the **Helm Stable** or **Helm Incubator** catalogs or a [custom Helm chart repository]({{}}/rancher/v2.0-v2.4/en/helm-charts/legacy-catalogs/catalog-config/#custom-helm-chart-repository)), answers are provided as key-value pairs in the **Answers** section. These answers are used to override the default values.
-{{% /tab %}}
-{{% tab "Editing YAML Files" %}}
+
+
_Available as of v2.1.0_
@@ -101,5 +101,5 @@ servers[0].host=example
_Available as of v2.2.0_
You can paste that YAML formatted structure directly into the YAML editor. By allowing custom values to be set using a YAML formatted structure, Rancher can easily accommodate more complicated input values (e.g., multi-line strings, arrays, and JSON objects).
-{{% /tab %}}
-{{% /tabs %}}
\ No newline at end of file
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md
index 0e79ce86a0d..cfcdc35fe8b 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md
@@ -153,8 +153,8 @@ cert-manager-webhook-787858fcdb-nlzsq 1/1 Running 0 2m
The exact command to install Rancher differs depending on the certificate configuration.
-{{% tabs %}}
-{{% tab "Rancher-generated Certificates" %}}
+
+
By default, Rancher generates a CA and uses `cert-manager` to issue the certificate for access to the Rancher server interface.
@@ -179,8 +179,8 @@ Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are
deployment "rancher" successfully rolled out
```
-{{% /tab %}}
-{{% tab "Let's Encrypt" %}}
+
+
This option uses `cert-manager` to automatically request and renew [Let's Encrypt](https://letsencrypt.org/) certificates. This is a free service that provides you with a valid certificate, as Let's Encrypt is a trusted CA.
@@ -207,8 +207,8 @@ Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are
deployment "rancher" successfully rolled out
```
-{{% /tab %}}
-{{% tab "Certificates from Files" %}}
+
+
In this option, Kubernetes secrets are created from your own certificates for Rancher to use.
When you run this command, the `hostname` option must match the `Common Name` or a `Subject Alternative Names` entry in the server certificate, or the Ingress controller will fail to configure correctly.
@@ -239,8 +239,8 @@ helm install rancher rancher-/rancher \
```
Now that Rancher is deployed, see [Adding TLS Secrets]({{}}/rancher/v2.0-v2.4/en/installation/resources/encryption/tls-secrets/) to publish the certificate files so Rancher and the Ingress controller can use them.
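As a sketch of that step (the secret name `tls-rancher-ingress`, the namespace, and the file names are assumptions; confirm them against the Adding TLS Secrets page), the certificate files can be published as a Kubernetes TLS secret:

```shell
# Create the TLS secret from your own certificate and key files
# (names and namespace assumed; verify against the Adding TLS Secrets page)
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```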
-{{% /tab %}}
-{{% /tabs %}}
+
+
The Rancher chart configuration has many options for customizing the installation to suit your specific environment. Here are some common advanced scenarios.
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/helm2/helm2.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/helm2/helm2.md
index e0f9ac2787c..f051392dcf1 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/helm2/helm2.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/helm2/helm2.md
@@ -84,8 +84,8 @@ of your Kubernetes cluster running Rancher server. You'll use the snapshot as a
This section describes how to upgrade normal (Internet-connected) or air gap installations of Rancher with Helm.
-{{% tabs %}}
-{{% tab "Kubernetes Upgrade" %}}
+
+
Get the values, which were passed with `--set`, from the current Rancher Helm chart that is installed.
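For example, with Helm 2 the previously passed values can be inspected like this (the release name `rancher` is an assumption based on the install instructions):

```shell
# Helm 2: print the values that were overridden with --set at install time
helm get values rancher
```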
@@ -136,9 +136,9 @@ If you are currently running the cert-manager whose version is older than v0.11,
{{% /accordion %}}
-{{% /tab %}}
+
-{{% tab "Kubernetes Air Gap Upgrade" %}}
+
1. Render the Rancher template using the same chosen options that were used when installing Rancher. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.
@@ -202,8 +202,8 @@ helm template ./rancher-.tgz --output-dir . \
kubectl -n cattle-system apply -R -f ./rancher
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
### D. Verify the Upgrade
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/namespace-migration/namespace-migration.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/namespace-migration/namespace-migration.md
index 773bb97f9a0..2176e529949 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/namespace-migration/namespace-migration.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/namespace-migration/namespace-migration.md
@@ -71,8 +71,8 @@ Reset the cluster nodes' network policies to restore connectivity.
>
>Download and setup [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
-{{% tabs %}}
-{{% tab "Kubernetes Install" %}}
+
+
1. From **Terminal**, change directories to the location of the kubeconfig file that's generated during Rancher install, `kube_config_rancher-cluster.yml`. This file is usually in the directory where you ran RKE during Rancher installation.
1. Before repairing networking, run the following two commands to make sure that your nodes have a status of `Ready` and that your cluster components are `Healthy`.
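Those two checks typically look like the following (the kubeconfig file name matches the one generated above; adjust if you named it differently):

```shell
# Verify that all nodes report Ready
kubectl --kubeconfig kube_config_rancher-cluster.yml get nodes

# Verify that cluster components report Healthy
kubectl --kubeconfig kube_config_rancher-cluster.yml get componentstatuses
```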
@@ -171,8 +171,8 @@ Reset the cluster nodes' network policies to restore connectivity.
1. Log into the Rancher UI and view your clusters. Created clusters will show errors from attempting to contact Rancher while it was unavailable. However, these errors should resolve automatically.
-{{% /tab %}}
-{{% tab "Rancher Launched Kubernetes" %}}
+
+
If you can access Rancher, but one or more of the clusters that you launched using Rancher has no networking, you can repair them by moving them:
@@ -185,7 +185,7 @@ If you can access Rancher, but one or more of the clusters that you launched usi
done
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/upgrades.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/upgrades.md
index cb3cf8b9655..b5b773ee627 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/upgrades.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/upgrades.md
@@ -124,8 +124,8 @@ You'll use the backup as a restoration point if something goes wrong during upgr
This section describes how to upgrade normal (Internet-connected) or air gap installations of Rancher with Helm.
-{{% tabs %}}
-{{% tab "Kubernetes Upgrade" %}}
+
+
Get the values, which were passed with `--set`, from the current Rancher Helm chart that is installed.
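For instance, with Helm 3 (the release name `rancher` and the `cattle-system` namespace are assumptions based on the install instructions):

```shell
# Show the values currently applied to the installed Rancher release
helm get values rancher -n cattle-system
```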
@@ -186,9 +186,9 @@ If you are currently running the cert-manager whose version is older than v0.11,
--set hostname=rancher.my.org
```
-{{% /tab %}}
+
-{{% tab "Kubernetes Air Gap Upgrade" %}}
+
Render the Rancher template using the same chosen options that were used when installing Rancher. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.
@@ -252,8 +252,8 @@ Use `kubectl` to apply the rendered manifests.
kubectl -n cattle-system apply -R -f ./rancher
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
# 4. Verify the Upgrade
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/install-rancher/install-rancher.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/install-rancher/install-rancher.md
index f94562e9002..1f54e33138d 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/install-rancher/install-rancher.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/install-rancher/install-rancher.md
@@ -11,8 +11,8 @@ aliases:
This section describes how to deploy Rancher in an air gapped environment, such as one where the Rancher server will be installed offline, behind a firewall, or behind a proxy. There are _tabs_ for either a high availability (recommended) or a Docker installation.
-{{% tabs %}}
-{{% tab "Kubernetes Install (Recommended)" %}}
+
+
Rancher recommends installing Rancher on a Kubernetes cluster. A highly available Kubernetes install consists of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails.
@@ -229,8 +229,8 @@ These resources could be helpful when installing Rancher:
- [Adding TLS secrets]({{}}/rancher/v2.0-v2.4/en/installation/resources/encryption/tls-secrets/)
- [Troubleshooting Rancher Kubernetes Installations]({{}}/rancher/v2.0-v2.4/en/installation/options/troubleshooting/)
-{{% /tab %}}
-{{% tab "Docker Install" %}}
+
+
The Docker installation is for Rancher users who want to test out Rancher.
@@ -354,5 +354,5 @@ If you are installing Rancher v2.3.0+, the installation is complete.
If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{}}/rancher/v2.0-v2.4/en/installation/resources/local-system-charts/).
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md
index 232e69ee608..417d760edd2 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md
@@ -15,8 +15,8 @@ In Rancher v2.4, the Rancher management server can be installed on either an RKE
The steps to set up an air-gapped Kubernetes cluster on RKE or K3s are shown below.
-{{% tabs %}}
-{{% tab "K3s" %}}
+
+
In this guide, we are assuming you have created your nodes in your air gapped environment and have a secure Docker private registry on your bastion server.
@@ -139,8 +139,8 @@ Upgrading an air-gap environment can be accomplished in the following manner:
1. Download the new air-gap images (tar file) from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be upgrading to. Place the tar in the `/var/lib/rancher/k3s/agent/images/` directory on each node. Delete the old tar file.
2. Copy and replace the old K3s binary in `/usr/local/bin` on each node. Copy over the install script at https://get.k3s.io (as it is possible it has changed since the last release). Run the script again just as you had done in the past with the same environment variables.
3. Restart the K3s service (if not restarted automatically by installer).
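The three steps above can be sketched as shell commands (the tarball name, binary location, and script name are illustrative; use the files for your architecture and the environment variables from your original install):

```shell
# 1. Replace the old air-gap image tarball with the new one
sudo rm /var/lib/rancher/k3s/agent/images/k3s-airgap-images-amd64.tar
sudo cp ./k3s-airgap-images-amd64.tar /var/lib/rancher/k3s/agent/images/

# 2. Replace the K3s binary and re-run the freshly downloaded install script
sudo cp ./k3s /usr/local/bin/k3s
INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh

# 3. Restart the service if the installer did not do so
sudo systemctl restart k3s
```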
-{{% /tab %}}
-{{% tab "RKE" %}}
+
+
We will create a Kubernetes cluster using Rancher Kubernetes Engine (RKE). Before you can start your Kubernetes cluster, you'll need to install RKE and create an RKE config file.
### 1. Install RKE
@@ -212,8 +212,8 @@ Save a copy of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_rancher-cluster.yml`: The [Kubeconfig file]({{}}/rke/latest/en/kubeconfig/) for the cluster; this file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file]({{}}/rke/latest/en/installation/#kubernetes-cluster-state); this file contains the current state of the cluster, including the RKE configuration and the certificates.
_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._
-{{% /tab %}}
-{{% /tabs %}}
+
+
> **Note:** The "rancher-cluster" parts of the latter two file names depend on how you name the RKE cluster configuration file.
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md
index ea5b7142c3c..d5b765f29af 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md
@@ -22,8 +22,8 @@ The steps in this section differ depending on whether or not you are planning to
>
> If the registry has certs, follow [this K3s documentation](https://rancher.com/docs/k3s/latest/en/installation/private-registry/) about adding a private registry. The certs and registry configuration files need to be mounted into the Rancher container.
-{{% tabs %}}
-{{% tab "Linux Only Clusters" %}}
+
+
For Rancher servers that will only provision Linux clusters, these are the steps to populate your private registry.
@@ -107,8 +107,8 @@ The `rancher-images.txt` is expected to be on the workstation in the same direct
```plain
./rancher-load-images.sh --image-list ./rancher-images.txt --registry
```
-{{% /tab %}}
-{{% tab "Linux and Windows Clusters" %}}
+
+
_Available as of v2.3.0_
@@ -290,8 +290,8 @@ chmod +x rancher-load-images.sh
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
### [Next step for Kubernetes Installs - Launch a Kubernetes Cluster]({{}}/rancher/v2.0-v2.4/en/installation/other-installation-methods/air-gap/launch-kubernetes/)
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md
index efd93d093c7..b4c535c2d16 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md
@@ -11,8 +11,8 @@ An air gapped environment is an environment where the Rancher server is installe
The infrastructure depends on whether you are installing Rancher on a K3s Kubernetes cluster, an RKE Kubernetes cluster, or a single Docker container. For more information on each installation option, refer to [this page.]({{}}/rancher/v2.0-v2.4/en/installation/)
-{{% tabs %}}
-{{% tab "K3s" %}}
+
+
We recommend setting up the following infrastructure for a high-availability installation:
- **Two Linux nodes,** typically virtual machines, in the infrastructure provider of your choice.
@@ -82,8 +82,8 @@ Rancher supports air gap installs using a private registry. You must have your o
In a later step, when you set up your K3s Kubernetes cluster, you will create a [private registries configuration file]({{}}/k3s/latest/en/installation/private-registry/) with details from this registry.
If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry)
-{{% /tab %}}
-{{% tab "RKE" %}}
+
+
To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure:
@@ -146,8 +146,8 @@ In a later step, when you set up your RKE Kubernetes cluster, you will create a
If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry)
-{{% /tab %}}
-{{% tab "Docker" %}}
+
+
> The Docker installation is for Rancher users who want to test out Rancher. Since there is only one node and a single Docker container, if the node goes down, you will lose all the data of your Rancher server.
>
> For Rancher v2.0-v2.4, there is no migration path from a Docker installation to a high-availability installation. Therefore, you may want to use a Kubernetes installation from the start.
@@ -166,7 +166,7 @@ Rancher supports air gap installs using a Docker private registry on your bastio
If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/)
-{{% /tab %}}
-{{% /tabs %}}
+
+
### [Next: Collect and Publish Images to your Private Registry]({{}}/rancher/v2.0-v2.4/en/installation/other-installation-methods/air-gap/populate-private-registry/)
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md
index 7e7e935097c..d3df2447032 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md
@@ -129,8 +129,8 @@ To see the command to use when starting the new Rancher server container, choose
- Docker Upgrade
- Docker Upgrade for Air Gap Installs
-{{% tabs %}}
-{{% tab "Docker Upgrade" %}}
+
+
Select the option you used when you installed Rancher server.
@@ -237,8 +237,8 @@ docker run -d --volumes-from rancher-data \
{{% /accordion %}}
-{{% /tab %}}
-{{% tab "Docker Air Gap Upgrade" %}}
+
+
For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.
@@ -328,8 +328,8 @@ docker run -d --volumes-from rancher-data \
```
{{% /accordion %}}
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:** You have upgraded Rancher. Data from your upgraded server is now saved to the `rancher-data` container for use in future upgrades.
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/requirements/requirements.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/requirements/requirements.md
index fbf442d663e..38210ecd7e8 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/requirements/requirements.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/requirements/requirements.md
@@ -63,8 +63,8 @@ This section describes the CPU, memory, and disk requirements for the nodes wher
Hardware requirements scale based on the size of your Rancher deployment. Provision each individual node according to the requirements. The requirements differ depending on whether you are installing Rancher in a single container with Docker or on a Kubernetes cluster.
-{{% tabs %}}
-{{% tab "RKE" %}}
+
+
These requirements apply to each host in an [RKE Kubernetes cluster where the Rancher server is installed.]({{}}/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/)
@@ -80,9 +80,9 @@ Performance increased in Rancher v2.4.0. For the requirements of Rancher before
Every use case and environment is different. Please [contact Rancher](https://rancher.com/contact/) to review yours.
-{{% /tab %}}
+
-{{% tab "K3s" %}}
+
These requirements apply to each host in a [K3s Kubernetes cluster where the Rancher server is installed.]({{}}/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/)
@@ -96,9 +96,9 @@ These requirements apply to each host in a [K3s Kubernetes cluster where the Ran
Every use case and environment is different. Please [contact Rancher](https://rancher.com/contact/) to review yours.
-{{% /tab %}}
+
-{{% tab "Docker" %}}
+
These requirements apply to a host with a [single-node]({{}}/rancher/v2.0-v2.4/en/installation/other-installation-methods/single-node-docker) installation of Rancher.
@@ -107,8 +107,8 @@ These requirements apply to a host with a [single-node]({{}}/rancher/v2
| Small | Up to 5 | Up to 50 | 1 | 4 GB |
| Medium | Up to 15 | Up to 200 | 2 | 8 GB |
-{{% /tab %}}
-{{% /tabs %}}
+
+
### CPU and Memory for Rancher before v2.4.0
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/advanced/air-gap-helm2/install-rancher/install-rancher.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/advanced/air-gap-helm2/install-rancher/install-rancher.md
index 5ba25689328..83e5234271e 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/advanced/air-gap-helm2/install-rancher/install-rancher.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/advanced/air-gap-helm2/install-rancher/install-rancher.md
@@ -13,8 +13,8 @@ aliases:
This section describes how to deploy Rancher in an air gapped environment, such as one where the Rancher server will be installed offline, behind a firewall, or behind a proxy. There are _tabs_ for either a high availability (recommended) or a Docker installation.
-{{% tabs %}}
-{{% tab "Kubernetes Install (Recommended)" %}}
+
+
Rancher recommends installing Rancher on a Kubernetes cluster. A highly available Kubernetes Installation consists of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails.
@@ -222,8 +222,8 @@ These resources could be helpful when installing Rancher:
- [Adding TLS secrets]({{}}/rancher/v2.0-v2.4/en/installation/resources/tls-secrets/)
- [Troubleshooting Rancher Kubernetes Installations]({{}}/rancher/v2.0-v2.4/en/installation/options/troubleshooting/)
-{{% /tab %}}
-{{% tab "Docker Install" %}}
+
+
The Docker installation is for Rancher users who want to **test** out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. **Important: If you install Rancher following the Docker installation guide, there is no upgrade path to transition your Docker installation to a Kubernetes Installation.** Instead of running the single node installation, you have the option to follow the Kubernetes Install guide, but only use one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it a Kubernetes Installation.
@@ -331,5 +331,5 @@ If you are installing Rancher v2.3.0+, the installation is complete.
If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{}}/rancher/v2.0-v2.4/en/installation/options/local-system-charts/).
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/advanced/air-gap-helm2/populate-private-registry/populate-private-registry.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/advanced/air-gap-helm2/populate-private-registry/populate-private-registry.md
index 75e024e218e..8f2341cc40f 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/advanced/air-gap-helm2/populate-private-registry/populate-private-registry.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/advanced/air-gap-helm2/populate-private-registry/populate-private-registry.md
@@ -21,8 +21,8 @@ This section describes how to set up your private registry so that when you inst
By default, we provide the steps to populate your private registry assuming you are provisioning Linux-only clusters, but if you plan to provision any [Windows clusters]({{}}/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/windows-clusters/), there are separate instructions to support the images needed for a Windows cluster.
-{{% tabs %}}
-{{% tab "Linux Only Clusters" %}}
+
+
For Rancher servers that will only provision Linux clusters, these are the steps to populate your private registry.
@@ -100,8 +100,8 @@ Move the images in the `rancher-images.tar.gz` to your private registry using th
```plain
./rancher-load-images.sh --image-list ./rancher-images.txt --registry
```
-{{% /tab %}}
-{{% tab "Linux and Windows Clusters" %}}
+
+
_Available as of v2.3.0_
@@ -268,8 +268,8 @@ Move the images in the `rancher-images.tar.gz` to your private registry using th
{{% /accordion %}}
-{{% /tab %}}
-{{% /tabs %}}
+
+
### [Next: Kubernetes Installs - Launch a Kubernetes Cluster with RKE]({{}}/rancher/v2.0-v2.4/en/installation/other-installation-methods/air-gap/launch-kubernetes/)
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/advanced/air-gap-helm2/prepare-nodes/prepare-nodes.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/advanced/air-gap-helm2/prepare-nodes/prepare-nodes.md
index 71c94aecb84..bcaf6b74ae1 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/advanced/air-gap-helm2/prepare-nodes/prepare-nodes.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/advanced/air-gap-helm2/prepare-nodes/prepare-nodes.md
@@ -12,8 +12,8 @@ This section is about how to prepare your node(s) to install Rancher for your ai
# Prerequisites
-{{% tabs %}}
-{{% tab "Kubernetes Install (Recommended)" %}}
+
+
### OS, Docker, Hardware, and Networking
@@ -33,8 +33,8 @@ The following CLI tools are required for the Kubernetes Install. Make sure these
- [rke]({{}}/rke/latest/en/installation/) - Rancher Kubernetes Engine, a CLI for building Kubernetes clusters.
- [helm](https://docs.helm.sh/using_helm/#installing-helm) - Package management for Kubernetes. Refer to the [Helm version requirements]({{}}/rancher/v2.0-v2.4/en/installation/options/helm-version) to choose a version of Helm to install Rancher.
-{{% /tab %}}
-{{% tab "Docker Install" %}}
+
+
### OS, Docker, Hardware, and Networking
@@ -45,13 +45,13 @@ Make sure that your node(s) fulfill the general [installation requirements.]({{<
Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machines.
If you need help with creating a private registry, please refer to the [Docker documentation](https://docs.docker.com/registry/).
-{{% /tab %}}
-{{% /tabs %}}
+
+
# Set up Infrastructure
-{{% tabs %}}
-{{% tab "Kubernetes Install (Recommended)" %}}
+
+
Rancher recommends installing Rancher on a Kubernetes cluster. A highly available Kubernetes install consists of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails.
@@ -86,8 +86,8 @@ You will need to configure a load balancer as a basic Layer 4 TCP forwarder to d
- For an example showing how to set up an NGINX load balancer, refer to [this page.]({{}}/rancher/v2.0-v2.4/en/installation/options/nginx)
- For an example showing how to set up an Amazon NLB load balancer, refer to [this page.]({{}}/rancher/v2.0-v2.4/en/installation/options/nlb)
-{{% /tab %}}
-{{% tab "Docker Install" %}}
+
+
The Docker installation is for Rancher users who want to test out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server.
@@ -101,7 +101,7 @@ These hosts will be disconnected from the internet, but require being able to co
View hardware and software requirements for each of your cluster nodes in [Requirements]({{}}/rancher/v2.0-v2.4/en/installation/requirements).
-{{% /tab %}}
-{{% /tabs %}}
+
+
### [Next: Collect and Publish Images to your Private Registry]({{}}/rancher/v2.0-v2.4/en/installation/other-installation-methods/air-gap/populate-private-registry/)
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/choosing-version/choosing-version.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/choosing-version/choosing-version.md
index df137ded59a..678907478bc 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/choosing-version/choosing-version.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/choosing-version/choosing-version.md
@@ -11,8 +11,8 @@ For a high-availability installation of Rancher, which is recommended for produc
For Docker installations of Rancher, which are used for development and testing, you will install Rancher as a **Docker image.**
-{{% tabs %}}
-{{% tab "Helm Charts" %}}
+
+
When Rancher Server is [installed on a Kubernetes cluster]({{}}/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/), installs, upgrades, and rollbacks are performed using a Helm chart. Therefore, as you prepare to install or upgrade a high-availability Rancher configuration, you must add a Helm chart repository that contains the charts for installing Rancher.
@@ -77,8 +77,8 @@ After installing Rancher, if you want to change which Helm chart repository to i
```
4. Continue to follow the steps to [upgrade Rancher]({{}}/rancher/v2.0-v2.4/en/installation/upgrades-rollbacks/upgrades/ha) from the new Helm chart repository.
-{{% /tab %}}
-{{% tab "Docker Images" %}}
+
+
When performing [Docker installs]({{}}/rancher/v2.0-v2.4/en/installation/single-node), upgrades, or rollbacks, you can use _tags_ to install a specific version of Rancher.
### Server Tags
@@ -96,5 +96,5 @@ Rancher Server is distributed as a Docker image, which have tags attached to the
> - The `master` tag or any tag with `-rc` or another suffix is meant for the Rancher testing team to validate. You should not use these tags, as these builds are not officially supported.
> - Want to preview an alpha release? Install using one of the alpha tags listed on our [announcements page](https://forums.rancher.com/c/announcements) (e.g., `v2.2.0-alpha1`). Caveat: Alpha releases cannot be upgraded to or from any other release.
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/feature-flags/feature-flags.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/feature-flags/feature-flags.md
index 3c0c500c5aa..a4a50241c0c 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/feature-flags/feature-flags.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/feature-flags/feature-flags.md
@@ -49,8 +49,8 @@ When you install Rancher, enable the feature you want with a feature flag. The c
> **Note:** Values set from the Rancher API will override the value passed in through the command line.
-{{% tabs %}}
-{{% tab "Kubernetes Install" %}}
+
+
When installing Rancher with a Helm chart, use the `--features` option. In the example below, two features are enabled by passing the feature flag names in a comma-separated list:
```
@@ -99,8 +99,8 @@ helm template ./rancher-.tgz --output-dir . \
--set 'extraEnv[0].value==true,=true' # Available as of v2.3.0
```
-{{% /tab %}}
-{{% tab "Docker Install" %}}
+
+
When installing Rancher with Docker, use the `--features` option. In the example below, two features are enabled by passing the feature flag names in a comma-separated list:
```
@@ -110,8 +110,8 @@ docker run -d -p 80:80 -p 443:443 \
--features==true,=true # Available as of v2.3.0
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
# Enabling Features with the Rancher UI
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/local-system-charts/local-system-charts.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/local-system-charts/local-system-charts.md
index 50f28c23fc0..ecfc3ff3632 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/local-system-charts/local-system-charts.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/installation/resources/local-system-charts/local-system-charts.md
@@ -29,8 +29,8 @@ Refer to the release notes in the `system-charts` repository to see which branch
Rancher needs to be configured to use your Git mirror of the `system-charts` repository. You can configure the system charts repository either from the Rancher UI or from Rancher's API view.
-{{% tabs %}}
-{{% tab "Rancher UI" %}}
+
+
In the catalog management page in the Rancher UI, follow these steps:
@@ -46,8 +46,8 @@ In the catalog management page in the Rancher UI, follow these steps:
**Result:** Rancher is configured to download all the required catalog items from your `system-charts` repository.
-{{% /tab %}}
-{{% tab "Rancher API" %}}
+
+
1. Log into Rancher.
@@ -65,5 +65,5 @@ In the catalog management page in the Rancher UI, follow these steps:
**Result:** Rancher is configured to download all the required catalog items from your `system-charts` repository.
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/pipelines/pipelines.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/pipelines/pipelines.md
index 2226fce30de..6d7400bfdcb 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/pipelines/pipelines.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/pipelines/pipelines.md
@@ -95,8 +95,8 @@ Before you can start configuring a pipeline for your repository, you must config
Select your provider's tab below and follow the directions.
-{{% tabs %}}
-{{% tab "GitHub" %}}
+
+
1. From the **Global** view, navigate to the project where you want to configure pipelines.
1. Select **Tools > Pipelines** in the navigation bar. In versions before v2.2.0, you can select **Resources > Pipelines**.
@@ -109,8 +109,8 @@ Select your provider's tab below and follow the directions.
1. Click **Authenticate**.
-{{% /tab %}}
-{{% tab "GitLab" %}}
+
+
_Available as of v2.1.0_
@@ -129,8 +129,8 @@ _Available as of v2.1.0_
>**Note:**
> 1. Pipelines use the GitLab [v4 API](https://docs.gitlab.com/ee/api/v3_to_v4.html), and the supported GitLab version is 9.0+.
> 2. If you use GitLab 10.7+ and your Rancher setup is in a local network, enable the **Allow requests to the local network from hooks and services** option in GitLab admin settings.
-{{% /tab %}}
-{{% tab "Bitbucket Cloud" %}}
+
+
_Available as of v2.2.0_
@@ -146,8 +146,8 @@ _Available as of v2.2.0_
1. Click **Authenticate**.
-{{% /tab %}}
-{{% tab "Bitbucket Server" %}}
+
+
_Available as of v2.2.0_
@@ -169,8 +169,8 @@ _Available as of v2.2.0_
> 1. Setup Rancher server with a certificate from a trusted CA.
> 1. If you're using self-signed certificates, import Rancher server's certificate to the Bitbucket server. For instructions, see the Bitbucket server documentation for [configuring self-signed certificates](https://confluence.atlassian.com/bitbucketserver/if-you-use-self-signed-certificates-938028692.html).
>
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:** After the version control provider is authenticated, you will be automatically redirected to start configuring which repositories you want to start using with a pipeline.
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/pipelines/storage/storage.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/pipelines/storage/storage.md
index c176b2ac843..a09ab568eef 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/pipelines/storage/storage.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/pipelines/storage/storage.md
@@ -25,8 +25,8 @@ This section assumes that you understand how persistent storage works in Kuberne
- **Add Volume > Use an existing persistent volume (claim)**
1. Complete the form that displays to choose a persistent volume for the internal Docker registry.
-{{% tabs %}}
-{{% tab "Add a new persistent volume" %}}
+
+
1. Enter a **Name** for the volume claim.
@@ -39,9 +39,9 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
+
-{{% tab "Use an existing persistent volume" %}}
+
1. Enter a **Name** for the volume claim.
@@ -51,9 +51,9 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
+
-{{% /tabs %}}
+
1. From the **Mount Point** field, enter `/var/lib/registry`, which is the data storage path inside the Docker registry container.
@@ -69,9 +69,9 @@ This section assumes that you understand how persistent storage works in Kuberne
- **Add Volume > Use an existing persistent volume (claim)**
1. Complete the form that displays to choose a persistent volume for the internal Docker registry.
-{{% tabs %}}
+
-{{% tab "Add a new persistent volume" %}}
+
1. Enter a **Name** for the volume claim.
@@ -84,8 +84,8 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
-{{% tab "Use an existing persistent volume" %}}
+
+
1. Enter a **Name** for the volume claim.
@@ -95,8 +95,8 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
-{{% /tabs %}}
+
+
1. From the **Mount Point** field, enter `/data`, which is the data storage path inside the Minio container.
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/v1.6-migration/monitor-apps/monitor-apps.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/v1.6-migration/monitor-apps/monitor-apps.md
index da5d465c829..da5ab7eea29 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/v1.6-migration/monitor-apps/monitor-apps.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/v1.6-migration/monitor-apps/monitor-apps.md
@@ -91,9 +91,9 @@ Configure probes by using the **Health Check** section while editing deployments
When you create a workload using Rancher v2.x, we recommend configuring a check that monitors the health of the deployment's pods.
-{{% tabs %}}
+
-{{% tab "TCP Check" %}}
+
TCP checks monitor your deployment's health by attempting to open a connection to the pod over a specified port. If the probe can open the port, the pod is considered healthy. Failure to open it is considered unhealthy, which notifies Kubernetes that it should kill the pod and then replace it according to its [restart policy](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy). (This applies to liveness probes; readiness probes instead mark the pod as Unready.)
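The probe logic amounts to attempting a connection and treating success as healthy. A minimal self-contained sketch using Python's standard library (illustrative only — the kubelet performs the real check inside the cluster):

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or unreachable: probe fails.
        return False
```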
@@ -129,9 +129,9 @@ When you configure a readiness check using Rancher v2.x, the `readinessProbe` di
-->
-{{% /tab %}}
+
-{{% tab "HTTP Check" %}}
+
HTTP checks monitor your deployment's health by sending an HTTP GET request to a specific URL path that you define. If the pod responds with a status code in the `200`-`400` range, the health check is considered successful. If the pod replies with any other value, the check is considered unsuccessful, so Kubernetes kills and replaces the pod according to its [restart policy](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy). (This applies to liveness probes; readiness probes instead mark the pod as Unready.)
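The success criterion is a simple status-code range check (Kubernetes treats `200 <= status < 400` as passing). A hedged sketch — the helper names are illustrative, not Kubernetes API:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def status_is_healthy(status: int) -> bool:
    # Kubernetes counts any status in [200, 400) as a passing HTTP probe.
    return 200 <= status < 400

def http_probe(url: str, timeout: float = 2.0) -> bool:
    """Sketch of an HTTP GET probe: healthy iff the status code is 200-399."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return status_is_healthy(resp.status)
    except HTTPError as err:   # 4xx/5xx responses raise HTTPError
        return status_is_healthy(err.code)
    except URLError:           # connection failures are unhealthy
        return False
```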
@@ -141,9 +141,9 @@ You can configure the probe along with values for specifying its behavior by sel
When you configure a readiness check using Rancher v2.x, the `readinessProbe` directive and the values you've set are added to the deployment's Kubernetes manifest. Configuring a readiness check also automatically adds a liveness check (`livenessProbe`) to the deployment.
-{{% /tab %}}
+
-{{% /tabs %}}
+
### Configuring Separate Liveness Checks
diff --git a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/v1.6-migration/run-migration-tool/run-migration-tool.md b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/v1.6-migration/run-migration-tool/run-migration-tool.md
index c540b32b430..6d6a76536d0 100644
--- a/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/v1.6-migration/run-migration-tool/run-migration-tool.md
+++ b/versioned_docs/version-2.0-2.4/v2.0-v2.4/en/v1.6-migration/run-migration-tool/run-migration-tool.md
@@ -121,8 +121,8 @@ File | Description
@@ -235,8 +235,8 @@ status: {}
>**Note:** Although these instructions deploy your v1.6 services in Rancher v2.x, they will not work correctly until you adjust their Kubernetes manifests.
-{{% tabs %}}
-{{% tab "Rancher UI" %}}
+
+
You can deploy the Kubernetes manifests created by migration-tools by importing them into Rancher v2.x.
@@ -248,8 +248,8 @@ You can deploy the Kubernetes manifests created by migration-tools by importing

-{{% /tab %}}
-{{% tab "Rancher CLI" %}}
+
+
>**Prerequisite:** [Install Rancher CLI]({{}}/rancher/v2.0-v2.4/en/cli/) for Rancher v2.x.
@@ -262,8 +262,8 @@ Use the following Rancher CLI commands to deploy your application using Rancher
./rancher kubectl create -f # DEPLOY THE SERVICE YAML
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
After importing, you can view your v1.6 services in the v2.x UI as Kubernetes manifests by using the context menu to select ` > ` that contains your services. The imported manifests display on the **Resources > Workloads** page and on the **Resources > Workloads > Service Discovery** tab. (In Rancher v2.x before v2.3.0, these are on the **Workloads** and **Service Discovery** tabs in the top navigation bar.)
diff --git a/versioned_docs/version-2.5/v2.5/en/admin-settings/authentication/keycloak/keycloak.md b/versioned_docs/version-2.5/v2.5/en/admin-settings/authentication/keycloak/keycloak.md
index e4e75f36477..8dd94d6299d 100644
--- a/versioned_docs/version-2.5/v2.5/en/admin-settings/authentication/keycloak/keycloak.md
+++ b/versioned_docs/version-2.5/v2.5/en/admin-settings/authentication/keycloak/keycloak.md
@@ -36,12 +36,12 @@ If your organization uses Keycloak Identity Provider (IdP) for user authenticati
## Getting the IDP Metadata
-{{% tabs %}}
-{{% tab "Keycloak 5 and earlier" %}}
+
+
To get the IDP metadata, export a `metadata.xml` file from your Keycloak client.
From the **Installation** tab, choose the **SAML Metadata IDPSSODescriptor** format option and download your file.
-{{% /tab %}}
-{{% tab "Keycloak 6-13" %}}
+
+
1. From the **Configure** section, click the **Realm Settings** tab.
1. Click the **General** tab.
@@ -79,8 +79,8 @@ You are left with something similar as the example below:
```
-{{% /tab %}}
-{{% tab "Keycloak 14+" %}}
+
+
1. From the **Configure** section, click the **Realm Settings** tab.
1. Click the **General** tab.
@@ -104,8 +104,8 @@ The following is an example process for Firefox, but will vary slightly for othe
1. From the details pane, click the **Response** tab.
1. Copy the raw response data.
-{{% /tab %}}
-{{% /tabs %}}
+
+
## Configuring Keycloak in Rancher
diff --git a/versioned_docs/version-2.5/v2.5/en/admin-settings/rbac/global-permissions/global-permissions.md b/versioned_docs/version-2.5/v2.5/en/admin-settings/rbac/global-permissions/global-permissions.md
index eef72464cfa..57d464d7c89 100644
--- a/versioned_docs/version-2.5/v2.5/en/admin-settings/rbac/global-permissions/global-permissions.md
+++ b/versioned_docs/version-2.5/v2.5/en/admin-settings/rbac/global-permissions/global-permissions.md
@@ -47,8 +47,8 @@ CATTLE_RESTRICTED_DEFAULT_ADMIN=true
The permissions for the `restricted-admin` role differ based on the Rancher version.
-{{% tabs %}}
-{{% tab "v2.5.7+" %}}
+
+
The `restricted-admin` permissions are as follows:
@@ -56,8 +56,8 @@ The `restricted-admin` permissions are as follows:
- Can add other users and assign them to clusters outside of the local cluster.
- Can create other restricted admins.
-{{% /tab %}}
-{{% tab "v2.5.0-v2.5.6" %}}
+
+
The `restricted-admin` permissions are as follows:
@@ -67,8 +67,8 @@ The `restricted-admin` permissions are as follows:
- Can create other restricted admins.
- Cannot grant any permissions in the local cluster they don't currently have. (This is how Kubernetes normally operates)
-{{% /tab %}}
-{{% /tabs %}}
+
+
### Upgrading from Rancher with a Hidden Local Cluster
diff --git a/versioned_docs/version-2.5/v2.5/en/cis-scans/cis-scans.md b/versioned_docs/version-2.5/v2.5/en/cis-scans/cis-scans.md
index 6f72660c12c..b52f43da2b6 100644
--- a/versioned_docs/version-2.5/v2.5/en/cis-scans/cis-scans.md
+++ b/versioned_docs/version-2.5/v2.5/en/cis-scans/cis-scans.md
@@ -41,8 +41,8 @@ Support for alerting for the cluster scan results is now also available from Ran
In Rancher v2.4, permissive and hardened profiles were included. In Rancher v2.5.0 and in v2.5.4, more profiles were included.
-{{% tabs %}}
-{{% tab "Profiles in v2.5.4" %}}
+
+
- Generic CIS 1.5
- Generic CIS 1.6
- RKE permissive 1.5
@@ -53,22 +53,22 @@ In Rancher v2.4, permissive and hardened profiles were included. In Rancher v2.5
- GKE
- RKE2 permissive 1.5
- RKE2 hardened 1.5
-{{% /tab %}}
-{{% tab "Profiles in v2.5.0-v2.5.3" %}}
+
+
- Generic CIS 1.5
- RKE permissive
- RKE hardened
- EKS
- GKE
-{{% /tab %}}
-{{% /tabs %}}
+
+
The default profile and the supported CIS benchmark version depend on the type of cluster that will be scanned and the Rancher version:
-{{% tabs %}}
-{{% tab "v2.5.4" %}}
+
+
The `rancher-cis-benchmark` supports the CIS 1.6 Benchmark version.
@@ -77,8 +77,8 @@ The `rancher-cis-benchmark` supports the CIS 1.6 Benchmark version.
- For RKE2 Kubernetes clusters, the RKE2 Permissive 1.5 profile is the default.
- For cluster types other than RKE, RKE2, EKS and GKE, the Generic CIS 1.5 profile will be used by default.
-{{% /tab %}}
-{{% tab "v2.5.0-v2.5.3" %}}
+
+
The `rancher-cis-benchmark` supports the CIS 1.5 Benchmark version.
@@ -86,8 +86,8 @@ The `rancher-cis-benchmark` supports the CIS 1.5 Benchmark version.
- EKS and GKE have their own CIS Benchmarks published by `kube-bench`. The corresponding test profiles are used by default for those clusters.
- For cluster types other than RKE, EKS and GKE, the Generic CIS 1.5 profile will be used by default.
-{{% /tab %}}
-{{% /tabs %}}
+
+
> **Note:** CIS v1 cannot run on a cluster when CIS v2 is deployed. In other words, after `rancher-cis-benchmark` is installed, you can't run scans by going to the Cluster Manager view in the Rancher UI and clicking Tools > CIS Scans.
@@ -135,8 +135,8 @@ Refer to the t
The following profiles are available:
-{{% tabs %}}
-{{% tab "Profiles in v2.5.4" %}}
+
+
- Generic CIS 1.5
- Generic CIS 1.6
- RKE permissive 1.5
@@ -147,15 +147,15 @@ The following profiles are available:
- GKE
- RKE2 permissive 1.5
- RKE2 hardened 1.5
-{{% /tab %}}
-{{% tab "Profiles in v2.5.0-v2.5.3" %}}
+
+
- Generic CIS 1.5
- RKE permissive
- RKE hardened
- EKS
- GKE
-{{% /tab %}}
-{{% /tabs %}}
+
+
You also have the ability to customize a profile by saving a set of tests to skip.
diff --git a/versioned_docs/version-2.5/v2.5/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md b/versioned_docs/version-2.5/v2.5/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md
index 00f10553515..b88a05a4ba7 100644
--- a/versioned_docs/version-2.5/v2.5/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md
+++ b/versioned_docs/version-2.5/v2.5/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md
@@ -58,8 +58,8 @@ For registered clusters, the process for removing Rancher is a little different.
After the registered cluster is detached from Rancher, the cluster's workloads will be unaffected and you can access the cluster using the same methods that you did before the cluster was registered into Rancher.
-{{% tabs %}}
-{{% tab "By UI / API" %}}
+
+
>**Warning:** This process will remove data from your cluster. Make sure you have created a backup of files you want to keep before executing the command, as data will be lost.
After you initiate the removal of a registered cluster using the Rancher UI (or API), the following events occur.
@@ -72,8 +72,8 @@ After you initiate the removal of a registered cluster using the Rancher UI (or
**Result:** All components listed for registered clusters in [What Gets Removed?](#what-gets-removed) are deleted.
-{{% /tab %}}
-{{% tab "By Script" %}}
+
+
Rather than cleaning registered cluster nodes using the Rancher UI, you can run a script.
>**Prerequisite:**
@@ -103,8 +103,8 @@ Rather than cleaning registered cluster nodes using the Rancher UI, you can run
**Result:** The script runs. All components listed for registered clusters in [What Gets Removed?](#what-gets-removed) are deleted.
-{{% /tab %}}
-{{% /tabs %}}
+
+
### Windows Nodes
diff --git a/versioned_docs/version-2.5/v2.5/en/cluster-admin/editing-clusters/eks-config-reference/eks-config-reference.md b/versioned_docs/version-2.5/v2.5/en/cluster-admin/editing-clusters/eks-config-reference/eks-config-reference.md
index 0ea3ab8ad80..471316df39b 100644
--- a/versioned_docs/version-2.5/v2.5/en/cluster-admin/editing-clusters/eks-config-reference/eks-config-reference.md
+++ b/versioned_docs/version-2.5/v2.5/en/cluster-admin/editing-clusters/eks-config-reference/eks-config-reference.md
@@ -4,8 +4,8 @@ shortTitle: EKS Cluster Configuration
weight: 2
---
-{{% tabs %}}
-{{% tab "Rancher v2.5.6+" %}}
+
+
### Account Access
@@ -152,8 +152,8 @@ The following settings are also configurable. All of these except for the "Node
| Tags | These are tags for the managed node group and do not propagate to any of the associated resources. |
-{{% /tab %}}
-{{% tab "Rancher v2.5.0-v2.5.5" %}}
+
+
### Changes in Rancher v2.5
@@ -283,8 +283,8 @@ Amazon will use the [EKS-optimized AMI](https://docs.aws.amazon.com/eks/latest/u
| Maximum ASG Size | The maximum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. |
| Minimum ASG Size | The minimum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. |
-{{% /tab %}}
-{{% tab "Rancher prior to v2.5" %}}
+
+
### Account Access
@@ -390,15 +390,15 @@ Custom AMI Override | If you want to use a custom [Amazon Machine Image](https:/
Desired ASG Size | The number of instances that your cluster will provision.
User Data | Custom commands can be passed to perform automated configuration tasks **WARNING: Modifying this may cause your nodes to be unable to join the cluster.** _Note: Available as of v2.2.0_
-{{% /tab %}}
-{{% /tabs %}}
+
+
### Configuring the Refresh Interval
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
The `eks-refresh-cron` setting is deprecated. It has been migrated to the `eks-refresh` setting, which is an integer representing seconds.
@@ -410,12 +410,12 @@ If the `eks-refresh-cron` setting was previously set, the migration will happen
A shorter refresh window makes race conditions less likely, but it increases the likelihood of encountering request limits that may be in place for AWS APIs.
-{{% /tab %}}
-{{% tab "Before v2.5.8" %}}
+
+
It is possible to change the refresh interval through the setting `eks-refresh-cron`. This setting accepts values in the Cron format. The default is `*/5 * * * *`.
A shorter refresh window makes race conditions less likely, but it increases the likelihood of encountering request limits that may be in place for AWS APIs.
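For reference, the default `*/5 * * * *` works out to a 300-second interval, which is what the integer-seconds `eks-refresh` setting in later versions expresses directly. A small illustrative converter for this step-style minute field (not Rancher code):

```python
def cron_minute_step_seconds(cron_expr: str) -> int:
    """Convert a '*/N * * * *' cron expression to an interval in seconds.

    Handles only the every-N-minutes step form used by the default
    eks-refresh-cron value; other cron syntax raises ValueError.
    """
    minute_field = cron_expr.split()[0]
    if not minute_field.startswith("*/"):
        raise ValueError("only '*/N' minute fields are supported here")
    return int(minute_field[2:]) * 60
```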
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.5/v2.5/en/cluster-admin/editing-clusters/gke-config-reference/gke-config-reference.md b/versioned_docs/version-2.5/v2.5/en/cluster-admin/editing-clusters/gke-config-reference/gke-config-reference.md
index 67b0c603600..b4bce2285bb 100644
--- a/versioned_docs/version-2.5/v2.5/en/cluster-admin/editing-clusters/gke-config-reference/gke-config-reference.md
+++ b/versioned_docs/version-2.5/v2.5/en/cluster-admin/editing-clusters/gke-config-reference/gke-config-reference.md
@@ -4,8 +4,8 @@ shortTitle: GKE Cluster Configuration
weight: 3
---
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
# Changes in v2.5.8
@@ -302,8 +302,8 @@ The syncing interval can be changed by running `kubectl edit setting gke-refresh
A shorter refresh window makes race conditions less likely, but it increases the likelihood of encountering request limits that may be in place for GCP APIs.
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
# Labels & Annotations
@@ -449,5 +449,5 @@ Access scopes are the legacy method of specifying permissions for your nodes.
- **Set access for each API:** Alternatively, you can choose to set specific scopes that permit access to the particular API methods that the service will call.
For more information, see the [section about enabling service accounts for a VM.](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances)
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/cluster-capabilities-table/index.md b/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/cluster-capabilities-table/index.md
index 16ee4674be7..8dca5a8b84c 100644
--- a/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/cluster-capabilities-table/index.md
+++ b/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/cluster-capabilities-table/index.md
@@ -2,8 +2,8 @@
headless: true
---
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
| Action | Rancher Launched Kubernetes Clusters | EKS and GKE Clusters1 | Other Hosted Kubernetes Clusters | Non-EKS or GKE Registered Clusters |
| --- | --- | ---| ---|----|
@@ -31,8 +31,8 @@ headless: true
4. For registered clusters using etcd as a control plane, snapshots must be taken manually outside of the Rancher UI to use for backup and recovery.
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
| Action | Rancher Launched Kubernetes Clusters | Hosted Kubernetes Clusters | Registered EKS Clusters | All Other Registered Clusters |
| --- | --- | ---| ---|----|
@@ -59,5 +59,5 @@ headless: true
3. For registered clusters using etcd as a control plane, snapshots must be taken manually outside of the Rancher UI to use for backup and recovery.
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/gke.md b/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/gke.md
index 68fec4a4f20..64fa4385bd4 100644
--- a/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/gke.md
+++ b/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/gke.md
@@ -8,8 +8,8 @@ aliases:
- /rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke/
---
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
- [Prerequisites](#prerequisites)
- [Provisioning a GKE Cluster](#provisioning-a-gke-cluster)
@@ -105,8 +105,8 @@ The GKE provisioner can synchronize the state of a GKE cluster between Rancher a
For information on configuring the refresh interval, see [this section.]({{}}/rancher/v2.5/en/cluster-admin/editing-clusters/gke-config-reference/#configuring-the-refresh-interval)
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
# Prerequisites
@@ -159,5 +159,5 @@ You can access your cluster after its state is updated to **Active.**
- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/node-requirements/node-requirements.md b/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/node-requirements/node-requirements.md
index 3daf9c0cb8f..94bf7bf569d 100644
--- a/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/node-requirements/node-requirements.md
+++ b/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/node-requirements/node-requirements.md
@@ -47,8 +47,8 @@ SUSE Linux may have a firewall that blocks all ports by default. In that situati
When [Launching Kubernetes with Rancher]({{}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/) using Flatcar Container Linux nodes, you must use the following configuration in the [Cluster Config File]({{}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/options/#cluster-config-file):
-{{% tabs %}}
-{{% tab "Canal"%}}
+
+
```yaml
rancher_kubernetes_engine_config:
@@ -63,9 +63,9 @@ rancher_kubernetes_engine_config:
extra_args:
flex-volume-plugin-dir: /opt/kubernetes/kubelet-plugins/volume/exec/
```
-{{% /tab %}}
+
-{{% tab "Calico"%}}
+
```yaml
rancher_kubernetes_engine_config:
@@ -80,8 +80,8 @@ rancher_kubernetes_engine_config:
extra_args:
flex-volume-plugin-dir: /opt/kubernetes/kubelet-plugins/volume/exec/
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
You must also enable the Docker service. You can enable the Docker service using the following command:
diff --git a/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/registered-clusters/registered-clusters.md b/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/registered-clusters/registered-clusters.md
index 0e1ee65c004..ac8391dbe21 100644
--- a/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/registered-clusters/registered-clusters.md
+++ b/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/registered-clusters/registered-clusters.md
@@ -20,8 +20,8 @@ The control that Rancher has to manage a registered cluster depends on the type
# Prerequisites
-{{% tabs %}}
-{{% tab "v2.5.9+" %}}
+
+
### Kubernetes Node Roles
@@ -51,8 +51,8 @@ If you are registering a K3s cluster, make sure the `cluster.yml` is readable. I
EKS clusters must have at least one managed node group to be imported into Rancher or provisioned from Rancher successfully.
-{{% /tab %}}
-{{% tab "Rancher before v2.5.9" %}}
+
+
### Permissions
@@ -76,8 +76,8 @@ If you are registering a K3s cluster, make sure the `cluster.yml` is readable. I
EKS clusters must have at least one managed node group to be imported into Rancher or provisioned from Rancher successfully.
-{{% /tab %}}
-{{% /tabs %}}
+
+
# Registering a Cluster
@@ -151,8 +151,8 @@ resource "rancher2_cluster" "my-eks-to-import" {
The control that Rancher has to manage a registered cluster depends on the type of cluster.
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
- [Changes in v2.5.8](#changes-in-v2-5-8)
- [Features for All Registered Clusters](#2-5-8-features-for-all-registered-clusters)
@@ -197,8 +197,8 @@ When you delete an EKS cluster or GKE cluster that was created in Rancher, the c
The capabilities for registered clusters are listed in the table on [this page.]({{}}/rancher/v2.5/en/cluster-provisioning/)
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
- [Features for All Registered Clusters](#before-2-5-8-features-for-all-registered-clusters)
- [Additional Features for Registered K3s Clusters](#before-2-5-8-additional-features-for-registered-k3s-clusters)
@@ -236,8 +236,8 @@ Amazon EKS clusters can now be registered in Rancher. For the most part, registe
When you delete an EKS cluster that was created in Rancher, the cluster is destroyed. When you delete an EKS cluster that was registered in Rancher, it is disconnected from the Rancher server, but it still exists and you can still access it in the same way you did before it was registered in Rancher.
The capabilities for registered EKS clusters are listed in the table on [this page.]({{}}/rancher/v2.5/en/cluster-provisioning/)
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/rke-clusters/options/options.md b/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/rke-clusters/options/options.md
index 63b3bfddcbd..4b88876c42f 100644
--- a/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/rke-clusters/options/options.md
+++ b/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/rke-clusters/options/options.md
@@ -70,18 +70,18 @@ When Weave is selected as network provider, Rancher will automatically enable en
Project network isolation is used to enable or disable communication between pods in different projects.
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
To enable project network isolation as a cluster option, you will need to use any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin.
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
To enable project network isolation as a cluster option, you will need to use Canal as the CNI.
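Under the hood, project network isolation relies on the CNI enforcing Kubernetes `NetworkPolicy` objects. A hedged sketch of the kind of policy involved (the names and the project label value are illustrative — this is not the exact policy Rancher generates):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-project        # illustrative name
  namespace: my-namespace      # illustrative namespace
spec:
  podSelector: {}              # applies to every pod in the namespace
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          field.cattle.io/projectId: p-example   # illustrative project ID
```

Canal can enforce such policies; Flannel alone cannot, which is why a policy-capable CNI is required for this option.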
-{{% /tab %}}
-{{% /tabs %}}
+
+
### Kubernetes Cloud Providers
diff --git a/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/rke-clusters/windows-clusters/windows-clusters.md b/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/rke-clusters/windows-clusters/windows-clusters.md
index 16bf34656c1..46f1d787e50 100644
--- a/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/rke-clusters/windows-clusters/windows-clusters.md
+++ b/versioned_docs/version-2.5/v2.5/en/cluster-provisioning/rke-clusters/windows-clusters/windows-clusters.md
@@ -35,14 +35,14 @@ The general node requirements for networking, operating systems, and Docker are
### OS and Docker Requirements
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
Our support for Windows Server and Windows containers matches Microsoft's official lifecycle for LTSC (Long-Term Servicing Channel) and SAC (Semi-Annual Channel).
For the support lifecycle dates for Windows Server, see the [Microsoft Documentation.](https://docs.microsoft.com/en-us/windows-server/get-started/windows-server-release-info)
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
In order to add Windows worker nodes to a cluster, the node must be running one of the following Windows Server versions and the corresponding version of Docker Engine - Enterprise Edition (EE):
- Nodes with Windows Server core version 1809 should use Docker EE-basic 18.09 or Docker EE-basic 19.03.
@@ -52,8 +52,8 @@ In order to add Windows worker nodes to a cluster, the node must be running one
>
> - If you are using AWS, Rancher recommends _Microsoft Windows Server 2019 Base with Containers_ as the Amazon Machine Image (AMI).
> - If you are using GCE, Rancher recommends _Windows Server 2019 Datacenter for Containers_ as the OS image.
-{{% /tab %}}
-{{% /tabs %}}
+
+
### Kubernetes Version
diff --git a/versioned_docs/version-2.5/v2.5/en/installation/install-rancher-on-k8s/gke/gke.md b/versioned_docs/version-2.5/v2.5/en/installation/install-rancher-on-k8s/gke/gke.md
index 0bb0b01f39e..bbd1d2f75d9 100644
--- a/versioned_docs/version-2.5/v2.5/en/installation/install-rancher-on-k8s/gke/gke.md
+++ b/versioned_docs/version-2.5/v2.5/en/installation/install-rancher-on-k8s/gke/gke.md
@@ -69,8 +69,8 @@ To install `gcloud` and `kubectl`, perform the following steps:
- Using gcloud init, if you want to be walked through setting defaults.
- Using gcloud config, to individually set your project ID, zone, and region.
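If you choose the `gcloud config` route, a minimal sketch looks like this (the project ID, zone, and region are placeholders for your own values):

```plain
gcloud config set project my-rancher-project
gcloud config set compute/zone us-central1-a
gcloud config set compute/region us-central1
gcloud config list   # verify the resulting defaults
```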
-{{% tabs %}}
-{{% tab "Using gloud init" %}}
+
+
1. Run gcloud init and follow the directions:
@@ -84,10 +84,10 @@ To install `gcloud` and `kubectl`, perform the following steps:
```
2. Follow the instructions to authorize gcloud to use your Google Cloud account and select the new project that you created.
-{{% /tab %}}
-{{% tab "Using gcloud config" %}}
-{{% /tab %}}
-{{% /tabs %}}
+
+
+
+
# 4. Confirm that gcloud is configured correctly
diff --git a/versioned_docs/version-2.5/v2.5/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md b/versioned_docs/version-2.5/v2.5/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md
index b0e02303a79..b2da6ec0e4b 100644
--- a/versioned_docs/version-2.5/v2.5/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md
+++ b/versioned_docs/version-2.5/v2.5/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md
@@ -159,8 +159,8 @@ The exact command to install Rancher differs depending on the certificate config
However, irrespective of the certificate configuration, the name of the Rancher installation in the `cattle-system` namespace should always be `rancher`.
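After installing, one quick way to confirm the release name and rollout is the following sketch (assumes `helm` and `kubectl` are pointed at the cluster):

```plain
helm ls -n cattle-system                               # NAME column should read "rancher"
kubectl -n cattle-system rollout status deploy/rancher
```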
-{{% tabs %}}
-{{% tab "Rancher-generated Certificates" %}}
+
+
The default is for Rancher to generate a self-signed CA, and uses `cert-manager` to issue the certificate for access to the Rancher server interface.
@@ -187,8 +187,8 @@ Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are
deployment "rancher" successfully rolled out
```
-{{% /tab %}}
-{{% tab "Let's Encrypt" %}}
+
+
This option uses `cert-manager` to automatically request and renew [Let's Encrypt](https://letsencrypt.org/) certificates. This is a free service that provides you with a valid certificate as Let's Encrypt is a trusted CA.
@@ -222,8 +222,8 @@ Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are
deployment "rancher" successfully rolled out
```
-{{% /tab %}}
-{{% tab "Certificates from Files" %}}
+
+
In this option, Kubernetes secrets are created from your own certificates for Rancher to use.
When you run this command, the `hostname` option must match the `Common Name` or a `Subject Alternative Names` entry in the server certificate, or the Ingress controller will fail to configure correctly.
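As a sketch of the secret creation (assuming your server certificate and key are in `tls.crt` and `tls.key` in the current directory):

```plain
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key

# If the certificate was signed by a private CA, also provide the CA certificate:
kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem=./cacerts.pem
```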
@@ -257,8 +257,8 @@ helm install rancher rancher-/rancher \
```
Now that Rancher is deployed, see [Adding TLS Secrets]({{}}/rancher/v2.5/en/installation/resources/encryption/tls-secrets/) to publish the certificate files so Rancher and the Ingress controller can use them.
-{{% /tab %}}
-{{% /tabs %}}
+
+
The Rancher chart configuration has many options for customizing the installation to suit your specific environment. Here are some common advanced scenarios.
diff --git a/versioned_docs/version-2.5/v2.5/en/installation/install-rancher-on-k8s/upgrades/air-gap-upgrade/air-gap-upgrade.md b/versioned_docs/version-2.5/v2.5/en/installation/install-rancher-on-k8s/upgrades/air-gap-upgrade/air-gap-upgrade.md
index 2d591e83ab4..b8fdc86865f 100644
--- a/versioned_docs/version-2.5/v2.5/en/installation/install-rancher-on-k8s/upgrades/air-gap-upgrade/air-gap-upgrade.md
+++ b/versioned_docs/version-2.5/v2.5/en/installation/install-rancher-on-k8s/upgrades/air-gap-upgrade/air-gap-upgrade.md
@@ -22,8 +22,8 @@ Placeholder | Description
### Option A: Default Self-signed Certificate
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
```
helm template rancher ./rancher-.tgz --output-dir . \
@@ -36,8 +36,8 @@ helm template rancher ./rancher-.tgz --output-dir . \
--set useBundledSystemChart=true # Use the packaged Rancher system charts
```
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
```plain
helm template rancher ./rancher-.tgz --output-dir . \
@@ -49,16 +49,16 @@ helm template rancher ./rancher-.tgz --output-dir . \
--set useBundledSystemChart=true # Use the packaged Rancher system charts
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
### Option B: Certificates from Files using Kubernetes Secrets
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
```plain
@@ -86,8 +86,8 @@ helm template rancher ./rancher-.tgz --output-dir . \
--set useBundledSystemChart=true # Use the packaged Rancher system charts
```
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
```plain
@@ -112,8 +112,8 @@ helm template rancher ./rancher-.tgz --output-dir . \
--set systemDefaultRegistry= \ # Set a default private registry to be used in Rancher
--set useBundledSystemChart=true # Use the packaged Rancher system charts
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
### Apply the Rendered Templates
diff --git a/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/install-rancher/install-rancher.md b/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/install-rancher/install-rancher.md
index 78daa4f58a0..b931bd24888 100644
--- a/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/install-rancher/install-rancher.md
+++ b/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/install-rancher/install-rancher.md
@@ -138,8 +138,8 @@ Placeholder | Description
`` | The DNS name for your private registry.
`` | Cert-manager version running on k8s cluster.
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
```plain
helm template rancher ./rancher-.tgz --output-dir . \
--no-hooks \ # prevent files for Helm hooks from being generated
@@ -152,8 +152,8 @@ helm template rancher ./rancher-.tgz --output-dir . \
```
**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, example: `--set rancherImageTag=v2.5.8`
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
```plain
helm template rancher ./rancher-.tgz --output-dir . \
@@ -166,8 +166,8 @@ helm template rancher ./rancher-.tgz --output-dir . \
```
**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, example: `--set rancherImageTag=v2.5.6`
-{{% /tab %}}
-{{% /tabs %}}
+
+
@@ -188,8 +188,8 @@ Render the Rancher template, declaring your chosen options. Use the reference ta
| `` | The DNS name you pointed at your load balancer. |
| `` | The DNS name for your private registry. |
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
```plain
helm template rancher ./rancher-.tgz --output-dir . \
@@ -219,8 +219,8 @@ If you are using a Private CA signed cert, add `--set privateCA=true` following
**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, example: `--set rancherImageTag=v2.3.6`
Then refer to [Adding TLS Secrets]({{}}/rancher/v2.5/en/installation/resources/encryption/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them.
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
```plain
@@ -249,8 +249,8 @@ If you are using a Private CA signed cert, add `--set privateCA=true` following
**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, example: `--set rancherImageTag=v2.3.6`
Then refer to [Adding TLS Secrets]({{}}/rancher/v2.5/en/installation/resources/encryption/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them.
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md b/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md
index 58578d7d8ff..ebe710a0cfd 100644
--- a/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md
+++ b/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md
@@ -14,8 +14,8 @@ As of Rancher v2.5, Rancher can be installed on any Kubernetes cluster, includin
The steps to set up an air-gapped Kubernetes cluster on RKE or K3s are shown below.
-{{% tabs %}}
-{{% tab "K3s" %}}
+
+
In this guide, we are assuming you have created your nodes in your air-gapped environment and have a secure Docker private registry on your bastion server.
@@ -138,8 +138,8 @@ Upgrading an air-gap environment can be accomplished in the following manner:
1. Download the new air-gap images (tar file) from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be upgrading to. Place the tar in the `/var/lib/rancher/k3s/agent/images/` directory on each node. Delete the old tar file.
2. Copy and replace the old K3s binary in `/usr/local/bin` on each node. Copy over the install script at https://get.k3s.io (as it is possible it has changed since the last release). Run the script again just as you had done in the past with the same environment variables.
3. Restart the K3s service (if not restarted automatically by installer).
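The three steps above can be sketched as commands run on each node (file names and paths are the common amd64 defaults; adjust for your architecture and treat this as an outline rather than a turnkey script):

```plain
# 1. Swap in the new air-gap image tarball
sudo rm /var/lib/rancher/k3s/agent/images/k3s-airgap-images-amd64.tar
sudo cp ./k3s-airgap-images-amd64.tar /var/lib/rancher/k3s/agent/images/

# 2. Replace the binary and re-run the freshly downloaded install script
sudo cp ./k3s /usr/local/bin/k3s
INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh

# 3. Restart K3s if the installer did not do it for you
sudo systemctl restart k3s
```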
-{{% /tab %}}
-{{% tab "RKE" %}}
+
+
We will create a Kubernetes cluster using Rancher Kubernetes Engine (RKE). Before being able to start your Kubernetes cluster, you'll need to install RKE and create an RKE config file.
### 1. Install RKE
@@ -211,8 +211,8 @@ Save a copy of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_cluster.yml`: The [Kubeconfig file]({{}}/rke/latest/en/kubeconfig/) for the cluster. This file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file]({{}}/rke/latest/en/installation/#kubernetes-cluster-state). This file contains the current state of the cluster, including the RKE configuration and the certificates.
_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._
-{{% /tab %}}
-{{% /tabs %}}
+
+
> **Note:** The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file.
diff --git a/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md b/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md
index 5c152feb541..a1a4694d722 100644
--- a/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md
+++ b/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md
@@ -24,8 +24,8 @@ The steps in this section differ depending on whether or not you are planning to
>
> If the registry has certs, follow [this K3s documentation](https://rancher.com/docs/k3s/latest/en/installation/private-registry/) about adding a private registry. The certs and registry configuration files need to be mounted into the Rancher container.
-{{% tabs %}}
-{{% tab "Linux Only Clusters" %}}
+
+
For Rancher servers that will only provision Linux clusters, these are the steps to populate your private registry.
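The core of the process is pulling each image listed in `rancher-images.txt`, re-tagging it under your registry's DNS name, and pushing it. The renaming rule can be sketched as follows (the registry name is a placeholder, and the real `rancher-load-images.sh` script performs the actual `docker pull`/`tag`/`push`):

```shell
REGISTRY=registry.example.com   # placeholder for your private registry DNS name

# A tiny stand-in for rancher-images.txt:
printf 'rancher/rancher-agent:v2.5.9\nbusybox:1.33\n' > rancher-images.txt

# Each source image maps to <registry>/<image> before being pushed:
while read -r image; do
  echo "${REGISTRY}/${image}"
done < rancher-images.txt
```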
@@ -109,8 +109,8 @@ The `rancher-images.txt` is expected to be on the workstation in the same direct
```plain
./rancher-load-images.sh --image-list ./rancher-images.txt --registry
```
-{{% /tab %}}
-{{% tab "Linux and Windows Clusters" %}}
+
+
For Rancher servers that will provision Linux and Windows clusters, there are distinct steps to populate your private registry for the Windows images and the Linux images. Since a Windows cluster is a mix of Linux and Windows nodes, the Linux images pushed into the private registry are manifests.
@@ -288,8 +288,8 @@ The image list, `rancher-images.txt` or `rancher-windows-images.txt`, is expecte
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
### [Next step for Kubernetes Installs - Launch a Kubernetes Cluster]({{}}/rancher/v2.5/en/installation/other-installation-methods/air-gap/launch-kubernetes/)
diff --git a/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md b/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md
index 84bff627287..e85072ccebd 100644
--- a/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md
+++ b/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md
@@ -14,8 +14,8 @@ The infrastructure depends on whether you are installing Rancher on a K3s Kubern
As of Rancher v2.5, Rancher can be installed on any Kubernetes cluster. The RKE and K3s Kubernetes infrastructure tutorials below are still included for convenience.
-{{% tabs %}}
-{{% tab "K3s" %}}
+
+
We recommend setting up the following infrastructure for a high-availability installation:
- **Two Linux nodes,** typically virtual machines, in the infrastructure provider of your choice.
@@ -85,8 +85,8 @@ Rancher supports air gap installs using a private registry. You must have your o
In a later step, when you set up your K3s Kubernetes cluster, you will create a [private registries configuration file]({{}}/k3s/latest/en/installation/private-registry/) with details from this registry.
If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry)
-{{% /tab %}}
-{{% tab "RKE" %}}
+
+
To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure:
@@ -149,8 +149,8 @@ In a later step, when you set up your RKE Kubernetes cluster, you will create a
If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry)
-{{% /tab %}}
-{{% tab "Docker" %}}
+
+
> The Docker installation is for Rancher users who want to test out Rancher. Since there is only one node and a single Docker container, if the node goes down, you will lose all the data of your Rancher server.
>
> As of Rancher v2.5, the Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.]({{}}/rancher/v2.5/en/backups/migrating-rancher)
@@ -169,7 +169,7 @@ Rancher supports air gap installs using a Docker private registry on your bastio
If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/)
-{{% /tab %}}
-{{% /tabs %}}
+
+
### [Next: Collect and Publish Images to your Private Registry]({{}}/rancher/v2.5/en/installation/other-installation-methods/air-gap/populate-private-registry/)
diff --git a/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md b/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md
index 6c55386d77d..2306c3542bb 100644
--- a/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md
+++ b/versioned_docs/version-2.5/v2.5/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md
@@ -130,8 +130,8 @@ To see the command to use when starting the new Rancher server container, choose
- Docker Upgrade
- Docker Upgrade for Air Gap Installs
-{{% tabs %}}
-{{% tab "Docker Upgrade" %}}
+
+
Select the option you used when you installed Rancher server
@@ -248,8 +248,8 @@ As of Rancher v2.5, privileged access is [required.]({{}}/rancher/v2.5/
{{% /accordion %}}
-{{% /tab %}}
-{{% tab "Docker Air Gap Upgrade" %}}
+
+
For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.
@@ -342,8 +342,8 @@ docker run -d --volumes-from rancher-data \
```
As of Rancher v2.5, privileged access is [required.]({{}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)
{{% /accordion %}}
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:** You have upgraded Rancher. Data from your upgraded server is now saved to the `rancher-data` container for use in future upgrades.
diff --git a/versioned_docs/version-2.5/v2.5/en/installation/requirements/ports/ports.md b/versioned_docs/version-2.5/v2.5/en/installation/requirements/ports/ports.md
index af7f109a891..c3ecbc10b2d 100644
--- a/versioned_docs/version-2.5/v2.5/en/installation/requirements/ports/ports.md
+++ b/versioned_docs/version-2.5/v2.5/en/installation/requirements/ports/ports.md
@@ -280,8 +280,8 @@ When using the [AWS EC2 node driver]({{}}/rancher/v2.5/en/cluster-provi
SUSE Linux may have a firewall that blocks all ports by default. To open the ports needed to add the host to a custom cluster:
-{{% tabs %}}
-{{% tab "SLES 15 / openSUSE Leap 15" %}}
+
+
1. SSH into the instance.
1. Start YaST in text mode:
```
@@ -299,8 +299,8 @@ UDP Ports
1. When all required ports are entered, select **Accept**.
-{{% /tab %}}
-{{% tab "SLES 12 / openSUSE Leap 42" %}}
+
+
1. SSH into the instance.
1. Edit `/etc/sysconfig/SuSEfirewall2` and open the required ports. In this example, ports 9796 and 10250 are also opened for monitoring:
```
@@ -312,7 +312,7 @@ UDP Ports
```
SuSEfirewall2
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:** The node has the open ports required to be added to a custom cluster.
diff --git a/versioned_docs/version-2.5/v2.5/en/installation/resources/choosing-version/choosing-version.md b/versioned_docs/version-2.5/v2.5/en/installation/resources/choosing-version/choosing-version.md
index 618e8e36c33..ddfc9e87ca0 100644
--- a/versioned_docs/version-2.5/v2.5/en/installation/resources/choosing-version/choosing-version.md
+++ b/versioned_docs/version-2.5/v2.5/en/installation/resources/choosing-version/choosing-version.md
@@ -16,8 +16,8 @@ The Helm chart version also applies to RancherD installs because RancherD instal
> **Note:** RancherD was an experimental feature available as part of Rancher v2.5.4 through v2.5.10 but is now deprecated and not available for recent releases.
-{{% tabs %}}
-{{% tab "Helm Charts" %}}
+
+
When installing, upgrading, or rolling back Rancher Server when it is [installed on a Kubernetes cluster]({{}}/rancher/v2.5/en/installation/install-rancher-on-k8s/), Rancher server is installed using a Helm chart on a Kubernetes cluster. Therefore, as you prepare to install or upgrade a high availability Rancher configuration, you must add a Helm chart repository that contains the charts for installing Rancher.
@@ -80,8 +80,8 @@ After installing Rancher, if you want to change which Helm chart repository to i
```
4. Continue to follow the steps to [upgrade Rancher]({{}}/rancher/v2.5/en/installation/upgrades-rollbacks/upgrades/ha) from the new Helm chart repository.
-{{% /tab %}}
-{{% tab "Docker Images" %}}
+
+
When performing [Docker installs]({{}}/rancher/v2.5/en/installation/single-node), upgrades, or rollbacks, you can use _tags_ to install a specific version of Rancher.
### Server Tags
@@ -99,5 +99,5 @@ Rancher Server is distributed as a Docker image, which have tags attached to the
> - The `master` tag or any tag with `-rc` or another suffix is meant for the Rancher testing team to validate. You should not use these tags, as these builds are not officially supported.
> - Want to install an alpha release for preview? Install using one of the alpha tags listed on our [announcements page](https://forums.rancher.com/c/announcements) (e.g., `v2.2.0-alpha1`). Caveat: Alpha releases cannot be upgraded to or from any other release.
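For example, to pin a specific version by tag, the Docker install command looks roughly like this (the tag is illustrative; as noted elsewhere in these docs, privileged access is required as of Rancher v2.5):

```plain
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:v2.5.9
```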
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.5/v2.5/en/installation/resources/feature-flags/feature-flags.md b/versioned_docs/version-2.5/v2.5/en/installation/resources/feature-flags/feature-flags.md
index b06257e74e4..a917d8a6bd7 100644
--- a/versioned_docs/version-2.5/v2.5/en/installation/resources/feature-flags/feature-flags.md
+++ b/versioned_docs/version-2.5/v2.5/en/installation/resources/feature-flags/feature-flags.md
@@ -77,8 +77,8 @@ Here is an example of a command for passing in the feature flag names when rende
The Helm 3 command is as follows:
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
```
helm template rancher ./rancher-.tgz --output-dir . \
@@ -92,8 +92,8 @@ helm template rancher ./rancher-.tgz --output-dir . \
--set 'extraEnv[0].name=CATTLE_FEATURES'
--set 'extraEnv[0].value==true,=true'
```
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
```
helm template rancher ./rancher-.tgz --output-dir . \
@@ -106,8 +106,8 @@ helm template rancher ./rancher-.tgz --output-dir . \
--set 'extraEnv[0].name=CATTLE_FEATURES'
--set 'extraEnv[0].value==true,=true'
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
The Helm 2 command is as follows:
diff --git a/versioned_docs/version-2.5/v2.5/en/istio/configuration-reference/enable-istio-with-psp/enable-istio-with-psp.md b/versioned_docs/version-2.5/v2.5/en/istio/configuration-reference/enable-istio-with-psp/enable-istio-with-psp.md
index 48d1d317f46..ab893c47111 100644
--- a/versioned_docs/version-2.5/v2.5/en/istio/configuration-reference/enable-istio-with-psp/enable-istio-with-psp.md
+++ b/versioned_docs/version-2.5/v2.5/en/istio/configuration-reference/enable-istio-with-psp/enable-istio-with-psp.md
@@ -15,8 +15,8 @@ The Istio CNI plugin removes the need for each application pod to have a privile
The steps differ based on the Rancher version.
-{{% tabs %}}
-{{% tab "v2.5.4+" %}}
+
+
> **Prerequisites:**
>
@@ -59,8 +59,8 @@ Istio should install successfully with the CNI enabled in the cluster.
Verify that the CNI is working by deploying a [sample application](https://istio.io/latest/docs/examples/bookinfo/) or deploying one of your own applications.
-{{% /tab %}}
-{{% tab "v2.5.0-v2.5.3" %}}
+
+
> **Prerequisites:**
>
@@ -107,5 +107,5 @@ Follow the [primary instructions]({{}}/rancher/v2.5/en/istio/setup/enab
After Istio has finished installing, the Apps page in System Projects should show both istio and `istio-cni` applications deployed successfully. Sidecar injection will now be functional.
-{{% /tab %}}
-{{% /tabs %}}
\ No newline at end of file
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-2.5/v2.5/en/istio/resources/resources.md b/versioned_docs/version-2.5/v2.5/en/istio/resources/resources.md
index b1b4c7bafe9..c173ff61960 100644
--- a/versioned_docs/version-2.5/v2.5/en/istio/resources/resources.md
+++ b/versioned_docs/version-2.5/v2.5/en/istio/resources/resources.md
@@ -21,8 +21,8 @@ The table below shows a summary of the minimum recommended resource requests and
In Kubernetes, the resource request indicates that the workload will not be deployed on a node unless the node has at least the specified amount of memory and CPU available. If the workload surpasses the limit for CPU or memory, it can be terminated or evicted from the node. For more information on managing resource limits for containers, refer to the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
-{{% tabs %}}
-{{% tab "v2.5.6+" %}}
+
+
| Workload | CPU - Request | Memory - Request | CPU - Limit | Memory - Limit |
|----------------------|---------------|------------|-----------------|-------------------|
@@ -32,8 +32,8 @@ In Kubernetes, the resource request indicates that the workload will not be depl
| proxy | 10m | 10Mi | 2000m | 1024Mi |
| **Totals:** | **710m** | **2314Mi** | **6000m** | **3072Mi** |
-{{% /tab %}}
-{{% tab "v2.5.0-v2.5.5" %}}
+
+
Workload | CPU - Request | Memory - Request | CPU - Limit | Mem - Limit | Configurable
---------:|---------------:|---------------:|-------------:|-------------:|-------------:
@@ -43,8 +43,8 @@ Istio-ingressgateway | 100m | 128Mi | 2000m | 1024Mi | Y |
Others | 10m | - | - | - | Y |
Totals: | 1710m | 3304Mi | >8800m | >6048Mi | -
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.5/v2.5/en/logging/custom-resource-config/flows/flows.md b/versioned_docs/version-2.5/v2.5/en/logging/custom-resource-config/flows/flows.md
index 7f2a7dc321c..8cc1d7fb8eb 100644
--- a/versioned_docs/version-2.5/v2.5/en/logging/custom-resource-config/flows/flows.md
+++ b/versioned_docs/version-2.5/v2.5/en/logging/custom-resource-config/flows/flows.md
@@ -10,8 +10,8 @@ For the full details on configuring `Flows` and `ClusterFlows`, see the [Banzai
# Configuration
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
- [Flows](#flows-2-5-8)
- [Matches](#matches-2-5-8)
@@ -73,8 +73,8 @@ Matches, filters and `Outputs` are configured for `ClusterFlows` in the same way
After `ClusterFlow` selects logs from all namespaces in the cluster, logs from the cluster will be collected and logged to the selected `ClusterOutput`.
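A minimal `ClusterFlow` in YAML might look like the following sketch (the names are illustrative, and `example-output` is assumed to be an existing `ClusterOutput`):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: all-cluster-logs          # illustrative name
  namespace: cattle-logging-system
spec:
  globalOutputRefs:
    - example-output              # must reference an existing ClusterOutput
```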
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
- [Flows](#flows-2-5-0)
- [Matches](#matches-2-5-0)
@@ -130,8 +130,8 @@ Matches, filters and `Outputs` are also configured for `ClusterFlows`. The only
`ClusterFlows` need to be defined in YAML.
-{{% /tab %}}
-{{% /tabs %}}
+
+
# YAML Example
diff --git a/versioned_docs/version-2.5/v2.5/en/logging/custom-resource-config/outputs/outputs.md b/versioned_docs/version-2.5/v2.5/en/logging/custom-resource-config/outputs/outputs.md
index 6e86e5d54cd..e42535a4884 100644
--- a/versioned_docs/version-2.5/v2.5/en/logging/custom-resource-config/outputs/outputs.md
+++ b/versioned_docs/version-2.5/v2.5/en/logging/custom-resource-config/outputs/outputs.md
@@ -14,8 +14,8 @@ For the full details on configuring `Outputs` and `ClusterOutputs`, see the [Ban
# Configuration
-{{% tabs %}}
-{{% tab "v2.5.8+" %}}
+
+
- [Outputs](#outputs-2-5-8)
- [ClusterOutputs](#clusteroutputs-2-5-8)
@@ -68,8 +68,8 @@ For example configuration for each logging plugin supported by the logging opera
For the details of the `ClusterOutput` custom resource, see [ClusterOutput.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/clusteroutput_types/)
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
- [Outputs](#outputs-2-5-0)
@@ -101,8 +101,8 @@ The Rancher UI provides forms for configuring the `ClusterOutput` type, target,
For example configuration for each logging plugin supported by the logging operator, see the [logging operator documentation.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/)
-{{% /tab %}}
-{{% /tabs %}}
+
+
# YAML Examples
diff --git a/versioned_docs/version-2.5/v2.5/en/logging/logging.md b/versioned_docs/version-2.5/v2.5/en/logging/logging.md
index df4bdc8a9d6..1dd571ffbb9 100644
--- a/versioned_docs/version-2.5/v2.5/en/logging/logging.md
+++ b/versioned_docs/version-2.5/v2.5/en/logging/logging.md
@@ -80,20 +80,20 @@ For a list of options that can be configured when the logging application is ins
### Windows Support
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
As of Rancher v2.5.8, logging support for Windows clusters has been added and logs can be collected from Windows nodes.
For details on how to enable or disable Windows node logging, see [this section.](./helm-chart-options/#enable-disable-windows-node-logging)
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
Clusters with Windows workers support exporting logs from Linux nodes only; Windows node logs cannot currently be exported.
To allow the logging pods to be scheduled on Linux nodes, tolerations must be added to the pods. Refer to the [Working with Taints and Tolerations]({{}}/rancher/v2.5/en/logging/taints-tolerations/) section for details and an example.
-{{% /tab %}}
-{{% /tabs %}}
+
+
### Working with a Custom Docker Root Directory
diff --git a/versioned_docs/version-2.5/v2.5/en/logging/taints-tolerations/taints-tolerations.md b/versioned_docs/version-2.5/v2.5/en/logging/taints-tolerations/taints-tolerations.md
index 9e75640385e..f9fe571bd02 100644
--- a/versioned_docs/version-2.5/v2.5/en/logging/taints-tolerations/taints-tolerations.md
+++ b/versioned_docs/version-2.5/v2.5/en/logging/taints-tolerations/taints-tolerations.md
Both provide a choice of which node(s) the pod will run on.
### Default Implementation in Rancher's Logging Stack
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
By default, Rancher taints all Linux nodes with `cattle.io/os=linux`, and does not taint Windows nodes.
The logging stack pods have `tolerations` for this taint, which enables them to run on Linux nodes.
Moreover, most logging stack pods run on Linux only and have a `nodeSelector` added to ensure they run on Linux nodes.
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
By default, Rancher taints all Linux nodes with `cattle.io/os=linux`, and does not taint Windows nodes.
The logging stack pods have `tolerations` for this taint, which enables them to run on Linux nodes.
Moreover, we can populate the `nodeSelector` to ensure that our pods *only* run on Linux nodes.
-{{% /tab %}}
-{{% /tabs %}}
+
+
This example Pod YAML file shows a nodeSelector being used with a toleration:
diff --git a/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/configuration/advanced/prometheusrules/prometheusrules.md b/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/configuration/advanced/prometheusrules/prometheusrules.md
index d727bc37fd1..5f1f562b4cb 100644
--- a/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/configuration/advanced/prometheusrules/prometheusrules.md
+++ b/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/configuration/advanced/prometheusrules/prometheusrules.md
@@ -44,8 +44,8 @@ For examples, refer to the Prometheus documentation on [recording rules](https:/
# Configuration
-{{% tabs %}}
-{{% tab "Rancher v2.5.4" %}}
+<Tabs>
+<TabItem value="Rancher v2.5.4">
Rancher v2.5.4 introduced the capability to configure PrometheusRules by filling out forms in the Rancher UI.
@@ -81,8 +81,8 @@ Rancher v2.5.4 introduced the capability to configure PrometheusRules by filling
| PromQL Expression | The PromQL expression to evaluate. Prometheus will evaluate the current value of this PromQL expression on every evaluation cycle and the result will be recorded as a new set of time series with the metric name as given by 'record'. For more information about expressions, refer to the [Prometheus documentation](https://prometheus.io/docs/prometheus/latest/querying/basics/) or our [example PromQL expressions.](../expression) |
| Labels | Labels to add or overwrite before storing the result. |
-{{% /tab %}}
-{{% tab "Rancher v2.5.0-v2.5.3" %}}
+</TabItem>
+<TabItem value="Rancher v2.5.0-v2.5.3">
For Rancher v2.5.0-v2.5.3, PrometheusRules must be configured in YAML. For examples, refer to the Prometheus documentation on [recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) and [alerting rules.](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/)
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
diff --git a/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/configuration/receiver/receiver.md b/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/configuration/receiver/receiver.md
index 71dcad6a195..79e768051eb 100644
--- a/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/configuration/receiver/receiver.md
+++ b/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/configuration/receiver/receiver.md
@@ -72,8 +72,8 @@ Rancher v2.5.8 added Microsoft Teams and SMS as configurable receivers in the Ra
Rancher v2.5.4 introduced the capability to configure receivers by filling out forms in the Rancher UI.
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+<Tabs>
+<TabItem value="Rancher v2.5.8+">
The following types of receivers can be configured in the Rancher UI:
@@ -229,8 +229,8 @@ url http://rancher-alerting-drivers-sachet.ns-1.svc:9876/alert
-{{% /tab %}}
-{{% tab "Rancher v2.5.4-2.5.7" %}}
+</TabItem>
+<TabItem value="Rancher v2.5.4-2.5.7">
The following types of receivers can be configured in the Rancher UI:
@@ -305,11 +305,11 @@ Opsgenie Responders:
The YAML provided here will be directly appended to your receiver within the Alertmanager Config Secret.
-{{% /tab %}}
-{{% tab "Rancher v2.5.0-2.5.3" %}}
+</TabItem>
+<TabItem value="Rancher v2.5.0-2.5.3">
The Alertmanager must be configured in YAML, as shown in these [examples.](#example-alertmanager-configs)
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
# Configuring Multiple Receivers
diff --git a/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/configuration/route/route.md b/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/configuration/route/route.md
index 75c3294da74..8f58cc9a529 100644
--- a/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/configuration/route/route.md
+++ b/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/configuration/route/route.md
@@ -36,8 +36,8 @@ Labels should be used for identifying information that can affect the routing of
Annotations should be used for information that does not affect who receives the alert, such as a runbook url or error message.
-{{% tabs %}}
-{{% tab "Rancher v2.5.4+" %}}
+<Tabs>
+<TabItem value="Rancher v2.5.4+">
### Receiver
The route needs to refer to a [receiver](#receiver-configuration) that has already been configured.
@@ -67,8 +67,8 @@ match_re:
[ <labelname>: <regex>, ... ]
```
-{{% /tab %}}
-{{% tab "Rancher v2.5.0-2.5.3" %}}
+</TabItem>
+<TabItem value="Rancher v2.5.0-2.5.3">
The Alertmanager must be configured in YAML, as shown in this [example.](../examples/#alertmanager-config)
-{{% /tab %}}
-{{% /tabs %}}
\ No newline at end of file
+</TabItem>
+</Tabs>
\ No newline at end of file
diff --git a/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/guides/enable-monitoring/enable-monitoring.md b/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/guides/enable-monitoring/enable-monitoring.md
index 8301ccf6b61..6d70e4d22d4 100644
--- a/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/guides/enable-monitoring/enable-monitoring.md
+++ b/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/guides/enable-monitoring/enable-monitoring.md
@@ -32,8 +32,8 @@ For more information about the default limits, see [this page.]({{}}/ra
# Install the Monitoring Application
-{{% tabs %}}
-{{% tab "Rancher v2.5.8" %}}
+<Tabs>
+<TabItem value="Rancher v2.5.8">
### Enable Monitoring for use without SSL
@@ -70,8 +70,8 @@ key.pfx=`base64-content`
Then **Cert File Path** would be set to `/etc/alertmanager/secrets/cert.pem`.
-{{% /tab %}}
-{{% tab "Rancher v2.5.0-2.5.7" %}}
+</TabItem>
+<TabItem value="Rancher v2.5.0-2.5.7">
1. In the Rancher UI, go to the cluster where you want to install monitoring and click **Cluster Explorer.**
1. Click **Apps.**
@@ -81,6 +81,6 @@ Then **Cert File Path** would be set to `/etc/alertmanager/secrets/cert.pem`.
**Result:** The monitoring app is deployed in the `cattle-monitoring-system` namespace.
-{{% /tab %}}
+</TabItem>
-{{% /tabs %}}
+</Tabs>
diff --git a/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/guides/persist-grafana/persist-grafana.md b/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/guides/persist-grafana/persist-grafana.md
index 40fa07ee3d5..59117f72e32 100644
--- a/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/guides/persist-grafana/persist-grafana.md
+++ b/versioned_docs/version-2.5/v2.5/en/monitoring-alerting/guides/persist-grafana/persist-grafana.md
@@ -13,8 +13,8 @@ To allow the Grafana dashboard to persist after the Grafana instance restarts, a
# Creating a Persistent Grafana Dashboard
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+<Tabs>
+<TabItem value="Rancher v2.5.8+">
> **Prerequisites:**
>
@@ -84,8 +84,8 @@ grafana.sidecar.dashboards.searchNamespace=ALL
Note that the RBAC roles exposed by the Monitoring chart to add Grafana Dashboards are still restricted to giving permissions for users to add dashboards in the namespace defined in `grafana.dashboards.namespace`, which defaults to `cattle-dashboards`.
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+</TabItem>
+<TabItem value="Rancher before v2.5.8">
> **Prerequisites:**
>
> - The monitoring application needs to be installed.
@@ -123,8 +123,8 @@ To prevent the persistent dashboard from being deleted when Monitoring v2 is uni
helm.sh/resource-policy: "keep"
```
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
# Known Issues
diff --git a/versioned_docs/version-2.5/v2.5/en/pipelines/pipelines.md b/versioned_docs/version-2.5/v2.5/en/pipelines/pipelines.md
index 8c7ac545a87..5a74a928895 100644
--- a/versioned_docs/version-2.5/v2.5/en/pipelines/pipelines.md
+++ b/versioned_docs/version-2.5/v2.5/en/pipelines/pipelines.md
@@ -94,8 +94,8 @@ Before you can start configuring a pipeline for your repository, you must config
Select your provider's tab below and follow the directions.
-{{% tabs %}}
-{{% tab "GitHub" %}}
+<Tabs>
+<TabItem value="GitHub">
1. From the **Global** view, navigate to the project that you want to configure pipelines.
1. Select **Tools > Pipelines** in the navigation bar.
@@ -108,8 +108,8 @@ Select your provider's tab below and follow the directions.
1. Click **Authenticate**.
-{{% /tab %}}
-{{% tab "GitLab" %}}
+</TabItem>
+<TabItem value="GitLab">
1. From the **Global** view, navigate to the project that you want to configure pipelines.
@@ -126,8 +126,8 @@ Select your provider's tab below and follow the directions.
>**Note:**
> 1. Pipelines use the GitLab [v4 API](https://docs.gitlab.com/ee/api/v3_to_v4.html), and the supported GitLab version is 9.0+.
> 2. If you use GitLab 10.7+ and your Rancher setup is in a local network, enable the **Allow requests to the local network from hooks and services** option in GitLab admin settings.
-{{% /tab %}}
-{{% tab "Bitbucket Cloud" %}}
+</TabItem>
+<TabItem value="Bitbucket Cloud">
1. From the **Global** view, navigate to the project that you want to configure pipelines.
@@ -141,8 +141,8 @@ Select your provider's tab below and follow the directions.
1. Click **Authenticate**.
-{{% /tab %}}
-{{% tab "Bitbucket Server" %}}
+</TabItem>
+<TabItem value="Bitbucket Server">
1. From the **Global** view, navigate to the project that you want to configure pipelines.
@@ -162,8 +162,8 @@ Select your provider's tab below and follow the directions.
> 1. Setup Rancher server with a certificate from a trusted CA.
> 1. If you're using self-signed certificates, import Rancher server's certificate to the Bitbucket server. For instructions, see the Bitbucket server documentation for [configuring self-signed certificates](https://confluence.atlassian.com/bitbucketserver/if-you-use-self-signed-certificates-938028692.html).
>
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
**Result:** After the version control provider is authenticated, you will be automatically redirected to start configuring which repositories you want to start using with a pipeline.
diff --git a/versioned_docs/version-2.5/v2.5/en/pipelines/storage/storage.md b/versioned_docs/version-2.5/v2.5/en/pipelines/storage/storage.md
index f5eb987d4b9..fceb4d59dca 100644
--- a/versioned_docs/version-2.5/v2.5/en/pipelines/storage/storage.md
+++ b/versioned_docs/version-2.5/v2.5/en/pipelines/storage/storage.md
@@ -26,8 +26,8 @@ This section assumes that you understand how persistent storage works in Kuberne
- **Add Volume > Use an existing persistent volume (claim)**
1. Complete the form that displays to choose a persistent volume for the internal Docker registry.
-{{% tabs %}}
-{{% tab "Add a new persistent volume" %}}
+<Tabs>
+<TabItem value="Add a new persistent volume">
1. Enter a **Name** for the volume claim.
@@ -40,9 +40,9 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
+</TabItem>
-{{% tab "Use an existing persistent volume" %}}
+<TabItem value="Use an existing persistent volume">
1. Enter a **Name** for the volume claim.
@@ -52,9 +52,9 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
+</TabItem>
-{{% /tabs %}}
+</Tabs>
1. From the **Mount Point** field, enter `/var/lib/registry`, which is the data storage path inside the Docker registry container.
@@ -70,9 +70,9 @@ This section assumes that you understand how persistent storage works in Kuberne
- **Add Volume > Use an existing persistent volume (claim)**
1. Complete the form that displays to choose a persistent volume for the internal Docker registry.
-{{% tabs %}}
+<Tabs>
-{{% tab "Add a new persistent volume" %}}
+<TabItem value="Add a new persistent volume">
1. Enter a **Name** for the volume claim.
@@ -85,8 +85,8 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
-{{% tab "Use an existing persistent volume" %}}
+</TabItem>
+<TabItem value="Use an existing persistent volume">
1. Enter a **Name** for the volume claim.
@@ -96,8 +96,8 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
1. From the **Mount Point** field, enter `/data`, which is the data storage path inside the Minio container.
diff --git a/versioned_docs/version-2.5/v2.5/en/user-settings/preferences/preferences.md b/versioned_docs/version-2.5/v2.5/en/user-settings/preferences/preferences.md
index a692f70e7d3..d046fdc2e5b 100644
--- a/versioned_docs/version-2.5/v2.5/en/user-settings/preferences/preferences.md
+++ b/versioned_docs/version-2.5/v2.5/en/user-settings/preferences/preferences.md
@@ -9,8 +9,8 @@ Each user can choose preferences to personalize their Rancher experience. To cha
The preferences available will differ depending on whether the **User Settings** menu was accessed while on the Cluster Manager UI or the Cluster Explorer UI.
-{{% tabs %}}
-{{% tab "Cluster Manager" %}}
+<Tabs>
+<TabItem value="Cluster Manager">
## Theme
Choose your background color for the Rancher UI. If you choose **Auto**, the background color changes from light to dark at 6 PM, and then changes back at 6 AM.
@@ -23,8 +23,8 @@ This section displays the **Name** (your display name) and **Username** (your lo
On pages that display system objects like clusters or deployments in a table, you can set the number of objects that display on the page before you must paginate. The default setting is `50`.
-{{% /tab %}}
-{{% tab "Cluster Explorer" %}}
+</TabItem>
+<TabItem value="Cluster Explorer">
## Theme
Choose your background color for the Rancher UI. If you choose **Auto**, the background color changes from light to dark at 6 PM, and then changes back at 6 AM.
@@ -61,5 +61,5 @@ Hides all description boxes.
When deploying applications from the "Apps & Marketplace", choose whether to show only released versions of the Helm chart or to include prerelease versions as well.
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
diff --git a/versioned_docs/version-2.6/v2.6/en/admin-settings/authentication/keycloak-saml/keycloak-saml.md b/versioned_docs/version-2.6/v2.6/en/admin-settings/authentication/keycloak-saml/keycloak-saml.md
index ca2952111fb..2d9b0d7ab68 100644
--- a/versioned_docs/version-2.6/v2.6/en/admin-settings/authentication/keycloak-saml/keycloak-saml.md
+++ b/versioned_docs/version-2.6/v2.6/en/admin-settings/authentication/keycloak-saml/keycloak-saml.md
@@ -34,12 +34,12 @@ If your organization uses Keycloak Identity Provider (IdP) for user authenticati
## Getting the IDP Metadata
-{{% tabs %}}
-{{% tab "Keycloak 5 and earlier" %}}
+<Tabs>
+<TabItem value="Keycloak 5 and earlier">
To get the IDP metadata, export a `metadata.xml` file from your Keycloak client.
From the **Installation** tab, choose the **SAML Metadata IDPSSODescriptor** format option and download your file.
-{{% /tab %}}
-{{% tab "Keycloak 6-13" %}}
+</TabItem>
+<TabItem value="Keycloak 6-13">
1. From the **Configure** section, click the **Realm Settings** tab.
1. Click the **General** tab.
@@ -77,8 +77,8 @@ You are left with something similar to the example below:
```
-{{% /tab %}}
-{{% tab "Keycloak 14+" %}}
+</TabItem>
+<TabItem value="Keycloak 14+">
1. From the **Configure** section, click the **Realm Settings** tab.
1. Click the **General** tab.
@@ -102,8 +102,8 @@ The following is an example process for Firefox, but will vary slightly for othe
1. From the details pane, click the **Response** tab.
1. Copy the raw response data.
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
## Configuring Keycloak in Rancher
diff --git a/versioned_docs/version-2.6/v2.6/en/admin-settings/branding/branding.md b/versioned_docs/version-2.6/v2.6/en/admin-settings/branding/branding.md
index 4e5cff17e20..a882bb7df37 100644
--- a/versioned_docs/version-2.6/v2.6/en/admin-settings/branding/branding.md
+++ b/versioned_docs/version-2.6/v2.6/en/admin-settings/branding/branding.md
@@ -40,11 +40,11 @@ You can override the primary color used throughout the UI with a custom color of
### Fixed Banners
-{{% tabs %}}
-{{% tab "Rancher before v2.6.4" %}}
+<Tabs>
+<TabItem value="Rancher before v2.6.4">
Display a custom fixed banner in the header, footer, or both.
-{{% /tab %}}
-{{% tab "Rancher v2.6.4+" %}}
+</TabItem>
+<TabItem value="Rancher v2.6.4+">
Display a custom fixed banner in the header, footer, or both.
As of Rancher v2.6.4, configuration of fixed banners has moved from the **Branding** tab to the **Banners** tab.
@@ -53,8 +53,8 @@ To configure banner settings,
1. Click **☰ > Global settings**.
2. Click **Banners**.
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
# Custom Navigation Links
diff --git a/versioned_docs/version-2.6/v2.6/en/admin-settings/rbac/cluster-project-roles/cluster-project-roles.md b/versioned_docs/version-2.6/v2.6/en/admin-settings/rbac/cluster-project-roles/cluster-project-roles.md
index c2767b8a66f..4f171cdf3a2 100644
--- a/versioned_docs/version-2.6/v2.6/en/admin-settings/rbac/cluster-project-roles/cluster-project-roles.md
+++ b/versioned_docs/version-2.6/v2.6/en/admin-settings/rbac/cluster-project-roles/cluster-project-roles.md
@@ -81,24 +81,24 @@ To assign a custom role to a new cluster member, you can use the Rancher UI. To
To assign the role to a new cluster member,
-{{% tabs %}}
-{{% tab "Rancher before v2.6.4" %}}
+<Tabs>
+<TabItem value="Rancher before v2.6.4">
1. Click **☰ > Cluster Management**.
1. Go to the cluster where you want to assign a role to a member and click **Explore**.
1. Click **RBAC > Cluster Members**.
1. Click **Add**.
1. In the **Cluster Permissions** section, choose the custom cluster role that should be assigned to the member.
1. Click **Create**.
-{{% /tab %}}
-{{% tab "Rancher v2.6.4+" %}}
+</TabItem>
+<TabItem value="Rancher v2.6.4+">
1. Click **☰ > Cluster Management**.
1. Go to the cluster where you want to assign a role to a member and click **Explore**.
1. Click **Cluster > Cluster Members**.
1. Click **Add**.
1. In the **Cluster Permissions** section, choose the custom cluster role that should be assigned to the member.
1. Click **Create**.
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
**Result:** The member has the assigned role.
diff --git a/versioned_docs/version-2.6/v2.6/en/api/api.md b/versioned_docs/version-2.6/v2.6/en/api/api.md
index d1cc9cc4454..d4abcc7a836 100644
--- a/versioned_docs/version-2.6/v2.6/en/api/api.md
+++ b/versioned_docs/version-2.6/v2.6/en/api/api.md
@@ -7,20 +7,20 @@ weight: 24
The API has its own user interface accessible from a web browser. This is an easy way to see resources, perform actions, and see the equivalent cURL or HTTP request & response. To access it:
-{{% tabs %}}
-{{% tab "Rancher v2.6.4+" %}}
+<Tabs>
+<TabItem value="Rancher v2.6.4+">
1. Click on your user avatar in the upper right corner.
1. Click **Account & API Keys**.
1. Under the **API Keys** section, find the **API Endpoint** field and click the link. The link will look something like `https://<RANCHER-SERVER-URL>/v3`, where `<RANCHER-SERVER-URL>` is the fully qualified domain name of your Rancher deployment.
-{{% /tab %}}
-{{% tab "Rancher before v2.6.4" %}}
+</TabItem>
+<TabItem value="Rancher before v2.6.4">
Go to the URL endpoint at `https://<RANCHER-SERVER-URL>/v3`, where `<RANCHER-SERVER-URL>` is the fully qualified domain name of your Rancher deployment.
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
## Authentication
diff --git a/versioned_docs/version-2.6/v2.6/en/cluster-admin/certificate-rotation/certificate-rotation.md b/versioned_docs/version-2.6/v2.6/en/cluster-admin/certificate-rotation/certificate-rotation.md
index c38a4dd0d5b..80f9ef327ef 100644
--- a/versioned_docs/version-2.6/v2.6/en/cluster-admin/certificate-rotation/certificate-rotation.md
+++ b/versioned_docs/version-2.6/v2.6/en/cluster-admin/certificate-rotation/certificate-rotation.md
@@ -9,8 +9,8 @@ By default, Kubernetes clusters require certificates and Rancher launched Kubern
Certificates can be rotated for the following services:
-{{% tabs %}}
-{{% tab "RKE" %}}
+<Tabs>
+<TabItem value="RKE">
- etcd
- kubelet (node certificate)
@@ -20,8 +20,8 @@ Certificates can be rotated for the following services:
- kube-scheduler
- kube-controller-manager
-{{% /tab %}}
-{{% tab "RKE2" %}}
+</TabItem>
+<TabItem value="RKE2">
- admin
- api-server
@@ -35,8 +35,8 @@ Certificates can be rotated for the following services:
- kubelet
- kube-proxy
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
> **Note:** If your webhook certificates were not rotated and have expired after one year, see this [page]({{<baseurl>}}/rancher/v2.6/en/troubleshooting/expired-webhook-certificates/) for help.
@@ -58,15 +58,15 @@ Rancher launched Kubernetes clusters have the ability to rotate the auto-generat
### Additional Notes
-{{% tabs %}}
-{{% tab "RKE" %}}
+<Tabs>
+<TabItem value="RKE">
Even though the RKE CLI can use custom certificates for the Kubernetes cluster components, Rancher currently does not provide the ability to upload these for Rancher-launched Kubernetes clusters.
-{{% /tab %}}
-{{% tab "RKE2" %}}
+</TabItem>
+<TabItem value="RKE2">
In RKE2, both etcd and control plane nodes are treated as the same `server` concept. As such, rotating certificates for services specific to either of these components will result in certificates being rotated on both. The certificates will only change for the specified service, but you will see nodes for both components go into an updating state. You may also see worker-only nodes go into an updating state; this restarts the workers after a certificate change to ensure they get the latest client certs.
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
diff --git a/versioned_docs/version-2.6/v2.6/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md b/versioned_docs/version-2.6/v2.6/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md
index d1cf15c6581..d3e0130accc 100644
--- a/versioned_docs/version-2.6/v2.6/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md
+++ b/versioned_docs/version-2.6/v2.6/en/cluster-admin/cleaning-cluster-nodes/cleaning-cluster-nodes.md
@@ -55,8 +55,8 @@ For registered clusters, the process for removing Rancher is a little different.
After the registered cluster is detached from Rancher, the cluster's workloads will be unaffected and you can access the cluster using the same methods that you did before the cluster was registered into Rancher.
-{{% tabs %}}
-{{% tab "By UI / API" %}}
+<Tabs>
+<TabItem value="By UI / API">
>**Warning:** This process will remove data from your cluster. Make sure you have created a backup of files you want to keep before executing the command, as data will be lost.
After you initiate the removal of a registered cluster using the Rancher UI (or API), the following events occur.
@@ -69,8 +69,8 @@ After you initiate the removal of a registered cluster using the Rancher UI (or
**Result:** All components listed for registered clusters in [What Gets Removed?](#what-gets-removed) are deleted.
-{{% /tab %}}
-{{% tab "By Script" %}}
+</TabItem>
+<TabItem value="By Script">
Rather than cleaning registered cluster nodes using the Rancher UI, you can run a script instead.
>**Prerequisite:**
@@ -100,8 +100,8 @@ Rather than cleaning registered cluster nodes using the Rancher UI, you can run
**Result:** The script runs. All components listed for registered clusters in [What Gets Removed?](#what-gets-removed) are deleted.
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
### Windows Nodes
diff --git a/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/node-requirements/node-requirements.md b/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/node-requirements/node-requirements.md
index 519e8c31077..c98ae7ade56 100644
--- a/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/node-requirements/node-requirements.md
+++ b/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/node-requirements/node-requirements.md
@@ -45,8 +45,8 @@ SUSE Linux may have a firewall that blocks all ports by default. In that situati
When [Launching Kubernetes with Rancher]({{<baseurl>}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/) using Flatcar Container Linux nodes, you must use the following configuration in the [Cluster Config File]({{<baseurl>}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/options/#cluster-config-file):
-{{% tabs %}}
-{{% tab "Canal"%}}
+<Tabs>
+<TabItem value="Canal">
```yaml
rancher_kubernetes_engine_config:
@@ -61,9 +61,9 @@ rancher_kubernetes_engine_config:
extra_args:
flex-volume-plugin-dir: /opt/kubernetes/kubelet-plugins/volume/exec/
```
-{{% /tab %}}
+</TabItem>
-{{% tab "Calico"%}}
+<TabItem value="Calico">
```yaml
rancher_kubernetes_engine_config:
@@ -78,8 +78,8 @@ rancher_kubernetes_engine_config:
extra_args:
flex-volume-plugin-dir: /opt/kubernetes/kubelet-plugins/volume/exec/
```
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
You must also enable the Docker service. You can enable the Docker service using the following command:
diff --git a/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/rke-clusters/node-pools/azure/azure.md b/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/rke-clusters/node-pools/azure/azure.md
index 8e7deb9bf62..f4783ee058f 100644
--- a/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/rke-clusters/node-pools/azure/azure.md
+++ b/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/rke-clusters/node-pools/azure/azure.md
@@ -43,8 +43,8 @@ The creation of this service principal returns three pieces of identification in
# Creating an Azure Cluster
-{{% tabs %}}
-{{% tab "RKE" %}}
+<Tabs>
+<TabItem value="RKE">
1. [Create your cloud credentials](#1-create-your-cloud-credentials)
2. [Create a node template with your cloud credentials](#2-create-a-node-template-with-your-cloud-credentials)
@@ -84,8 +84,8 @@ Use Rancher to create a Kubernetes cluster in Azure.
1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user.
1. Click **Create**.
-{{% /tab %}}
-{{% tab "RKE2" %}}
+</TabItem>
+<TabItem value="RKE2">
### 1. Create your cloud credentials
@@ -116,8 +116,8 @@ Use Rancher to create a Kubernetes cluster in Azure.
1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user.
1. Click **Create**.
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
**Result:**
diff --git a/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/digital-ocean.md b/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/digital-ocean.md
index 7d3e103cd1e..8f28ce92772 100644
--- a/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/digital-ocean.md
+++ b/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/digital-ocean.md
@@ -9,8 +9,8 @@ First, you will set up your DigitalOcean cloud credentials in Rancher. Then you
Then you will create a DigitalOcean cluster in Rancher, and when configuring the new cluster, you will define node pools for it. Each node pool will have a Kubernetes role of etcd, controlplane, or worker. Rancher will install RKE Kubernetes on the new nodes, and it will set up each node with the Kubernetes role defined by the node pool.
-{{% tabs %}}
-{{% tab "RKE" %}}
+<Tabs>
+<TabItem value="RKE">
1. [Create your cloud credentials](#1-create-your-cloud-credentials)
2. [Create a node template with your cloud credentials](#2-create-a-node-template-with-your-cloud-credentials)
@@ -48,8 +48,8 @@ Creating a [node template]({{}}/rancher/v2.6/en/cluster-provisioning/rk
1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user.
1. Click **Create**.
-{{% /tab %}}
-{{% tab "RKE2" %}}
+</TabItem>
+<TabItem value="RKE2">
### 1. Create your cloud credentials
@@ -78,8 +78,8 @@ Use Rancher to create a Kubernetes cluster in DigitalOcean.
1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user.
1. Click **Create**.
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
**Result:**
diff --git a/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2.md b/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2.md
index 26d8d3c4576..6ed594b035b 100644
--- a/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2.md
+++ b/versioned_docs/version-2.6/v2.6/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2.md
@@ -23,8 +23,8 @@ Then you will create an EC2 cluster in Rancher, and when configuring the new clu
The steps to create a cluster differ based on your Rancher version.
-{{% tabs %}}
-{{% tab "RKE" %}}
+<Tabs>
+<TabItem value="RKE">
1. [Create your cloud credentials](#1-create-your-cloud-credentials)
2. [Create a node template with your cloud credentials and information from EC2](#2-create-a-node-template-with-your-cloud-credentials-and-information-from-ec2)
@@ -69,8 +69,8 @@ Add one or more node pools to your cluster. For more information about node pool
>**Note:** If you want to use the [dual-stack](https://kubernetes.io/docs/concepts/services-networking/dual-stack/) feature, there are additional [requirements]({{<baseurl>}}/rke/latest/en/config-options/dual-stack#requirements) that must be taken into consideration.
1. Click **Create**.
-{{% /tab %}}
-{{% tab "RKE2" %}}
+</TabItem>
+<TabItem value="RKE2">
### 1. Create your cloud credentials
@@ -101,8 +101,8 @@ If you already have a set of cloud credentials to use, skip this section.
1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user.
1. Click **Create**.
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
**Result:**
diff --git a/versioned_docs/version-2.6/v2.6/en/installation/install-rancher-on-k8s/gke/gke.md b/versioned_docs/version-2.6/v2.6/en/installation/install-rancher-on-k8s/gke/gke.md
index 1c84eb38083..e88103f1f0a 100644
--- a/versioned_docs/version-2.6/v2.6/en/installation/install-rancher-on-k8s/gke/gke.md
+++ b/versioned_docs/version-2.6/v2.6/en/installation/install-rancher-on-k8s/gke/gke.md
@@ -69,8 +69,8 @@ To install `gcloud` and `kubectl`, perform the following steps:
- Using gcloud init, if you want to be walked through setting defaults.
- Using gcloud config, to individually set your project ID, zone, and region.
-{{% tabs %}}
-{{% tab "Using gloud init" %}}
+<Tabs>
+<TabItem value="Using gcloud init">
1. Run gcloud init and follow the directions:
@@ -84,10 +84,10 @@ To install `gcloud` and `kubectl`, perform the following steps:
```
2. Follow the instructions to authorize gcloud to use your Google Cloud account and select the new project that you created.
-{{% /tab %}}
-{{% tab "Using gcloud config" %}}
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+<TabItem value="Using gcloud config">
+</TabItem>
+</Tabs>
# 4. Confirm that gcloud is configured correctly
diff --git a/versioned_docs/version-2.6/v2.6/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md b/versioned_docs/version-2.6/v2.6/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md
index 55522d6caa9..5003a8a354c 100644
--- a/versioned_docs/version-2.6/v2.6/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md
+++ b/versioned_docs/version-2.6/v2.6/en/installation/install-rancher-on-k8s/install-rancher-on-k8s.md
@@ -154,8 +154,8 @@ However, irrespective of the certificate configuration, the name of the Rancher
> **Tip for testing and development:** This final command to install Rancher requires a domain name that forwards traffic to Rancher. If you are using the Helm CLI to set up a proof-of-concept, you can use a fake domain name when passing the `hostname` option. An example of a fake domain name would be `<IP_OF_LINUX_NODE>.sslip.io`, which would expose Rancher on an IP where it is running. Production installs would require a real domain name.
-{{% tabs %}}
-{{% tab "Rancher-generated Certificates" %}}
+<Tabs>
+<TabItem value="Rancher-generated Certificates">
By default, Rancher generates a CA and uses `cert-manager` to issue the certificate for access to the Rancher server interface.
@@ -182,8 +182,8 @@ Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are
deployment "rancher" successfully rolled out
```
-{{% /tab %}}
-{{% tab "Let's Encrypt" %}}
+</TabItem>
+<TabItem value="Let's Encrypt">
This option uses `cert-manager` to automatically request and renew [Let's Encrypt](https://letsencrypt.org/) certificates. This is a free service that provides you with a valid certificate as Let's Encrypt is a trusted CA.
@@ -216,8 +216,8 @@ Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are
deployment "rancher" successfully rolled out
```
-{{% /tab %}}
-{{% tab "Certificates from Files" %}}
+</TabItem>
+<TabItem value="Certificates from Files">
In this option, Kubernetes secrets are created from your own certificates for Rancher to use.
When you run this command, the `hostname` option must match the `Common Name` or a `Subject Alternative Names` entry in the server certificate or the Ingress controller will fail to configure correctly.
helm install rancher rancher-<CHART_REPO>/rancher \
```
Now that Rancher is deployed, see [Adding TLS Secrets]({{<baseurl>}}/rancher/v2.6/en/installation/resources/tls-secrets/) to publish the certificate files so Rancher and the Ingress controller can use them.
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>
The Rancher chart configuration has many options for customizing the installation to suit your specific environment. Here are some common advanced scenarios.
diff --git a/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md b/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md
index 5245f52e95d..82b1e109944 100644
--- a/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md
+++ b/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/air-gap/launch-kubernetes/launch-kubernetes.md
@@ -11,8 +11,8 @@ Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes
The steps to set up an air-gapped Kubernetes cluster on RKE or K3s are shown below.
-{{% tabs %}}
-{{% tab "K3s" %}}
+<Tabs>
+<TabItem value="K3s">
In this guide, we assume you have created your nodes in your air-gapped environment and have a secure Docker private registry on your bastion server.
@@ -135,8 +135,8 @@ Upgrading an air-gap environment can be accomplished in the following manner:
1. Download the new air-gap images (tar file) from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be upgrading to. Place the tar in the `/var/lib/rancher/k3s/agent/images/` directory on each node. Delete the old tar file.
2. Copy and replace the old K3s binary in `/usr/local/bin` on each node. Copy over the install script at https://get.k3s.io (as it is possible it has changed since the last release). Run the script again just as you had done in the past with the same environment variables.
3. Restart the K3s service (if it is not restarted automatically by the installer).
-{{% /tab %}}
-{{% tab "RKE" %}}
+
+
We will create a Kubernetes cluster using Rancher Kubernetes Engine (RKE). Before you can start your Kubernetes cluster, you’ll need to install RKE and create an RKE config file.
### 1. Install RKE
@@ -208,8 +208,8 @@ Save a copy of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_cluster.yml`: The [Kubeconfig file]({{}}/rke/latest/en/kubeconfig/) for the cluster. This file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file]({{}}/rke/latest/en/installation/#kubernetes-cluster-state). This file contains the current state of the cluster, including the RKE configuration and the certificates.
_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._
-{{% /tab %}}
-{{% /tabs %}}
+
+
> **Note:** The "rancher-cluster" parts of the latter two file names depend on how you name the RKE cluster configuration file.
diff --git a/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md b/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md
index 18c8817f76f..5f5143ec2ee 100644
--- a/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md
+++ b/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/air-gap/populate-private-registry/populate-private-registry.md
@@ -17,8 +17,8 @@ The steps in this section differ depending on whether or not you are planning to
>
> If the registry has certs, follow [this K3s documentation](https://rancher.com/docs/k3s/latest/en/installation/private-registry/) about adding a private registry. The certs and registry configuration files need to be mounted into the Rancher container.
-{{% tabs %}}
-{{% tab "Linux Only Clusters" %}}
+
+
For Rancher servers that will only provision Linux clusters, these are the steps to populate your private registry.
@@ -104,8 +104,8 @@ The `rancher-images.txt` is expected to be on the workstation in the same direct
```plain
./rancher-load-images.sh --image-list ./rancher-images.txt --registry
```
-{{% /tab %}}
-{{% tab "Linux and Windows Clusters" %}}
+
+
For Rancher servers that will provision both Linux and Windows clusters, there are separate steps for populating your private registry with the Windows images and the Linux images. Since a Windows cluster is a mix of Linux and Windows nodes, the Linux images pushed into the private registry are manifests.
@@ -283,8 +283,8 @@ The image list, `rancher-images.txt` or `rancher-windows-images.txt`, is expecte
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
### [Next step for Kubernetes Installs - Launch a Kubernetes Cluster]({{}}/rancher/v2.6/en/installation/other-installation-methods/air-gap/launch-kubernetes/)
diff --git a/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md b/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md
index 19cc8159774..3395adf7b41 100644
--- a/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md
+++ b/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/air-gap/prepare-nodes/prepare-nodes.md
@@ -11,8 +11,8 @@ The infrastructure depends on whether you are installing Rancher on a K3s Kubern
Rancher can be installed on any Kubernetes cluster. The RKE and K3s Kubernetes infrastructure tutorials below are still included for convenience.
-{{% tabs %}}
-{{% tab "K3s" %}}
+
+
We recommend setting up the following infrastructure for a high-availability installation:
- **Two Linux nodes,** typically virtual machines, in the infrastructure provider of your choice.
@@ -82,8 +82,8 @@ Rancher supports air gap installs using a private registry. You must have your o
In a later step, when you set up your K3s Kubernetes cluster, you will create a [private registries configuration file]({{}}/k3s/latest/en/installation/private-registry/) with details from this registry.
If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry)
-{{% /tab %}}
-{{% tab "RKE" %}}
+
+
To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure:
@@ -146,8 +146,8 @@ In a later step, when you set up your RKE Kubernetes cluster, you will create a
If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry)
-{{% /tab %}}
-{{% tab "Docker" %}}
+
+
> The Docker installation is for Rancher users who want to test out Rancher. Since there is only one node and a single Docker container, if the node goes down, you will lose all of your Rancher server's data.
>
> The Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.]({{}}/rancher/v2.6/en/backups/migrating-rancher)
@@ -166,7 +166,7 @@ Rancher supports air gap installs using a Docker private registry on your bastio
If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/)
-{{% /tab %}}
-{{% /tabs %}}
+
+
### [Next: Collect and Publish Images to your Private Registry]({{}}/rancher/v2.6/en/installation/other-installation-methods/air-gap/populate-private-registry/)
diff --git a/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md b/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md
index 3b03de02a22..bf924bc0d36 100644
--- a/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md
+++ b/versioned_docs/version-2.6/v2.6/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/single-node-upgrades.md
@@ -125,8 +125,8 @@ To see the command to use when starting the new Rancher server container, choose
- Docker Upgrade
- Docker Upgrade for Air Gap Installs
-{{% tabs %}}
-{{% tab "Docker Upgrade" %}}
+
+
Select the option that you used to install Rancher server.
@@ -243,8 +243,8 @@ Privileged access is [required.]({{}}/rancher/v2.6/en/installation/othe
{{% /accordion %}}
-{{% /tab %}}
-{{% tab "Docker Air Gap Upgrade" %}}
+
+
For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.
@@ -337,8 +337,8 @@ docker run -d --volumes-from rancher-data \
```
Privileged access is [required.]({{}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)
{{% /accordion %}}
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:** You have upgraded Rancher. Data from your upgraded server is now saved to the `rancher-data` container for use in future upgrades.
diff --git a/versioned_docs/version-2.6/v2.6/en/installation/requirements/ports/ports.md b/versioned_docs/version-2.6/v2.6/en/installation/requirements/ports/ports.md
index a9eaaf1bd81..daa5cadf03f 100644
--- a/versioned_docs/version-2.6/v2.6/en/installation/requirements/ports/ports.md
+++ b/versioned_docs/version-2.6/v2.6/en/installation/requirements/ports/ports.md
@@ -279,8 +279,8 @@ When using the [AWS EC2 node driver]({{}}/rancher/v2.6/en/cluster-provi
SUSE Linux may have a firewall that blocks all ports by default. To open the ports needed for adding the host to a custom cluster,
-{{% tabs %}}
-{{% tab "SLES 15 / openSUSE Leap 15" %}}
+
+
1. SSH into the instance.
1. Start YaST in text mode:
```
@@ -298,8 +298,8 @@ UDP Ports
1. When all required ports are entered, select **Accept**.
-{{% /tab %}}
-{{% tab "SLES 12 / openSUSE Leap 42" %}}
+
+
1. SSH into the instance.
1. Edit `/etc/sysconfig/SuSEfirewall2` and open the required ports. In this example, ports 9796 and 10250 are also opened for monitoring:
```
@@ -311,7 +311,7 @@ UDP Ports
```
SuSEfirewall2
```
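As an illustration, the relevant lines in `/etc/sysconfig/SuSEfirewall2` take space-separated port lists; the exact set of ports depends on your cluster roles, and the values below are only an example following the monitoring ports mentioned above:

```
FW_SERVICES_EXT_TCP="22 80 443 9796 10250"
FW_SERVICES_EXT_UDP="8472"
```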
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:** The node has the open ports required to be added to a custom cluster.
diff --git a/versioned_docs/version-2.6/v2.6/en/installation/resources/choosing-version/choosing-version.md b/versioned_docs/version-2.6/v2.6/en/installation/resources/choosing-version/choosing-version.md
index db2a8afef31..a337e3d0be7 100644
--- a/versioned_docs/version-2.6/v2.6/en/installation/resources/choosing-version/choosing-version.md
+++ b/versioned_docs/version-2.6/v2.6/en/installation/resources/choosing-version/choosing-version.md
@@ -9,8 +9,8 @@ For a high-availability installation of Rancher, which is recommended for produc
For Docker installations of Rancher, which are used for development and testing, you will install Rancher as a **Docker image**.
-{{% tabs %}}
-{{% tab "Helm Charts" %}}
+
+
When Rancher Server is [installed on a Kubernetes cluster]({{}}/rancher/v2.6/en/installation/install-rancher-on-k8s/), it is installed, upgraded, and rolled back using a Helm chart. Therefore, as you prepare to install or upgrade a high-availability Rancher configuration, you must add a Helm chart repository that contains the charts for installing Rancher.
@@ -73,8 +73,8 @@ After installing Rancher, if you want to change which Helm chart repository to i
```
4. Continue to follow the steps to [upgrade Rancher]({{}}/rancher/v2.6/en/installation/install-rancher-on-k8s/upgrades) from the new Helm chart repository.
-{{% /tab %}}
-{{% tab "Docker Images" %}}
+
+
When performing [Docker installs]({{}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker), upgrades, or rollbacks, you can use _tags_ to install a specific version of Rancher.
### Server Tags
@@ -92,5 +92,5 @@ Rancher Server is distributed as a Docker image, which have tags attached to the
> - The `master` tag or any tag with `-rc` or another suffix is meant for the Rancher testing team to validate. You should not use these tags, as these builds are not officially supported.
> - Want to install an alpha release for preview? Install using one of the alpha tags listed on our [announcements page](https://forums.rancher.com/c/announcements) (e.g., `v2.2.0-alpha1`). Caveat: Alpha releases cannot be upgraded to or from any other release.
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.6/v2.6/en/monitoring-alerting/configuration/receiver/receiver.md b/versioned_docs/version-2.6/v2.6/en/monitoring-alerting/configuration/receiver/receiver.md
index 3a9daaeb6ce..7c18a7b9754 100644
--- a/versioned_docs/version-2.6/v2.6/en/monitoring-alerting/configuration/receiver/receiver.md
+++ b/versioned_docs/version-2.6/v2.6/en/monitoring-alerting/configuration/receiver/receiver.md
@@ -32,8 +32,8 @@ The [Alertmanager Config](https://prometheus.io/docs/alerting/latest/configurati
To create notification receivers in the Rancher UI,
-{{% tabs %}}
-{{% tab "Rancher v2.6.5+" %}}
+
+
1. Go to the cluster where you want to create receivers. Click **Monitoring -> Alerting -> AlertManagerConfigs**.
1. Click **Create**.
@@ -42,16 +42,16 @@ To create notification receivers in the Rancher UI,
1. Configure one or more providers for the receiver. For help filling out the forms, refer to the configuration options below.
1. Click **Create**.
-{{% /tab %}}
-{{% tab "Rancher before v2.6.5" %}}
+
+
1. Go to the cluster where you want to create receivers. Click **Monitoring** and click **Receiver**.
2. Enter a name for the receiver.
3. Configure one or more providers for the receiver. For help filling out the forms, refer to the configuration options below.
4. Click **Create**.
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:** Alerts can be configured to send notifications to the receiver(s).
diff --git a/versioned_docs/version-2.6/v2.6/en/monitoring-alerting/configuration/route/route.md b/versioned_docs/version-2.6/v2.6/en/monitoring-alerting/configuration/route/route.md
index 4366f20a9a5..8fa7dab7706 100644
--- a/versioned_docs/version-2.6/v2.6/en/monitoring-alerting/configuration/route/route.md
+++ b/versioned_docs/version-2.6/v2.6/en/monitoring-alerting/configuration/route/route.md
@@ -42,8 +42,8 @@ The route needs to refer to a [receiver](#receiver-configuration) that has alrea
### Grouping
-{{% tabs %}}
-{{% tab "Rancher v2.6.5+" %}}
+
+
> **Note:** As of Rancher v2.6.5, `Group By` accepts a list of strings instead of key-value pairs. See the [upstream documentation](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#route) for details.
@@ -54,8 +54,8 @@ The route needs to refer to a [receiver](#receiver-configuration) that has alrea
| Group Interval | 5m | How long to wait before sending an alert that has been added to a group of alerts for which an initial notification has already been sent. |
| Repeat Interval | 4h | How long to wait before re-sending a given alert that has already been sent. |
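For reference, a route expressed directly in an AlertmanagerConfig resource might look like the following sketch. Field names follow the prometheus-operator `Route` API, and `my-receiver` is a placeholder for a receiver you have already created:

```yaml
route:
  receiver: my-receiver
  groupBy: ["namespace", "alertname"]   # v2.6.5+: a list of label names
  groupWait: 30s
  groupInterval: 5m
  repeatInterval: 4h
```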
-{{% /tab %}}
-{{% tab "Rancher before v2.6.5" %}}
+
+
| Field | Default | Description |
|-------|--------------|---------|
@@ -64,8 +64,8 @@ The route needs to refer to a [receiver](#receiver-configuration) that has alrea
| Group Interval | 5m | How long to wait before sending an alert that has been added to a group of alerts for which an initial notification has already been sent. |
| Repeat Interval | 4h | How long to wait before re-sending a given alert that has already been sent. |
-{{% /tab %}}
-{{% /tabs %}}
+
+
diff --git a/versioned_docs/version-2.6/v2.6/en/monitoring-alerting/guides/persist-grafana/persist-grafana.md b/versioned_docs/version-2.6/v2.6/en/monitoring-alerting/guides/persist-grafana/persist-grafana.md
index 4e36acf3341..3097422a018 100644
--- a/versioned_docs/version-2.6/v2.6/en/monitoring-alerting/guides/persist-grafana/persist-grafana.md
+++ b/versioned_docs/version-2.6/v2.6/en/monitoring-alerting/guides/persist-grafana/persist-grafana.md
@@ -10,8 +10,8 @@ To allow the Grafana dashboard to persist after the Grafana instance restarts, a
# Creating a Persistent Grafana Dashboard
-{{% tabs %}}
-{{% tab "Rancher v2.5.8+" %}}
+
+
> **Prerequisites:**
>
@@ -82,8 +82,8 @@ grafana.sidecar.dashboards.searchNamespace=ALL
Note that the RBAC roles exposed by the Monitoring chart to add Grafana Dashboards are still restricted to giving permissions for users to add dashboards in the namespace defined in `grafana.dashboards.namespace`, which defaults to `cattle-dashboards`.
-{{% /tab %}}
-{{% tab "Rancher before v2.5.8" %}}
+
+
> **Prerequisites:**
>
> - The monitoring application needs to be installed.
@@ -124,8 +124,8 @@ To prevent the persistent dashboard from being deleted when Monitoring v2 is uni
helm.sh/resource-policy: "keep"
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
# Known Issues
diff --git a/versioned_docs/version-2.6/v2.6/en/pipelines/pipelines.md b/versioned_docs/version-2.6/v2.6/en/pipelines/pipelines.md
index 5c9a2e868d7..15e403d2b45 100644
--- a/versioned_docs/version-2.6/v2.6/en/pipelines/pipelines.md
+++ b/versioned_docs/version-2.6/v2.6/en/pipelines/pipelines.md
@@ -100,8 +100,8 @@ Before you can start configuring a pipeline for your repository, you must config
Select your provider's tab below and follow the directions.
-{{% tabs %}}
-{{% tab "GitHub" %}}
+
+
1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
@@ -113,8 +113,8 @@ Select your provider's tab below and follow the directions.
1. If you're using GitHub Enterprise, select **Use a private github enterprise installation**. Enter the host address of your GitHub installation.
1. Click **Authenticate**.
-{{% /tab %}}
-{{% tab "GitLab" %}}
+
+
1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
@@ -130,8 +130,8 @@ Select your provider's tab below and follow the directions.
>**Note:**
> 1. Pipelines use the GitLab [v4 API](https://docs.gitlab.com/ee/api/v3_to_v4.html); the supported GitLab version is 9.0+.
> 2. If you use GitLab 10.7+ and your Rancher setup is in a local network, enable the **Allow requests to the local network from hooks and services** option in GitLab admin settings.
-{{% /tab %}}
-{{% tab "Bitbucket Cloud" %}}
+
+
1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
@@ -143,8 +143,8 @@ Select your provider's tab below and follow the directions.
1. From Bitbucket, copy the consumer **Key** and **Secret**. Paste them into Rancher.
1. Click **Authenticate**.
-{{% /tab %}}
-{{% tab "Bitbucket Server" %}}
+
+
1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster where you want to configure pipelines and click **Explore**.
@@ -162,8 +162,8 @@ Select your provider's tab below and follow the directions.
> 1. Setup Rancher server with a certificate from a trusted CA.
> 1. If you're using self-signed certificates, import Rancher server's certificate to the Bitbucket server. For instructions, see the Bitbucket server documentation for [configuring self-signed certificates](https://confluence.atlassian.com/bitbucketserver/if-you-use-self-signed-certificates-938028692.html).
>
-{{% /tab %}}
-{{% /tabs %}}
+
+
**Result:** After the version control provider is authenticated, you will automatically be redirected to start configuring which repositories you want to start using with a pipeline.
diff --git a/versioned_docs/version-2.6/v2.6/en/pipelines/storage/storage.md b/versioned_docs/version-2.6/v2.6/en/pipelines/storage/storage.md
index 5e81c0595a2..a5f7aec5c01 100644
--- a/versioned_docs/version-2.6/v2.6/en/pipelines/storage/storage.md
+++ b/versioned_docs/version-2.6/v2.6/en/pipelines/storage/storage.md
@@ -25,8 +25,8 @@ This section assumes that you understand how persistent storage works in Kuberne
- **Add Volume > Use an existing persistent volume (claim)**
1. Complete the form that displays to choose a persistent volume for the internal Docker registry.
-{{% tabs %}}
-{{% tab "Add a new persistent volume" %}}
+
+
1. Enter a **Name** for the volume claim.
@@ -39,9 +39,9 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
+
-{{% tab "Use an existing persistent volume" %}}
+
1. Enter a **Name** for the volume claim.
@@ -51,9 +51,9 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
+
-{{% /tabs %}}
+
1. From the **Mount Point** field, enter `/var/lib/registry`, which is the data storage path inside the Docker registry container.
@@ -72,9 +72,9 @@ This section assumes that you understand how persistent storage works in Kuberne
- **Add Volume > Use an existing persistent volume (claim)**
1. Complete the form that displays to choose a persistent volume for the internal Docker registry.
-{{% tabs %}}
+
-{{% tab "Add a new persistent volume" %}}
+
1. Enter a **Name** for the volume claim.
@@ -87,8 +87,8 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
-{{% tab "Use an existing persistent volume" %}}
+
+
1. Enter a **Name** for the volume claim.
@@ -98,8 +98,8 @@ This section assumes that you understand how persistent storage works in Kuberne
1. Click **Define**.
-{{% /tab %}}
-{{% /tabs %}}
+
+
1. From the **Mount Point** field, enter `/data`, which is the data storage path inside the Minio container.
diff --git a/versioned_docs/version-2.6/v2.6/en/quick-start-guide/deployment/quickstart-manual-setup/quickstart-manual-setup.md b/versioned_docs/version-2.6/v2.6/en/quick-start-guide/deployment/quickstart-manual-setup/quickstart-manual-setup.md
index 28315797069..418f9584310 100644
--- a/versioned_docs/version-2.6/v2.6/en/quick-start-guide/deployment/quickstart-manual-setup/quickstart-manual-setup.md
+++ b/versioned_docs/version-2.6/v2.6/en/quick-start-guide/deployment/quickstart-manual-setup/quickstart-manual-setup.md
@@ -28,15 +28,15 @@ Save the IP of the Linux machine.
The kubeconfig file is important for accessing the Kubernetes cluster. Copy the file at `/etc/rancher/k3s/k3s.yaml` from the Linux machine and save it as `~/.kube/config` on your local workstation. One way to do this is to use the `scp` tool and run this command on your local machine:
-{{% tabs %}}
-{{% tab "Mac and Linux" %}}
+
+
```
scp root@<IP_OF_LINUX_MACHINE>:/etc/rancher/k3s/k3s.yaml ~/.kube/config
```
-{{% /tab %}}
-{{% tab "Windows" %}}
+
+
By default, `scp` is not a recognized command in PowerShell, so we need to install a module first.
@@ -50,15 +50,15 @@ Install-Module Posh-SSH
scp root@<IP_OF_LINUX_MACHINE>:/etc/rancher/k3s/k3s.yaml $env:USERPROFILE\.kube\config
```
-{{% /tab %}}
-{{% /tabs %}}
+
+
## Edit the Rancher server URL in the kubeconfig
In the kubeconfig file, you will need to change the value of the `server` field to `https://<IP_OF_LINUX_MACHINE>:6443`. The Kubernetes API server will be reached at port 6443, while the Rancher server will be reached at ports 80 and 443. This edit is needed so that when you run Helm or kubectl commands from your local workstation, you will be able to communicate with the Kubernetes cluster that Rancher will be installed on.
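A quick way to make that edit is with `sed`. This self-contained sketch operates on a stand-in file under `/tmp`; on your workstation you would run the `sed` line against the real `~/.kube/config`, and `192.0.2.10` is a placeholder for your machine's actual IP:

```shell
# Stand-in for the copied k3s kubeconfig (the real file lives at ~/.kube/config)
cat > /tmp/k3s-kubeconfig <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Point the server field at the Linux machine instead of the loopback address
sed -i 's|https://127.0.0.1:6443|https://192.0.2.10:6443|' /tmp/k3s-kubeconfig
grep 'server:' /tmp/k3s-kubeconfig
```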
-{{% tabs %}}
-{{% tab "Mac and Linux" %}}
+
+
One way to open the kubeconfig file for editing is to use Vim:
@@ -68,8 +68,8 @@ vi ~/.kube/config
Press `i` to put Vim in insert mode. To save your work, press `Esc`, type `:wq`, and press `Enter`.
-{{% /tab %}}
-{{% tab "Windows" %}}
+
+
In Windows PowerShell, you can use `notepad.exe` to edit the kubeconfig file:
@@ -80,8 +80,8 @@ notepad.exe $env:USERPROFILE\.kube\config
Once edited, press `Ctrl+S` or go to `File > Save` to save your work.
-{{% /tab %}}
-{{% /tabs %}}
+
+
## Install Rancher with Helm