diff --git a/content/_index.html b/content/_index.html
index 209d77f70cb..3cb6dc656a0 100644
--- a/content/_index.html
+++ b/content/_index.html
@@ -110,11 +110,10 @@
Rancher manages all of your Kubernetes clusters everywhere, unifies them under centralized RBAC, monitors them, and lets you easily deploy and manage workloads through an intuitive user interface.
diff --git a/content/os/v1.x/en/installation/cloud/openstack/_index.md b/content/os/v1.x/en/installation/cloud/openstack/_index.md
index 679c48e998e..9ab19b45d8e 100644
--- a/content/os/v1.x/en/installation/cloud/openstack/_index.md
+++ b/content/os/v1.x/en/installation/cloud/openstack/_index.md
@@ -5,6 +5,6 @@ aliases:
- /os/v1.x/en/installation/running-rancheros/cloud/openstack
---
-As of v0.5.0, RancherOS releases include an Openstack image that can be found on our [releases page](https://github.com/rancher/os/releases). The image format is [QCOW3](https://wiki.qemu.org/Features/Qcow3#Fully_QCOW2_backwards-compatible_feature_set) that is backward compatible with QCOW2.
+As of v0.5.0, RancherOS releases include an OpenStack image that can be found on our [releases page](https://github.com/rancher/os/releases). The image format is [QCOW3](https://wiki.qemu.org/Features/Qcow3#Fully_QCOW2_backwards-compatible_feature_set) that is backward compatible with QCOW2.
When launching an instance using the image, you must enable **Advanced Options** -> **Configuration Drive** in order to use a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file.
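For reference, a minimal cloud-config file might look like the following sketch (the hostname and SSH key are placeholder values, not taken from this document):

```yaml
#cloud-config
hostname: rancheros-node
ssh_authorized_keys:
  - ssh-rsa AAAA... user@example.com
```

The file is passed to the instance through the configuration drive enabled above.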
diff --git a/content/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md b/content/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md
index 28a45b5960a..586c827112a 100644
--- a/content/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md
+++ b/content/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md
@@ -43,4 +43,4 @@ The `Custom` cloud provider is available if you want to configure any [Kubernete
For the custom cloud provider option, you can refer to the [RKE docs]({{}}/rke/latest/en/config-options/cloud-providers/) on how to edit the YAML file for your specific cloud provider. Some cloud providers have more detailed configuration:
* [vSphere]({{}}/rke/latest/en/config-options/cloud-providers/vsphere/)
-* [Openstack]({{}}/rke/latest/en/config-options/cloud-providers/openstack/)
\ No newline at end of file
+* [OpenStack]({{}}/rke/latest/en/config-options/cloud-providers/openstack/)
diff --git a/content/rancher/v2.5/en/cluster-admin/editing-clusters/_index.md b/content/rancher/v2.5/en/cluster-admin/editing-clusters/_index.md
index 011d4e92b1f..610282c58f8 100644
--- a/content/rancher/v2.5/en/cluster-admin/editing-clusters/_index.md
+++ b/content/rancher/v2.5/en/cluster-admin/editing-clusters/_index.md
@@ -34,7 +34,7 @@ Option | Description |
---------|----------|
Kubernetes Version | The version of Kubernetes installed on each cluster node. For more detail, see [Upgrading Kubernetes]({{}}/rancher/v2.5/en/cluster-admin/upgrading-kubernetes). |
 Network Provider | The container networking interface (CNI) that powers networking for your cluster. **Note:** You can only choose this option while provisioning your cluster. It cannot be edited later. |
- Project Network Isolation | If you're using the Canal network provider, you can choose whether to enable or disable inter-project communication. |
+ Project Network Isolation | If your network provider allows project network isolation, you can choose whether to enable or disable inter-project communication. Before Rancher v2.5.8, project network isolation is only available if you are using the Canal network plugin for RKE. In v2.5.8+, project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin. |
Nginx Ingress | If you want to publish your applications in a high-availability configuration, and you're hosting your nodes with a cloud-provider that doesn't have a native load-balancing feature, enable this option to use Nginx ingress within the cluster. |
Metrics Server Monitoring | Each cloud provider capable of launching a cluster using RKE can collect metrics and monitor for your cluster nodes. Enable this option to view your node metrics from your cloud provider's portal. |
Pod Security Policy Support | Enables [pod security policies]({{}}/rancher/v2.5/en/admin-settings/pod-security-policies/) for the cluster. After enabling this option, choose a policy using the **Default Pod Security Policy** drop-down. |
diff --git a/content/rancher/v2.5/en/cluster-admin/projects-and-namespaces/_index.md b/content/rancher/v2.5/en/cluster-admin/projects-and-namespaces/_index.md
index dbf7e3cf84f..b495d62f208 100644
--- a/content/rancher/v2.5/en/cluster-admin/projects-and-namespaces/_index.md
+++ b/content/rancher/v2.5/en/cluster-admin/projects-and-namespaces/_index.md
@@ -106,12 +106,7 @@ The `system` project:
- Allows you to add more namespaces or move its namespaces to other projects.
- Cannot be deleted because it's required for cluster operations.
->**Note:** In clusters where both:
->
-> - The Canal network plug-in is in use.
-> - The Project Network Isolation option is enabled.
->
->The `system` project overrides the Project Network Isolation option so that it can communicate with other projects, collect logs, and check health.
+>**Note:** In RKE clusters where project network isolation is enabled, the `system` project overrides the isolation setting so that it can communicate with other projects, collect logs, and check health.
# Project Authorization
diff --git a/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md b/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md
index 31fc90aeb5c..a2541bd4a10 100644
--- a/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md
+++ b/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md
@@ -84,7 +84,7 @@ You can access your cluster after its state is updated to **Active.**
# Private Clusters
-We now support private GKE clusters. Note: This advanced setup can require more steps during the cluster provisioning process. For details, see [this section.](./private-clusters)
+Private GKE clusters are supported. Note: This advanced setup can require more steps during the cluster provisioning process. For details, see [this section.](./private-clusters)
# Configuration Reference
diff --git a/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/config-reference/_index.md b/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/config-reference/_index.md
index de0bf4d6f04..da4b1aa5a94 100644
--- a/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/config-reference/_index.md
+++ b/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/config-reference/_index.md
@@ -77,11 +77,11 @@ The name of an existing secondary range for service IP addresses. If selected, *
### Service Address Range
-The address range assigned to the services in the cluster. Must be a valid CIDR range, e.g. 10.94.0.0/18. If not provided, will be created automatically. Must be provided if using a Shared VPC network. For more information on how to determine the IP address range for your servicess, refer to [this section.](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing_secondary_range_svcs)
+The address range assigned to the services in the cluster. Must be a valid CIDR range, e.g. 10.94.0.0/18. If not provided, one will be created automatically. A range must be provided if using a Shared VPC network. For more information on how to determine the IP address range for your services, refer to [this section.](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing_secondary_range_svcs)
### Private Cluster
-> Warning: private clusters require additional planning and configuration outside of Rancher. Refer to the [private cluster guide]({{< baseurl >}}/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/#private-clusters).
+> Warning: private clusters require additional planning and configuration outside of Rancher. Refer to the [private cluster guide]({{< baseurl >}}/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/private-clusters/).
Assign nodes only internal IP addresses. Private cluster nodes cannot access the public internet unless additional networking steps are taken in GCP.
diff --git a/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/private-clusters/_index.md b/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/private-clusters/_index.md
index 046c2b044a9..d938b790361 100644
--- a/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/private-clusters/_index.md
+++ b/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/private-clusters/_index.md
@@ -1,9 +1,9 @@
----
+---
title: Private Clusters
weight: 2
---
-In GKE, [private clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept) are clusters whose nodes are isolated from inbound and outbound traffic by assigning them internal IP addresses only. Private clusters in GKE have the option of exposing the control plane endpoint as a publicly accessible address or as a private address. This is different from other Kubernetes providers, which may refer to clusters with private control plane endpoints as "private clusters" but still allow traffic to and from nodes. You may want to create a cluster with private nodes, with or without a public control plane endpoint, depending on your organization's networking and security requirements. A GKE cluster hosted in Rancher can use isolated nodes by selecting "Private Cluster" in the Cluster Options (under "Show advanced options"). The control plane endpoint can optionally be made private by selecting "Enable Private Endpoint".
+In GKE, [private clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept) are clusters whose nodes are isolated from inbound and outbound traffic by assigning them internal IP addresses only. Private clusters in GKE have the option of exposing the control plane endpoint as a publicly accessible address or as a private address. This is different from other Kubernetes providers, which may refer to clusters with private control plane endpoints as "private clusters" but still allow traffic to and from nodes. You may want to create a cluster with private nodes, with or without a public control plane endpoint, depending on your organization's networking and security requirements. A GKE cluster provisioned from Rancher can use isolated nodes by selecting "Private Cluster" in the Cluster Options (under "Show advanced options"). The control plane endpoint can optionally be made private by selecting "Enable Private Endpoint".
### Private Nodes
@@ -14,14 +14,14 @@ Because the nodes in a private cluster only have internal IP addresses, they wil
>**Note**
>Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing).
-If restricting outgoing internet access is not a concern for your organization, use Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service to allow nodes in the private network to access the internet, allowing them to download the required images from Dockerhub. This is the simplest solution.
+If restricting outgoing internet access is not a concern for your organization, use Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service to allow nodes in the private network to access the internet, enabling them to download the required images from Docker Hub and contact the Rancher management server. This is the simplest solution.
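+Assuming a hypothetical router name, network, and region (verify the flags against Google's current documentation), setting up Cloud NAT might be sketched as:
+
+```plain
+gcloud compute routers create nat-router --network=my-vpc --region=us-central1
+gcloud compute routers nats create nat-config --router=nat-router \
+  --region=us-central1 --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
+```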
#### Private registry
>**Note**
>This scenario is not officially supported, but is described for cases in which using the Cloud NAT service is not sufficient.
-If restricting both incoming and outgoing traffic to nodes is a requirement, follow the air-gapped installation instructions to set up a private container image [registry](https://rancher.com/docs/rancher/v2.x/en/installation/other-installation-methods/air-gap/) on the VPC where the cluster is going to be, allowing the cluster nodes to access and download the images they need to run the cluster agent.
+If restricting both incoming and outgoing traffic to nodes is a requirement, follow the air-gapped installation instructions to set up a private container image [registry](https://rancher.com/docs/rancher/v2.x/en/installation/other-installation-methods/air-gap/) in the VPC where the cluster will reside, allowing the cluster nodes to access and download the images they need to run the cluster agent. If the control plane endpoint is also private, Rancher will need [direct access](#direct-access) to it.
### Private Control Plane Endpoint
@@ -37,6 +37,6 @@ to the cluster in order to run this command can be done by creating a temporary
#### Direct access
-If the Rancher server is run on the same VPC as the cluster's control plane, it will have direct access to the control plane's private endpoint. The cluster nodes will need to have access to a private registry to download images as described above.
+If the Rancher server is run on the same VPC as the cluster's control plane, it will have direct access to the control plane's private endpoint. The cluster nodes will need to have access to a [private registry](#private-registry) to download images as described above.
You can also use services from Google such as [Cloud VPN](https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview) or [Cloud Interconnect VLAN](https://cloud.google.com/network-connectivity/docs/interconnect) to facilitate connectivity between your organization's network and your Google VPC.
diff --git a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md
index 477d4b1315f..79e705f929c 100644
--- a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md
+++ b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md
@@ -43,4 +43,4 @@ The `Custom` cloud provider is available if you want to configure any [Kubernete
For the custom cloud provider option, you can refer to the [RKE docs]({{}}/rke/latest/en/config-options/cloud-providers/) on how to edit the YAML file for your specific cloud provider. Some cloud providers have more detailed configuration:
* [vSphere]({{}}/rke/latest/en/config-options/cloud-providers/vsphere/)
-* [Openstack]({{}}/rke/latest/en/config-options/cloud-providers/openstack/)
\ No newline at end of file
+* [OpenStack]({{}}/rke/latest/en/config-options/cloud-providers/openstack/)
diff --git a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md
index 772b79b1740..419860882dd 100644
--- a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md
+++ b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md
@@ -58,7 +58,7 @@ Provision the host according to the [installation requirements]({{}}/ra
>- The only Network Provider available for clusters with Windows support is Flannel.
6. Click **Next**.
-7. From **Node Role**, choose the roles that you want filled by a cluster node.
+7. From **Node Role**, choose the roles that you want filled by a cluster node. You must provision at least one node for each role: `etcd`, `worker`, and `control plane`. All three roles are required for a custom cluster to finish provisioning. For more information on roles, see [this section.]({{}}/rancher/v2.5/en/overview/concepts/#roles-for-nodes-in-kubernetes-clusters)
>**Notes:**
>
diff --git a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/_index.md b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/_index.md
index 3067bb45b3a..ca30c9abf85 100644
--- a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/_index.md
+++ b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/_index.md
@@ -36,10 +36,10 @@ Please refer to [Amazon EC2 security group when using Node Driver]({{}}
### Instance Options
-Configure the instances that will be created. Make sure you configure the correct **SSH User** for the configured AMI.
+Configure the instances that will be created. Make sure you configure the correct **SSH User** for the configured AMI. The selected region may not support the default instance type. In that case, you must select an instance type that the region does support; otherwise an error will occur stating that the requested configuration is not supported.
If you need to pass an **IAM Instance Profile Name** (not ARN), for example, when you want to use a [Kubernetes Cloud Provider]({{}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/options/cloud-providers), you will need an additional permission in your policy. See [Example IAM policy with PassRole](#example-iam-policy-with-passrole) for an example policy.
### Engine Options
-In the **Engine Options** section of the node template, you can configure the Docker daemon. You may want to specify the docker version or a Docker registry mirror.
\ No newline at end of file
+In the **Engine Options** section of the node template, you can configure the Docker daemon. You may want to specify the Docker version or a Docker registry mirror.
diff --git a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/options/_index.md b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/options/_index.md
index 0e394858eb4..aef26507b12 100644
--- a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/options/_index.md
+++ b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/options/_index.md
@@ -19,6 +19,7 @@ This section is a cluster configuration reference, covering the following topics
- [Rancher UI Options](#rancher-ui-options)
- [Kubernetes version](#kubernetes-version)
- [Network provider](#network-provider)
+ - [Project network isolation](#project-network-isolation)
- [Kubernetes cloud providers](#kubernetes-cloud-providers)
- [Private registries](#private-registries)
- [Authorized cluster endpoint](#authorized-cluster-endpoint)
@@ -58,15 +59,28 @@ Out of the box, Rancher is compatible with the following network providers:
- [Calico](https://docs.projectcalico.org/v3.11/introduction/)
- [Weave](https://github.com/weaveworks/weave)
-**Notes on Canal:**
-
-If you use Canal, you also have the option of using **Project Network Isolation**, which will enable or disable communication between pods in different [projects]({{}}/rancher/v2.5/en/k8s-in-rancher/projects-and-namespaces/).
-
**Notes on Weave:**
When Weave is selected as network provider, Rancher will automatically enable encryption by generating a random password. If you want to specify the password manually, please see how to configure your cluster using a [Config File]({{}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/options/#cluster-config-file) and the [Weave Network Plug-in Options]({{}}/rke/latest/en/config-options/add-ons/network-plugins/#weave-network-plug-in-options).
+### Project Network Isolation
+
+Project network isolation is used to enable or disable communication between pods in different projects.
+
+{{% tabs %}}
+{{% tab "Rancher v2.5.8+" %}}
+
+To enable project network isolation as a cluster option, you will need to use any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin.
+
+{{% /tab %}}
+{{% tab "Rancher before v2.5.8" %}}
+
+To enable project network isolation as a cluster option, you will need to use Canal as the CNI.
+
+{{% /tab %}}
+{{% /tabs %}}
+
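+Under the hood, project network isolation relies on the CNI enforcing Kubernetes network policies. As a rough sketch (not the exact policy Rancher generates; the namespace and project label are hypothetical), a policy restricting ingress to namespaces in the same project might look like:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: project-isolation-sketch
+  namespace: my-namespace          # hypothetical namespace in the project
+spec:
+  podSelector: {}                  # apply to all pods in the namespace
+  ingress:
+  - from:
+    - namespaceSelector:
+        matchLabels:
+          field.cattle.io/projectId: p-example   # hypothetical project label
+```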
### Kubernetes Cloud Providers
You can configure a [Kubernetes cloud provider]({{}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/options/cloud-providers). If you want to use [volumes and storage]({{}}/rancher/v2.5/en/k8s-in-rancher/volumes-and-storage/) in Kubernetes, typically you must select the specific cloud provider in order to use it. For example, if you want to use Amazon EBS, you would need to select the `aws` cloud provider.
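As a sketch, selecting the `aws` cloud provider adds a stanza like the following to the cluster config file (field placement assumed from the RKE config format):

```yaml
rancher_kubernetes_engine_config:
  cloud_provider:
    name: aws
```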
@@ -278,6 +292,10 @@ Option to enable or disable [Cluster Monitoring]({{}}/rancher/v2.5/en/m
Option to enable or disable Project Network Isolation.
+Before Rancher v2.5.8, project network isolation is only available if you are using the Canal network plugin for RKE.
+
+In v2.5.8+, project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin.
+
### local_cluster_auth_endpoint
See [Authorized Cluster Endpoint](#authorized-cluster-endpoint).
diff --git a/content/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/_index.md b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/_index.md
index 1506c27e27b..f3ac8f7e527 100644
--- a/content/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/_index.md
+++ b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/_index.md
@@ -53,7 +53,7 @@ For migration of installs started with Helm 2, refer to the official [Helm 2 to
### For air gap installs: Populate private registry
--For [air gap installs only,]({{}}/rancher/v2.5/en/installation/other-installation-methods/air-gap) collect and populate images for the new Rancher server version. Follow the guide to [populate your private registry]({{}}/rancher/v2.5/en/installation/other-installation-methods/air-gap/populate-private-registry/) with the images for the Rancher version that you want to upgrade to.
+For [air gap installs only,]({{}}/rancher/v2.5/en/installation/other-installation-methods/air-gap) collect and populate images for the new Rancher server version. Follow the guide to [populate your private registry]({{}}/rancher/v2.5/en/installation/other-installation-methods/air-gap/populate-private-registry/) with the images for the Rancher version that you want to upgrade to.
### For upgrades from a Rancher server with a hidden local cluster
@@ -120,8 +120,8 @@ You'll use the backup as a restoration point if something goes wrong during upgr
This section describes how to upgrade normal (Internet-connected) or air gap installations of Rancher with Helm.
-{{% tabs %}}
-{{% tab "Kubernetes Upgrade" %}}
+> **Air Gap Instructions:** If you are installing Rancher in an air gapped environment, skip the rest of this page and render the Helm template by following the instructions on [this page.](./air-gap-upgrade)
+
Get the values, which were passed with `--set`, from the current Rancher Helm chart that is installed.
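With Helm 3, the previously passed values can be retrieved with a command along these lines (the release and namespace names assume the defaults used in this guide):

```plain
helm get values rancher -n cattle-system
```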
@@ -182,74 +182,6 @@ If you are currently running the cert-manager whose version is older than v0.11,
--set hostname=rancher.my.org
```
-{{% /tab %}}
-{{% tab "Kubernetes Air Gap Upgrade" %}}
-
-Render the Rancher template using the same chosen options that were used when installing Rancher. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.
-
-Based on the choice you made during installation, complete one of the procedures below.
-
-Placeholder | Description
-------------|-------------
-`` | The version number of the output tarball.
-`` | The DNS name you pointed at your load balancer.
-`` | The DNS name for your private registry.
-`` | Cert-manager version running on k8s cluster.
-
-
-### Option A: Default Self-signed Certificate
-
- ```plain
-helm template ./rancher-.tgz --output-dir . \
- --name rancher \
- --namespace cattle-system \
- --set hostname= \
- --set certmanager.version= \
- --set rancherImage=/rancher/rancher \
- --set systemDefaultRegistry= \ # Set a default private registry to be used in Rancher
- --set useBundledSystemChart=true # Use the packaged Rancher system charts
-```
-
-### Option B: Certificates from Files using Kubernetes Secrets
-
-```plain
-helm template ./rancher-.tgz --output-dir . \
---name rancher \
---namespace cattle-system \
---set hostname= \
---set rancherImage=/rancher/rancher \
---set ingress.tls.source=secret \
---set systemDefaultRegistry= \ # Set a default private registry to be used in Rancher
---set useBundledSystemChart=true # Use the packaged Rancher system charts
-```
-
-If you are using a Private CA signed cert, add `--set privateCA=true` following `--set ingress.tls.source=secret`:
-
-```plain
-helm template ./rancher-.tgz --output-dir . \
---name rancher \
---namespace cattle-system \
---set hostname= \
---set rancherImage=/rancher/rancher \
---set ingress.tls.source=secret \
---set privateCA=true \
---set systemDefaultRegistry= \ # Set a default private registry to be used in Rancher
---set useBundledSystemChart=true # Use the packaged Rancher system charts
-```
-
-### Apply the Rendered Templates
-
-Copy the rendered manifest directories to a system with access to the Rancher server cluster and apply the rendered templates.
-
-Use `kubectl` to apply the rendered manifests.
-
-```plain
-kubectl -n cattle-system apply -R -f ./rancher
-```
-
-{{% /tab %}}
-{{% /tabs %}}
-
# 4. Verify the Upgrade
Log into Rancher to confirm that the upgrade succeeded.
diff --git a/content/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/air-gap-upgrade/_index.md b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/air-gap-upgrade/_index.md
new file mode 100644
index 00000000000..ea0fe3ad61f
--- /dev/null
+++ b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/air-gap-upgrade/_index.md
@@ -0,0 +1,144 @@
+---
+title: Rendering the Helm Template in an Air Gapped Environment
+shortTitle: Air Gap Upgrade
+weight: 1
+---
+
+> These instructions assume you have already followed the instructions for a Kubernetes upgrade on [this page,]({{}}/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/) including the prerequisites, up until the **Upgrade Rancher** step (step 3).
+
+### Rancher Helm Template Options
+
+Render the Rancher template using the same options that were chosen when installing Rancher. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher-launched Kubernetes clusters or Rancher tools.
+
+Based on the choice you made during installation, complete one of the procedures below.
+
+Placeholder | Description
+------------|-------------
+`<VERSION>` | The version number of the output tarball.
+`<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer.
+`<REGISTRY.YOURDOMAIN.COM>` | The DNS name for your private registry.
+`<CERTMANAGER_VERSION>` | Cert-manager version running on k8s cluster.
+
+
+### Option A: Default Self-signed Certificate
+
+{{% tabs %}}
+{{% tab "Rancher v2.5.8+" %}}
+
+```plain
+helm template rancher ./rancher-<VERSION>.tgz --output-dir . \
+ --no-hooks \ # prevent files for Helm hooks from being generated
+ --namespace cattle-system \
+ --set hostname=<RANCHER.YOURDOMAIN.COM> \
+ --set certmanager.version=<CERTMANAGER_VERSION> \
+ --set rancherImage=<REGISTRY.YOURDOMAIN.COM>/rancher/rancher \
+ --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM> \ # Set a default private registry to be used in Rancher
+ --set useBundledSystemChart=true # Use the packaged Rancher system charts
+```
+
+{{% /tab %}}
+{{% tab "Rancher before v2.5.8" %}}
+
+```plain
+helm template ./rancher-<VERSION>.tgz --output-dir . \
+ --name rancher \
+ --namespace cattle-system \
+ --set hostname=<RANCHER.YOURDOMAIN.COM> \
+ --set certmanager.version=<CERTMANAGER_VERSION> \
+ --set rancherImage=<REGISTRY.YOURDOMAIN.COM>/rancher/rancher \
+ --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM> \ # Set a default private registry to be used in Rancher
+ --set useBundledSystemChart=true # Use the packaged Rancher system charts
+```
+
+{{% /tab %}}
+{{% /tabs %}}
+
+
+
+### Option B: Certificates from Files using Kubernetes Secrets
+
+
+{{% tabs %}}
+{{% tab "Rancher v2.5.8+" %}}
+
+
+```plain
+helm template rancher ./rancher-<VERSION>.tgz --output-dir . \
+--no-hooks \ # prevent files for Helm hooks from being generated
+--namespace cattle-system \
+--set hostname=<RANCHER.YOURDOMAIN.COM> \
+--set rancherImage=<REGISTRY.YOURDOMAIN.COM>/rancher/rancher \
+--set ingress.tls.source=secret \
+--set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM> \ # Set a default private registry to be used in Rancher
+--set useBundledSystemChart=true # Use the packaged Rancher system charts
+```
+
+If you are using a Private CA signed cert, add `--set privateCA=true` following `--set ingress.tls.source=secret`:
+
+```plain
+helm template rancher ./rancher-<VERSION>.tgz --output-dir . \
+--no-hooks \ # prevent files for Helm hooks from being generated
+--namespace cattle-system \
+--set hostname=<RANCHER.YOURDOMAIN.COM> \
+--set rancherImage=<REGISTRY.YOURDOMAIN.COM>/rancher/rancher \
+--set ingress.tls.source=secret \
+--set privateCA=true \
+--set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM> \ # Set a default private registry to be used in Rancher
+--set useBundledSystemChart=true # Use the packaged Rancher system charts
+```
+
+{{% /tab %}}
+{{% tab "Rancher before v2.5.8" %}}
+
+
+```plain
+helm template ./rancher-<VERSION>.tgz --output-dir . \
+--name rancher \
+--namespace cattle-system \
+--set hostname=<RANCHER.YOURDOMAIN.COM> \
+--set rancherImage=<REGISTRY.YOURDOMAIN.COM>/rancher/rancher \
+--set ingress.tls.source=secret \
+--set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM> \ # Set a default private registry to be used in Rancher
+--set useBundledSystemChart=true # Use the packaged Rancher system charts
+```
+
+If you are using a Private CA signed cert, add `--set privateCA=true` following `--set ingress.tls.source=secret`:
+
+```plain
+helm template ./rancher-<VERSION>.tgz --output-dir . \
+--name rancher \
+--namespace cattle-system \
+--set hostname=<RANCHER.YOURDOMAIN.COM> \
+--set rancherImage=<REGISTRY.YOURDOMAIN.COM>/rancher/rancher \
+--set ingress.tls.source=secret \
+--set privateCA=true \
+--set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM> \ # Set a default private registry to be used in Rancher
+--set useBundledSystemChart=true # Use the packaged Rancher system charts
+```
+{{% /tab %}}
+{{% /tabs %}}
+
+
+### Apply the Rendered Templates
+
+Copy the rendered manifest directories to a system with access to the Rancher server cluster and apply the rendered templates.
+
+Use `kubectl` to apply the rendered manifests.
+
+```plain
+kubectl -n cattle-system apply -R -f ./rancher
+```
+
+# Verify the Upgrade
+
+Log into Rancher to confirm that the upgrade succeeded.
+
+>**Having network issues following upgrade?**
+>
+> See [Restoring Cluster Networking]({{}}/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/namespace-migration).
+
+# Known Upgrade Issues
+
+A list of known issues for each Rancher version can be found in the release notes on [GitHub](https://github.com/rancher/rancher/releases) and on the [Rancher forums.](https://forums.rancher.com/c/announcements/12)
diff --git a/content/rancher/v2.5/en/installation/other-installation-methods/air-gap/install-rancher/_index.md b/content/rancher/v2.5/en/installation/other-installation-methods/air-gap/install-rancher/_index.md
index 7bb8227939f..de768a16285 100644
--- a/content/rancher/v2.5/en/installation/other-installation-methods/air-gap/install-rancher/_index.md
+++ b/content/rancher/v2.5/en/installation/other-installation-methods/air-gap/install-rancher/_index.md
@@ -10,18 +10,21 @@ aliases:
- /rancher/v2.5/en/installation/air-gap-high-availability/install-rancher/
---
-This section is about how to deploy Rancher for your air gapped environment. An air gapped environment could be where Rancher server will be installed offline, behind a firewall, or behind a proxy. There are _tabs_ for either a high availability (recommended) or a Docker installation.
+This section describes how to deploy Rancher in an air gapped environment using a high-availability Kubernetes installation. An air gapped environment is one where the Rancher server is installed offline, behind a firewall, or behind a proxy.
### Privileged Access for Rancher v2.5+
When the Rancher server is deployed in the Docker container, a local Kubernetes cluster is installed within the container for Rancher to use. Because many features of Rancher run as deployments, and privileged mode is required to run containers within containers, you will need to install Rancher with the `--privileged` option.
-{{% tabs %}}
-{{% tab "Kubernetes Install (Recommended)" %}}
+# Docker Instructions
+
+If you want to continue the air gapped installation using Docker commands, skip the rest of this page and follow the instructions on [this page.](./docker-install-commands)
+
+# Kubernetes Instructions
Rancher recommends installing Rancher on a Kubernetes cluster. A highly available Kubernetes install consists of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails.
-This section describes installing Rancher in five parts:
+This section describes installing Rancher in the following parts:
- [1. Add the Helm Chart Repository](#1-add-the-helm-chart-repository)
- [2. Choose your SSL Configuration](#2-choose-your-ssl-configuration)
@@ -63,7 +66,7 @@ When Rancher is installed on an air gapped Kubernetes cluster, there are two rec
| Rancher Generated Self-Signed Certificates | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self signed). This is the **default** and does not need to be added when rendering the Helm template. | yes |
| Certificates from Files | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s). This option must be passed when rendering the Rancher Helm template. | no |
-# 3. Render the Rancher Helm Template
+# Helm Chart Options for Air Gap Installations
When setting up the Rancher Helm template, there are several options in the Helm chart that are designed specifically for air gap installations.
@@ -73,73 +76,108 @@ When setting up the Rancher Helm template, there are several options in the Helm
| `systemDefaultRegistry` | `` | Configure Rancher server to always pull from your private registry when provisioning clusters. |
| `useBundledSystemChart` | `true` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located in GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. |
-Based on the choice your made in [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration), complete one of the procedures below.
+# 3. Render the Rancher Helm Template
-### Option A: Default Self-Signed Certificate
+Based on the choice you made in [2. Choose your SSL Configuration](#2-choose-your-ssl-configuration), complete one of the procedures below.
+
+# Option A: Default Self-Signed Certificate
-{{% accordion id="k8s-1" label="Click to expand" %}}
By default, Rancher generates a CA and uses cert-manager to issue the certificate for access to the Rancher server interface.
> **Note:**
> Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.11.0, please see our [upgrade cert-manager documentation]({{}}/rancher/v2.5/en/installation/options/upgrading-cert-manager/).
-1. From a system connected to the internet, add the cert-manager repo to Helm.
- ```plain
- helm repo add jetstack https://charts.jetstack.io
- helm repo update
- ```
+### 1. Add the cert-manager repo
-1. Fetch the latest cert-manager chart available from the [Helm chart repository](https://hub.helm.sh/charts/jetstack/cert-manager).
+From a system connected to the internet, add the cert-manager repo to Helm:
- ```plain
- helm fetch jetstack/cert-manager --version v1.0.4
- ```
+```plain
+helm repo add jetstack https://charts.jetstack.io
+helm repo update
+```
-1. Render the cert manager template with the options you would like to use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files.
- ```plain
- helm template cert-manager ./cert-manager-v1.0.4.tgz --output-dir . \
- --namespace cert-manager \
- --set image.repository=/quay.io/jetstack/cert-manager-controller \
- --set webhook.image.repository=/quay.io/jetstack/cert-manager-webhook \
- --set cainjector.image.repository=/quay.io/jetstack/cert-manager-cainjector
- ```
+### 2. Fetch the cert-manager chart
-1. Download the required CRD file for cert-manager
+Fetch the latest cert-manager chart available from the [Helm chart repository](https://hub.helm.sh/charts/jetstack/cert-manager).
+
+```plain
+helm fetch jetstack/cert-manager --version v1.0.4
+```
+
+### 3. Render the cert-manager template
+
+Render the cert-manager template with the options you would like to use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files.
+
+```plain
+helm template cert-manager ./cert-manager-v1.0.4.tgz --output-dir . \
+ --namespace cert-manager \
+ --set image.repository=/quay.io/jetstack/cert-manager-controller \
+ --set webhook.image.repository=/quay.io/jetstack/cert-manager-webhook \
+ --set cainjector.image.repository=/quay.io/jetstack/cert-manager-cainjector
+```
+
+### 4. Download the cert-manager CRD
+
+Download the required CRD file for cert-manager:
```plain
curl -L -o cert-manager/cert-manager-crd.yaml https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.crds.yaml
```
-1. Render the Rancher template, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.
+### 5. Render the Rancher template
+
+Render the Rancher template, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.
- Placeholder | Description
- ------------|-------------
- `` | The version number of the output tarball.
- `` | The DNS name you pointed at your load balancer.
- `` | The DNS name for your private registry.
- `` | Cert-manager version running on k8s cluster.
+Placeholder | Description
+------------|-------------
+`` | The version number of the output tarball.
+`` | The DNS name you pointed at your load balancer.
+`` | The DNS name for your private registry.
+`` | Cert-manager version running on k8s cluster.
- ```plain
- helm template rancher ./rancher-.tgz --output-dir . \
- --namespace cattle-system \
- --set hostname= \
- --set certmanager.version= \
- --set rancherImage=/rancher/rancher \
- --set systemDefaultRegistry= \ # Set a default private registry to be used in Rancher
- --set useBundledSystemChart=true # Use the packaged Rancher system charts
+{{% tabs %}}
+{{% tab "Rancher v2.5.8" %}}
+The `--no-hooks` option prevents files for Helm hooks from being generated.
+
+```plain
+helm template rancher ./rancher-.tgz --output-dir . \
+  --no-hooks \
+  --namespace cattle-system \
+  --set hostname= \
+  --set certmanager.version= \
+  --set rancherImage=/rancher/rancher \
+  --set systemDefaultRegistry= \
+  --set useBundledSystemChart=true
```
-**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, example: `--set rancherImageTag=v2.3.6`
+**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, for example: `--set rancherImageTag=v2.5.8`
+{{% /tab %}}
+{{% tab "Rancher before v2.5.8" %}}
-{{% /accordion %}}
+```plain
+helm template rancher ./rancher-.tgz --output-dir . \
+ --namespace cattle-system \
+ --set hostname= \
+ --set certmanager.version= \
+ --set rancherImage=/rancher/rancher \
+ --set systemDefaultRegistry= \
+ --set useBundledSystemChart=true
+```
-### Option B: Certificates From Files using Kubernetes Secrets
+**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, for example: `--set rancherImageTag=v2.5.6`
+{{% /tab %}}
+{{% /tabs %}}
-{{% accordion id="k8s-2" label="Click to expand" %}}
+
+
+# Option B: Certificates From Files using Kubernetes Secrets
+
+
+### 1. Create secrets
Create Kubernetes secrets from your own certificates for Rancher to use. The common name for the cert will need to match the `hostname` option in the command below, or the ingress controller will fail to provision the site for Rancher.
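+
+A minimal sketch, assuming the secret names the chart expects (`tls-rancher-ingress`, and `tls-ca` for a private CA) and hypothetical file names `tls.crt`, `tls.key`, and `cacerts.pem`; see [Adding TLS Secrets]({{}}/rancher/v2.5/en/installation/resources/encryption/tls-secrets/) for the authoritative steps:
+
+```plain
+kubectl -n cattle-system create secret tls tls-rancher-ingress \
+  --cert=tls.crt \
+  --key=tls.key
+
+# Only needed for a private CA (used with --set privateCA=true):
+kubectl -n cattle-system create secret generic tls-ca \
+  --from-file=cacerts.pem=./cacerts.pem
+```
+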
+### 2. Render the Rancher template
+
Render the Rancher template, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.
| Placeholder | Description |
@@ -148,6 +186,41 @@ Render the Rancher template, declaring your chosen options. Use the reference ta
| `` | The DNS name you pointed at your load balancer. |
| `` | The DNS name for your private registry. |
+{{% tabs %}}
+{{% tab "Rancher v2.5.8+" %}}
+
+The `--no-hooks` option prevents files for Helm hooks from being generated.
+
+```plain
+helm template rancher ./rancher-.tgz --output-dir . \
+  --no-hooks \
+  --namespace cattle-system \
+  --set hostname= \
+  --set rancherImage=/rancher/rancher \
+  --set ingress.tls.source=secret \
+  --set systemDefaultRegistry= \
+  --set useBundledSystemChart=true
+```
+
+If you are using a private CA-signed certificate, add `--set privateCA=true` after `--set ingress.tls.source=secret`:
+
+```plain
+helm template rancher ./rancher-.tgz --output-dir . \
+  --no-hooks \
+  --namespace cattle-system \
+  --set hostname= \
+  --set rancherImage=/rancher/rancher \
+  --set ingress.tls.source=secret \
+  --set privateCA=true \
+  --set systemDefaultRegistry= \
+  --set useBundledSystemChart=true
+```
+
+**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, for example: `--set rancherImageTag=v2.5.8`
+
+Then refer to [Adding TLS Secrets]({{}}/rancher/v2.5/en/installation/resources/encryption/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them.
+{{% /tab %}}
+{{% tab "Rancher before v2.5.8" %}}
+
+
```plain
helm template rancher ./rancher-.tgz --output-dir . \
--namespace cattle-system \
@@ -174,8 +247,10 @@ If you are using a Private CA signed cert, add `--set privateCA=true` following
**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, example: `--set rancherImageTag=v2.3.6`
Then refer to [Adding TLS Secrets]({{}}/rancher/v2.5/en/installation/resources/encryption/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them.
+{{% /tab %}}
+{{% /tabs %}}
+
-{{% /accordion %}}
# 4. Install Rancher
@@ -228,135 +303,3 @@ These resources could be helpful when installing Rancher:
- [Rancher Helm chart options]({{}}/rancher/v2.5/en/installation/resources/chart-options/)
- [Adding TLS secrets]({{}}/rancher/v2.5/en/installation/resources/encryption/tls-secrets/)
- [Troubleshooting Rancher Kubernetes Installations]({{}}/rancher/v2.5/en/installation/options/troubleshooting/)
-
-{{% /tab %}}
-{{% tab "Docker Install" %}}
-
-The Docker installation is for Rancher users who want to test out Rancher.
-
-Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server.
-
-For Rancher v2.5+, the backup application can be used to migrate the Rancher server from a Docker install to a Kubernetes install using [these steps.]({{}}/rancher/v2.5/en/backups/migrating-rancher)
-
-For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, like when you login or interact with a cluster.
-
-| Environment Variable Key | Environment Variable Value | Description |
-| -------------------------------- | -------------------------------- | ---- |
-| `CATTLE_SYSTEM_DEFAULT_REGISTRY` | `` | Configure Rancher server to always pull from your private registry when provisioning clusters. |
-| `CATTLE_SYSTEM_CATALOG` | `bundled` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located in GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. |
-
-> **Do you want to...**
->
-> - Configure custom CA root certificate to access your services? See [Custom CA root certificate]({{}}/rancher/v2.5/en/installation/options/custom-ca-root-certificate/).
-> - Record all transactions with the Rancher API? See [API Auditing]({{}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker/advanced/#api-audit-log).
-
-Choose from the following options:
-
-### Option A: Default Self-Signed Certificate
-
-{{% accordion id="option-a" label="Click to expand" %}}
-
-If you are installing Rancher in a development or testing environment where identity verification isn't a concern, install Rancher using the self-signed certificate that it generates. This installation option omits the hassle of generating a certificate yourself.
-
-Log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder.
-
-| Placeholder | Description |
-| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
-| `` | Your private registry URL and port. |
-| `` | The release tag of the [Rancher version]({{}}/rancher/v2.5/en/installation/resources/chart-options/) that you want to install. |
-
-As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5)
-
-```
-docker run -d --restart=unless-stopped \
- -p 80:80 -p 443:443 \
- -e CATTLE_SYSTEM_DEFAULT_REGISTRY= \ # Set a default private registry to be used in Rancher
- -e CATTLE_SYSTEM_CATALOG=bundled \ # Use the packaged Rancher system charts
- --privileged \
- /rancher/rancher:
-```
-
-{{% /accordion %}}
-
-### Option B: Bring Your Own Certificate: Self-Signed
-
-{{% accordion id="option-b" label="Click to expand" %}}
-
-In development or testing environments where your team will access your Rancher server, create a self-signed certificate for use with your install so that your team can verify they're connecting to your instance of Rancher.
-
-> **Prerequisites:**
-> From a computer with an internet connection, create a self-signed certificate using [OpenSSL](https://www.openssl.org/) or another method of your choice.
->
-> - The certificate files must be in PEM format.
-> - In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Certificate Troubleshooting.]({{}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker/troubleshooting)
-
-After creating your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Use the `-v` flag and provide the path to your certificates to mount them in your container.
-
-| Placeholder | Description |
-| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
-| `` | The path to the directory containing your certificate files. |
-| `` | The path to your full certificate chain. |
-| `` | The path to the private key for your certificate. |
-| `` | The path to the certificate authority's certificate. |
-| `` | Your private registry URL and port. |
-| `` | The release tag of the [Rancher version]({{}}/rancher/v2.5/en/installation/resources/chart-options/) that you want to install. |
-
-As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5)
-
-```
-docker run -d --restart=unless-stopped \
- -p 80:80 -p 443:443 \
- -v //:/etc/rancher/ssl/cert.pem \
- -v //:/etc/rancher/ssl/key.pem \
- -v //:/etc/rancher/ssl/cacerts.pem \
- -e CATTLE_SYSTEM_DEFAULT_REGISTRY= \ # Set a default private registry to be used in Rancher
- -e CATTLE_SYSTEM_CATALOG=bundled \ # Use the packaged Rancher system charts
- --privileged \
- /rancher/rancher:
-```
-
-{{% /accordion %}}
-
-### Option C: Bring Your Own Certificate: Signed by Recognized CA
-
-{{% accordion id="option-c" label="Click to expand" %}}
-
-In development or testing environments where you're exposing an app publicly, use a certificate signed by a recognized CA so that your user base doesn't encounter security warnings.
-
-> **Prerequisite:** The certificate files must be in PEM format.
-
-After obtaining your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Because your certificate is signed by a recognized CA, mounting an additional CA certificate file is unnecessary.
-
-| Placeholder | Description |
-| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
-| `` | The path to the directory containing your certificate files. |
-| `` | The path to your full certificate chain. |
-| `` | The path to the private key for your certificate. |
-| `` | Your private registry URL and port. |
-| `` | The release tag of the [Rancher version]({{}}/rancher/v2.5/en/installation/resources/chart-options/) that you want to install. |
-
-> **Note:** Use the `--no-cacerts` as argument to the container to disable the default CA certificate generated by Rancher.
-
-As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5)
-
-```
-docker run -d --restart=unless-stopped \
- -p 80:80 -p 443:443 \
- --no-cacerts \
- -v //:/etc/rancher/ssl/cert.pem \
- -v //:/etc/rancher/ssl/key.pem \
- -e CATTLE_SYSTEM_DEFAULT_REGISTRY= \ # Set a default private registry to be used in Rancher
- -e CATTLE_SYSTEM_CATALOG=bundled \ # Use the packaged Rancher system charts
- --privileged
- /rancher/rancher:
-```
-
-{{% /accordion %}}
-
-
-
-> **Note:** If you don't intend to send telemetry data, opt out [telemetry]({{}}/rancher/v2.5/en/faq/telemetry/) during the initial login.
-
-
-{{% /tab %}}
-{{% /tabs %}}
diff --git a/content/rancher/v2.5/en/installation/other-installation-methods/air-gap/install-rancher/docker-install-commands/_index.md b/content/rancher/v2.5/en/installation/other-installation-methods/air-gap/install-rancher/docker-install-commands/_index.md
new file mode 100644
index 00000000000..81a30b69e84
--- /dev/null
+++ b/content/rancher/v2.5/en/installation/other-installation-methods/air-gap/install-rancher/docker-install-commands/_index.md
@@ -0,0 +1,130 @@
+---
+title: Docker Install Commands
+weight: 1
+---
+
+The Docker installation is for Rancher users who want to test out Rancher.
+
+Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server.
+
+For Rancher v2.5+, the backup application can be used to migrate the Rancher server from a Docker install to a Kubernetes install using [these steps.]({{}}/rancher/v2.5/en/backups/migrating-rancher)
+
+For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.
+
+| Environment Variable Key | Environment Variable Value | Description |
+| -------------------------------- | -------------------------------- | ---- |
+| `CATTLE_SYSTEM_DEFAULT_REGISTRY` | `` | Configure Rancher server to always pull from your private registry when provisioning clusters. |
+| `CATTLE_SYSTEM_CATALOG` | `bundled` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located in GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. |
+
+> **Do you want to...**
+>
+> - Configure custom CA root certificate to access your services? See [Custom CA root certificate]({{}}/rancher/v2.5/en/installation/options/custom-ca-root-certificate/).
+> - Record all transactions with the Rancher API? See [API Auditing]({{}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker/advanced/#api-audit-log).
+
+Choose from the following options:
+
+### Option A: Default Self-Signed Certificate
+
+{{% accordion id="option-a" label="Click to expand" %}}
+
+If you are installing Rancher in a development or testing environment where identity verification isn't a concern, install Rancher using the self-signed certificate that it generates. This installation option avoids the hassle of generating a certificate yourself.
+
+Log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder.
+
+| Placeholder | Description |
+| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
+| `` | Your private registry URL and port. |
+| `` | The release tag of the [Rancher version]({{}}/rancher/v2.5/en/installation/resources/chart-options/) that you want to install. |
+
+As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5)
+
+```
+docker run -d --restart=unless-stopped \
+ -p 80:80 -p 443:443 \
+ -e CATTLE_SYSTEM_DEFAULT_REGISTRY= \
+ -e CATTLE_SYSTEM_CATALOG=bundled \
+ --privileged \
+ /rancher/rancher:
+```
+
+{{% /accordion %}}
+
+### Option B: Bring Your Own Certificate: Self-Signed
+
+{{% accordion id="option-b" label="Click to expand" %}}
+
+In development or testing environments where your team will access your Rancher server, create a self-signed certificate for use with your install so that your team can verify they're connecting to your instance of Rancher.
+
+> **Prerequisites:**
+> From a computer with an internet connection, create a self-signed certificate using [OpenSSL](https://www.openssl.org/) or another method of your choice.
+>
+> - The certificate files must be in PEM format.
+> - In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Certificate Troubleshooting.]({{}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker/troubleshooting)
+
+After creating your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Use the `-v` flag and provide the path to your certificates to mount them in your container.
+
+| Placeholder | Description |
+| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
+| `` | The path to the directory containing your certificate files. |
+| `` | The path to your full certificate chain. |
+| `` | The path to the private key for your certificate. |
+| `` | The path to the certificate authority's certificate. |
+| `` | Your private registry URL and port. |
+| `` | The release tag of the [Rancher version]({{}}/rancher/v2.5/en/installation/resources/chart-options/) that you want to install. |
+
+As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5)
+
+```
+docker run -d --restart=unless-stopped \
+ -p 80:80 -p 443:443 \
+ -v //:/etc/rancher/ssl/cert.pem \
+ -v //:/etc/rancher/ssl/key.pem \
+ -v //:/etc/rancher/ssl/cacerts.pem \
+ -e CATTLE_SYSTEM_DEFAULT_REGISTRY= \
+ -e CATTLE_SYSTEM_CATALOG=bundled \
+ --privileged \
+ /rancher/rancher:
+```
+
+{{% /accordion %}}
+
+### Option C: Bring Your Own Certificate: Signed by Recognized CA
+
+{{% accordion id="option-c" label="Click to expand" %}}
+
+In development or testing environments where you're exposing an app publicly, use a certificate signed by a recognized CA so that your user base doesn't encounter security warnings.
+
+> **Prerequisite:** The certificate files must be in PEM format.
+
+After obtaining your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Because your certificate is signed by a recognized CA, mounting an additional CA certificate file is unnecessary.
+
+| Placeholder | Description |
+| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
+| `` | The path to the directory containing your certificate files. |
+| `` | The path to your full certificate chain. |
+| `` | The path to the private key for your certificate. |
+| `` | Your private registry URL and port. |
+| `` | The release tag of the [Rancher version]({{}}/rancher/v2.5/en/installation/resources/chart-options/) that you want to install. |
+
+> **Note:** Pass `--no-cacerts` as an argument to the container (after the image name) to disable the default CA certificate generated by Rancher.
+
+As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5)
+
+```
+docker run -d --restart=unless-stopped \
+  -p 80:80 -p 443:443 \
+  -v //:/etc/rancher/ssl/cert.pem \
+  -v //:/etc/rancher/ssl/key.pem \
+  -e CATTLE_SYSTEM_DEFAULT_REGISTRY= \
+  -e CATTLE_SYSTEM_CATALOG=bundled \
+  --privileged \
+  /rancher/rancher: \
+  --no-cacerts
+```
+
+{{% /accordion %}}
+
+
+
+> **Note:** If you don't intend to send telemetry data, opt out of [telemetry]({{}}/rancher/v2.5/en/faq/telemetry/) during the initial login.
+
diff --git a/content/rancher/v2.5/en/installation/requirements/_index.md b/content/rancher/v2.5/en/installation/requirements/_index.md
index 2a418430059..b91223043ca 100644
--- a/content/rancher/v2.5/en/installation/requirements/_index.md
+++ b/content/rancher/v2.5/en/installation/requirements/_index.md
@@ -16,7 +16,9 @@ Make sure the node(s) for the Rancher server fulfill the following requirements:
- [RKE and Hosted Kubernetes](#rke-and-hosted-kubernetes)
- [K3s Kubernetes](#k3s-kubernetes)
- [RancherD](#rancherd)
+ - [RKE2](#rke2-kubernetes)
- [CPU and Memory for Rancher before v2.4.0](#cpu-and-memory-for-rancher-before-v2-4-0)
+- [Ingress](#ingress)
- [Disks](#disks)
- [Networking Requirements](#networking-requirements)
- [Node IP Addresses](#node-ip-addresses)
@@ -30,7 +32,7 @@ The Rancher UI works best in Firefox or Chrome.
Rancher should work with any modern Linux distribution.
-Docker is required for nodes that will run RKE Kubernetes clusters. It is not required for RancherD installs.
+Docker is required for nodes that will run RKE Kubernetes clusters. It is not required for RancherD or RKE2 Kubernetes installs.
Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)
@@ -52,7 +54,7 @@ For the container runtime, RKE should work with any modern Docker version.
For the container runtime, K3s should work with any modern version of Docker or containerd.
-Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/) To specify the K3s version, use the INSTALL_K3S_VERSION environment variable when running the K3s installation script.
+Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/) To specify the K3s version, use the INSTALL_K3S_VERSION environment variable when running the K3s installation script.
If you are installing Rancher on a K3s cluster with **Raspbian Buster**, follow [these steps]({{}}/k3s/latest/en/advanced/#enabling-legacy-iptables-on-raspbian-buster) to switch to legacy iptables.
@@ -66,7 +68,17 @@ At this time, only Linux OSes that leverage systemd are supported.
To install RancherD on SELinux Enforcing CentOS 8 or RHEL 8 nodes, some [additional steps](#rancherd-on-selinux-enforcing-centos-8-or-rhel-8-nodes) are required.
-Docker is not required for RancherD installs.
+Docker is not required for RancherD installs.
+
+### RKE2 Specific Requirements
+
+_The RKE2 install is available as of v2.5.6._
+
+For details on which OS versions were tested with RKE2, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)
+
+Docker is not required for RKE2 installs.
+
+The Ingress should be deployed as a DaemonSet to ensure your load balancer can successfully route traffic to all nodes. Currently, RKE2 deploys nginx-ingress as a Deployment by default, so you will need to redeploy it as a DaemonSet by following [these steps.]({{}}/rancher/v2.5/en/installation/resources/k8s-tutorials/ha-rke2/#5-configure-nginx-to-be-a-daemonset)
### Installing Docker
@@ -87,6 +99,8 @@ These CPU and memory requirements apply to each host in the Kubernetes cluster w
These requirements apply to RKE Kubernetes clusters, as well as to hosted Kubernetes clusters such as EKS.
+
+
| Deployment Size | Clusters | Nodes | vCPUs | RAM |
| --------------- | ---------- | ------------ | -------| ------- |
| Small | Up to 150 | Up to 1500 | 2 | 8 GB |
@@ -122,15 +136,41 @@ These CPU and memory requirements apply to each instance with RancherD installed
| Small | Up to 5 | Up to 50 | 2 | 5 GB |
| Medium | Up to 15 | Up to 200 | 3 | 9 GB |
+### RKE2 Kubernetes
+
+These CPU and memory requirements apply to each instance with RKE2 installed. Minimum recommendations are outlined here.
+
+| Deployment Size | Clusters | Nodes | vCPUs | RAM |
+| --------------- | -------- | --------- | ----- | ---- |
+| Small | Up to 5 | Up to 50 | 2 | 5 GB |
+| Medium | Up to 15 | Up to 200 | 3 | 9 GB |
+
### Docker
-These CPU and memory requirements apply to a host with a [single-node]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) installation of Rancher.
+These CPU and memory requirements apply to a host with a [single-node]({{}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker) installation of Rancher.
| Deployment Size | Clusters | Nodes | vCPUs | RAM |
| --------------- | -------- | --------- | ----- | ---- |
| Small | Up to 5 | Up to 50 | 1 | 4 GB |
| Medium | Up to 15 | Up to 200 | 2 | 8 GB |
+# Ingress
+
+Each node in the Kubernetes cluster that Rancher is installed on should run an Ingress.
+
+The Ingress should be deployed as a DaemonSet to ensure your load balancer can successfully route traffic to all nodes.
+
+For RKE, K3s, and RancherD installations, you don't have to install the Ingress manually because it is installed by default.
+
+For hosted Kubernetes clusters (EKS, GKE, AKS) and RKE2 Kubernetes installations, you will need to set up the Ingress yourself.
+
+### Ingress for RKE2
+
+Currently, RKE2 deploys nginx-ingress as a deployment by default, so you will need to deploy it as a DaemonSet by following [these steps.]({{}}/rancher/v2.5/en/installation/resources/k8s-tutorials/ha-rke2/#5-configure-nginx-to-be-a-daemonset)
+
+### Ingress for EKS
+For an example of how to deploy an nginx-ingress-controller with a LoadBalancer service, refer to [this section.]({{}}/rancher/v2.5/en/installation/install-rancher-on-k8s/amazon-eks/#5-install-an-ingress)
+
# Disks
Rancher's performance depends on the performance of etcd in the cluster. To ensure optimal speed, we recommend always using SSD disks to back your Rancher management Kubernetes cluster. On cloud providers, you will also want to use the minimum size that allows the maximum IOPS. In larger clusters, consider using dedicated storage devices for etcd data and WAL directories.
@@ -154,4 +194,4 @@ Before installing Rancher on SELinux Enforcing CentOS 8 nodes or RHEL 8 nodes, y
```
sudo yum install iptables
sudo yum install container-selinux
-```
\ No newline at end of file
+```
diff --git a/content/rancher/v2.5/en/installation/resources/feature-flags/_index.md b/content/rancher/v2.5/en/installation/resources/feature-flags/_index.md
index 2294108e749..4a4e422ddba 100644
--- a/content/rancher/v2.5/en/installation/resources/feature-flags/_index.md
+++ b/content/rancher/v2.5/en/installation/resources/feature-flags/_index.md
@@ -76,6 +76,24 @@ Here is an example of a command for passing in the feature flag names when rende
The Helm 3 command is as follows:
+{{% tabs %}}
+{{% tab "Rancher v2.5.8" %}}
+
+```
+# --no-hooks prevents files for Helm hooks from being generated.
+# systemDefaultRegistry sets a default private registry to be used in Rancher.
+# useBundledSystemChart=true uses the packaged Rancher system charts.
+helm template rancher ./rancher-.tgz --output-dir . \
+ --no-hooks \
+ --namespace cattle-system \
+ --set hostname= \
+ --set rancherImage=/rancher/rancher \
+ --set ingress.tls.source=secret \
+ --set systemDefaultRegistry= \
+ --set useBundledSystemChart=true \
+ --set 'extraEnv[0].name=CATTLE_FEATURES' \
+ --set 'extraEnv[0].value==true,=true'
+```
+{{% /tab %}}
+{{% tab "Rancher before v2.5.8" %}}
+
```
helm template rancher ./rancher-.tgz --output-dir . \
--namespace cattle-system \
@@ -87,6 +105,8 @@ helm template rancher ./rancher-.tgz --output-dir . \
--set 'extraEnv[0].name=CATTLE_FEATURES'
--set 'extraEnv[0].value==true,=true'
```
+{{% /tab %}}
+{{% /tabs %}}
The Helm 2 command is as follows:
diff --git a/content/rancher/v2.5/en/installation/resources/k8s-tutorials/ha-rke2/_index.md b/content/rancher/v2.5/en/installation/resources/k8s-tutorials/ha-rke2/_index.md
index 2e9115d294a..55ed67ad198 100644
--- a/content/rancher/v2.5/en/installation/resources/k8s-tutorials/ha-rke2/_index.md
+++ b/content/rancher/v2.5/en/installation/resources/k8s-tutorials/ha-rke2/_index.md
@@ -11,7 +11,7 @@ This section describes how to install a Kubernetes cluster according to the [bes
# Prerequisites
-These instructions assume you have set up three nodes, a load balancer, and a DNS record as described [this section.]({{}}/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-rke2-ha)
+These instructions assume you have set up three nodes, a load balancer, and a DNS record, as described in [this section.]({{}}/rancher/v2.5/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-rke2-ha)
Note that in order for RKE2 to work correctly with the load balancer, you need to set up two listeners: one for the supervisor on port 9345, and one for the Kubernetes API on port 6443.
@@ -163,7 +163,7 @@ Currently, RKE2 deploys nginx-ingress as a deployment, and that can impact the R
To rectify that, place the following file in `/var/lib/rancher/rke2/server/manifests` on any of the server nodes:
-```
+```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
@@ -175,7 +175,4 @@ spec:
kind: DaemonSet
daemonset:
useHostPort: true
- image:
- repository: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller
- tag: "v0.34.1"
```
diff --git a/content/rancher/v2.5/en/istio/configuration-reference/_index.md b/content/rancher/v2.5/en/istio/configuration-reference/_index.md
index 50e55650a0e..79164a8f965 100644
--- a/content/rancher/v2.5/en/istio/configuration-reference/_index.md
+++ b/content/rancher/v2.5/en/istio/configuration-reference/_index.md
@@ -11,7 +11,7 @@ aliases:
- [Selectors and Scrape Configs](#selectors-and-scrape-configs)
- [Enable Istio with Pod Security Policies](#enable-istio-with-pod-security-policies)
- [Additional Steps for Installing Istio on an RKE2 Cluster](#additional-steps-for-installing-istio-on-an-rke2-cluster)
-- [Additional Steps for Canal Network Plug-in with Project Network Isolation](#additional-steps-for-canal-network-plug-in-with-project-network-isolation)
+- [Additional Steps for Project Network Isolation](#additional-steps-for-project-network-isolation)
### Egress Support
@@ -45,6 +45,6 @@ Refer to [this section.](./enable-istio-with-psp)
Refer to [this section.](./rke2)
-### Additional Steps for Canal Network Plug-in with Project Network Isolation
+### Additional Steps for Project Network Isolation
Refer to [this section.](./canal-and-project-network)
\ No newline at end of file
diff --git a/content/rancher/v2.5/en/istio/configuration-reference/canal-and-project-network/_index.md b/content/rancher/v2.5/en/istio/configuration-reference/canal-and-project-network/_index.md
index 886d366e69b..77f82b11b3c 100644
--- a/content/rancher/v2.5/en/istio/configuration-reference/canal-and-project-network/_index.md
+++ b/content/rancher/v2.5/en/istio/configuration-reference/canal-and-project-network/_index.md
@@ -1,5 +1,5 @@
---
-title: Additional Steps for Canal Network Plug-in with Project Network Isolation
+title: Additional Steps for Project Network Isolation
weight: 4
aliases:
- /rancher/v2.5/en/istio/v2.5/configuration-reference/canal-and-project-network
@@ -7,8 +7,8 @@ aliases:
In clusters where:
-- The Canal network plug-in is in use.
-- The Project Network Isolation option is enabled.
+- You are using the Canal network plug-in with Rancher before v2.5.8, or you are using Rancher v2.5.8+ with any RKE network plug-in that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plug-in
+- The Project Network Isolation option is enabled
- You install the Istio Ingress module
The Istio Ingress Gateway pod won't be able to redirect ingress traffic to the workloads by default. This is because all the namespaces will be inaccessible from the namespace where Istio is installed. You have two options.
diff --git a/content/rancher/v2.5/en/istio/setup/enable-istio-in-cluster/_index.md b/content/rancher/v2.5/en/istio/setup/enable-istio-in-cluster/_index.md
index 3bedfae64cf..1f7f9546c31 100644
--- a/content/rancher/v2.5/en/istio/setup/enable-istio-in-cluster/_index.md
+++ b/content/rancher/v2.5/en/istio/setup/enable-istio-in-cluster/_index.md
@@ -11,7 +11,7 @@ aliases:
>- Only a user with the `cluster-admin` [Kubernetes default role](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) assigned can configure and install Istio in a Kubernetes cluster.
>- If you have pod security policies, you will need to install Istio with the CNI enabled. For details, see [this section.]({{}}/rancher/v2.5/en/istio/v2.5/configuration-reference/enable-istio-with-psp)
>- To install Istio on an RKE2 cluster, additional steps are required. For details, see [this section.]({{}}/rancher/v2.5/en/istio/v2.5/configuration-reference/rke2/)
->- To install Istio in a cluster where the Canal network plug-in is in use and the Project Network isolation option is enabled, additional steps are required. For details, see [this section.]({{}}/rancher/v2.5/en/istio/v2.5/configuration-reference/canal-and-project-network)
+>- To install Istio in a cluster where project network isolation is enabled, additional steps are required. For details, see [this section.]({{}}/rancher/v2.5/en/istio/v2.5/configuration-reference/canal-and-project-network)
1. From the **Cluster Explorer**, navigate to available **Charts** in **Apps & Marketplace**
1. Select the Istio chart from the rancher provided charts
diff --git a/content/rancher/v2.5/en/logging/_index.md b/content/rancher/v2.5/en/logging/_index.md
index 0ae41042072..36acadc4b50 100644
--- a/content/rancher/v2.5/en/logging/_index.md
+++ b/content/rancher/v2.5/en/logging/_index.md
@@ -13,11 +13,14 @@ aliases:
- [Changes in Rancher v2.5](#changes-in-rancher-v2-5)
- [Enabling Logging for Rancher Managed Clusters](#enabling-logging-for-rancher-managed-clusters)
- [Uninstall Logging](#uninstall-logging)
+- [Windows Support](#windows-support)
- [Role-based Access Control](#role-based-access-control)
- [Configuring the Logging Application](#configuring-the-logging-application)
+- [Examples](#examples)
- [Working with a Custom Docker Root Directory](#working-with-a-custom-docker-root-directory)
- [Working with Taints and Tolerations](#working-with-taints-and-tolerations)
- [Logging v2 with SELinux](#logging-v2-with-selinux)
+- [Additional Logging Sources](#additional-logging-sources)
- [Troubleshooting](#troubleshooting)
# Changes in Rancher v2.5
@@ -58,6 +61,30 @@ You can enable the logging for a Rancher managed cluster by going to the Apps pa
**Result** `rancher-logging` is uninstalled.
+# Windows Support
+
+{{% tabs %}}
+{{% tab "Rancher v2.5.8" %}}
+As of Rancher v2.5.8, logging support for Windows clusters has been added and logs can be collected from Windows nodes.
+
+### Enabling and Disabling Windows Node Logging
+
+You can enable or disable Windows node logging by setting `global.cattle.windows.enabled` to either `true` or `false` in the `values.yaml`.
+By default, Windows node logging will be enabled if the Cluster Explorer UI is used to install the logging application on a Windows cluster.
+In this scenario, setting `global.cattle.windows.enabled` to `false` will disable Windows node logging on the cluster.
+When disabled, logs will still be collected from Linux nodes within the Windows cluster.
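For example, disabling Windows node logging in the chart values might look like the following sketch; only the key path named in this section is assumed.

```yaml
# values.yaml fragment for the rancher-logging chart
global:
  cattle:
    windows:
      enabled: false  # disable Windows node logging; Linux node logs are still collected
```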
+
+> Note: Currently an [issue](https://github.com/rancher/rancher/issues/32325) exists where Windows nodeAgents are not deleted when performing a `helm upgrade` after disabling Windows logging in a Windows cluster. In this scenario, users may need to manually remove the Windows nodeAgents if they are already installed.
+
+{{% /tab %}}
+{{% tab "Rancher v2.5.0-2.5.7" %}}
+Clusters with Windows workers support exporting logs from Linux nodes only; Windows node logs currently cannot be exported.
+
+To allow the logging pods to be scheduled on Linux nodes, tolerations must be added to the pods. Refer to the [Working with Taints and Tolerations](#working-with-taints-and-tolerations) section for details and an example.
+{{% /tab %}}
+{{% /tabs %}}
+
# Role-based Access Control
Rancher logging has two roles, `logging-admin` and `logging-view`.
@@ -191,9 +218,50 @@ spec:
- "devteam-splunk"
```
+### Output to Syslog
+
+Let's say you want to send all logs in your cluster to a `syslog` server. First, we create a cluster output.
+
+```yaml
+apiVersion: logging.banzaicloud.io/v1beta1
+kind: ClusterOutput
+metadata:
+  name: "example-syslog"
+  namespace: "cattle-logging-system"
+spec:
+  syslog:
+    buffer:
+      timekey: 30s
+      timekey_use_utc: true
+      timekey_wait: 10s
+      flush_interval: 5s
+    format:
+      type: json
+      app_name_field: test
+    host: syslog.example.com
+    insecure: true
+    port: 514
+    transport: tcp
+```
+
+Now that we have configured where we want the logs to go, let's configure all logs to go to that output.
+
+```yaml
+apiVersion: logging.banzaicloud.io/v1beta1
+kind: ClusterFlow
+metadata:
+  name: "all-logs"
+  namespace: cattle-logging-system
+spec:
+  globalOutputRefs:
+    - "example-syslog"
+```
+
### Unsupported Output
-For the final example, we create an output to write logs to a destination that is not supported out of the box (e.g. syslog):
+For the final example, we create an output to write logs to a destination that is not supported out of the box:
+
+> **Note on syslog:** As of Rancher v2.5.4, `syslog` is a supported output. However, this example still provides an overview of using unsupported plugins.
```yaml
apiVersion: v1
@@ -283,14 +351,14 @@ spec:
Let's break down what is happening here. First, we create a deployment of a container that has the additional `syslog` plugin and accepts logs forwarded from another `fluentd`. Next, we create an output configured as a forwarder to our deployment. The deployment `fluentd` will then forward all logs to the configured `syslog` destination.
-> **Note on syslog** Official `syslog` support is coming in Rancher v2.5.4. However, this example still provides an overview on using unsupported plugins.
-
# Working with a Custom Docker Root Directory
_Applies to v2.5.6+_
If using a custom Docker root directory, you can set `global.dockerRootDirectory` in `values.yaml`.
This will ensure that the Logging CRs created will use your specified path rather than the default Docker `data-root` location.
+Note that this only affects Linux nodes.
+If there are any Windows nodes in the cluster, the change will not be applicable to those nodes.
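For example, a chart values sketch; the path shown is hypothetical and should match your Docker `data-root`:

```yaml
# values.yaml fragment for the rancher-logging chart
global:
  dockerRootDirectory: /mnt/docker  # example path; use your actual Docker data-root
```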
# Working with Taints and Tolerations
@@ -300,11 +368,23 @@ Unless the pods have a `toleration` for that node's taint, they will run on othe
Using `nodeSelector` gives pods an affinity towards certain nodes.
Both provide a way to choose what node(s) the pod will run on.
+
### Default Implementation in Rancher's Logging Stack
+{{% tabs %}}
+{{% tab "Rancher v2.5.8" %}}
+By default, Rancher taints all Linux nodes with `cattle.io/os=linux`, and does not taint Windows nodes.
+The logging stack pods have `tolerations` for this taint, which enables them to run on Linux nodes.
+Moreover, most logging stack pods run on Linux only and have a `nodeSelector` added to ensure they run on Linux nodes.
+
+{{% /tab %}}
+{{% tab "Rancher v2.5.0-2.5.7" %}}
By default, Rancher taints all Linux nodes with `cattle.io/os=linux`, and does not taint Windows nodes.
The logging stack pods have `tolerations` for this taint, which enables them to run on Linux nodes.
Moreover, we can populate the `nodeSelector` to ensure that our pods *only* run on Linux nodes.
+
+{{% /tab %}}
+{{% /tabs %}}
Let's look at an example pod YAML file with these settings...
```yaml
@@ -325,11 +405,6 @@ spec:
In the above example, we ensure that our pod only runs on Linux nodes, and we add a `toleration` for the taint we have on all of our Linux nodes.
You can do the same with Rancher's existing taints, or with your own custom ones.
-### Windows Support
-
-Clusters with Windows workers support exporting logs from Linux nodes, but Windows node logs are currently unable to be exported.
-Only Linux node logs are able to be exported.
-
### Adding NodeSelector Settings and Tolerations for Custom Taints
If you would like to add your own `nodeSelector` settings, or if you would like to add `tolerations` for additional taints, you can pass the following to the chart's values.
@@ -351,6 +426,7 @@ fluentbit_tolerations:
# insert tolerations list for fluentbit containers only...
```
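As a sketch, the tolerations list above might be populated as follows; the taint key, value, and effect shown are hypothetical and should match your own custom taints.

```yaml
# Chart values sketch; the taint below is hypothetical
fluentbit_tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "logging"
    effect: "NoSchedule"
```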
+
# Logging v2 with SELinux
_Available as of v2.5.8_
@@ -363,6 +439,27 @@ To use Logging v2 with SELinux, we recommend installing the `rancher-selinux` RP
Then you will need to configure the logging application to work with SELinux as shown in [this section.]({{}}/rancher/v2.5/en/security/selinux/#configuring-the-logging-application-to-work-with-selinux)
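As a rough sketch, enabling SELinux support through the chart values might look like the following; the key path is an assumption and should be verified against the values of your chart version.

```yaml
# values.yaml sketch for the rancher-logging chart;
# the key path below is an assumption -- verify it for your chart version
global:
  seLinux:
    enabled: true
```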
+# Additional Logging Sources
+
+By default, Rancher collects logs for [control plane components](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) and [node components](https://kubernetes.io/docs/concepts/overview/components/#node-components) for all cluster types.
+In some cases, Rancher may be able to collect additional logs.
+
+The following table summarizes the sources where additional logs may be collected for each node type:
+
+| Logging Source | Linux Nodes (including in Windows cluster) | Windows Nodes |
+| --- | --- | ---|
+| RKE | ✓ | ✓ |
+| RKE2 | ✓ | |
+| K3s | ✓ | |
+| AKS | ✓ | |
+| EKS | ✓ | |
+| GKE | ✓ | |
+
+To enable hosted Kubernetes providers as additional logging sources, go to **Cluster Explorer > Logging > Chart Options** and select the **Enable enhanced cloud provider logging** option.
+When enabled, Rancher collects all additional node and control plane logs the provider has made available, which may vary between providers.
+If you're already using a cloud provider's own logging solution such as AWS CloudWatch or Google Cloud operations suite (formerly Stackdriver), it is not necessary to enable this option as the native solution will have unrestricted access to all logs.
+
+
# Troubleshooting
### The `cattle-logging` Namespace Being Recreated
diff --git a/content/rancher/v2.5/en/monitoring-alerting/_index.md b/content/rancher/v2.5/en/monitoring-alerting/_index.md
index f59f886c6fe..e45f6b8a830 100644
--- a/content/rancher/v2.5/en/monitoring-alerting/_index.md
+++ b/content/rancher/v2.5/en/monitoring-alerting/_index.md
@@ -213,7 +213,7 @@ For more information on configuring Alertmanager in Rancher, see [this page.](./
**Result:** `rancher-monitoring` is uninstalled.
-> **Note on Persistent Grafana Dashboards:** For users who are using Monitoring V2 v9.4.203 or below, uninstalling the Monitoring chart will delete the cattle-dashboards namespace, which will delete all persisted dashboards, unless the namespace is marked with the annotation `helm.sh/resource-policy: "keep"`. This annotation is added by default in Rancher v2.5.8+.
+> **Note on Persistent Grafana Dashboards:** For users who are using Monitoring V2 v9.4.203 or below, uninstalling the Monitoring chart will delete the cattle-dashboards namespace, which will delete all persisted dashboards, unless the namespace is marked with the annotation `helm.sh/resource-policy: "keep"`. This annotation is added by default in Monitoring V2 v14.5.100+ but can be manually applied on the cattle-dashboards namespace before an uninstall if an older version of the Monitoring chart is currently installed onto your cluster.
# Setting Resource Limits and Requests
diff --git a/content/rancher/v2.5/en/monitoring-alerting/configuration/alertmanager/_index.md b/content/rancher/v2.5/en/monitoring-alerting/configuration/alertmanager/_index.md
index b51fdd82907..ba0c63dac06 100644
--- a/content/rancher/v2.5/en/monitoring-alerting/configuration/alertmanager/_index.md
+++ b/content/rancher/v2.5/en/monitoring-alerting/configuration/alertmanager/_index.md
@@ -74,6 +74,13 @@ The notification integrations are configured with the `receiver`, which is expla
### Native vs. Non-native Receivers
+By default, Alertmanager provides native integration with some receivers, which are listed in [this section.](https://prometheus.io/docs/alerting/latest/configuration/#receiver) All natively supported receivers are configurable through the Rancher UI.
+
+For notification mechanisms not natively supported by Alertmanager, integration is achieved using the [webhook receiver.](https://prometheus.io/docs/alerting/latest/configuration/#webhook_config) A list of third-party drivers providing such integrations can be found [here.](https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver) Access to these drivers, and their associated integrations, is provided through the Alerting Drivers app. Once enabled, configuring non-native receivers can also be done through the Rancher UI.
+
+Currently, the Rancher Alerting Drivers app provides access to the following integrations:
+- Microsoft Teams, based on the [prom2teams](https://github.com/idealista/prom2teams) driver
+- SMS, based on the [Sachet](https://github.com/messagebird/sachet) driver
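As an illustration, a non-native receiver ultimately resolves to a webhook configuration in the Alertmanager Config Secret that points at the driver's service. The receiver name and service URL below are hypothetical:

```yaml
# Alertmanager receiver sketch; the URL points at a hypothetical
# prom2teams service deployed by the Alerting Drivers app
receivers:
  - name: teams-receiver
    webhook_configs:
      - url: "http://rancher-alerting-drivers-prom2teams.ns-1.svc:8089/v2/teams-instance-1"
        send_resolved: true
```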
### Changes in Rancher v2.5.8
@@ -99,7 +106,7 @@ The following types of receivers can be configured in the Rancher UI:
The custom receiver option can be used to configure any receiver in YAML that cannot be configured by filling out the other forms in the Rancher UI.
-### Slack
+# Slack
| Field | Type | Description |
|------|--------------|------|
@@ -108,7 +115,7 @@ The custom receiver option can be used to configure any receiver in YAML that ca
| Proxy URL | String | Proxy for the webhook notifications. |
| Enable Send Resolved Alerts | Bool | Whether to send a follow-up notification if an alert has been resolved (e.g. [Resolved] High CPU Usage). |
-### Email
+# Email
| Field | Type | Description |
|------|--------------|------|
@@ -125,7 +132,7 @@ SMTP options:
| Username | String | Enter a username to authenticate with the SMTP server. |
| Password | String | Enter a password to authenticate with the SMTP server. |
-### PagerDuty
+# PagerDuty
| Field | Type | Description |
|------|------|-------|
@@ -134,7 +141,7 @@ SMTP options:
| Proxy URL | String | Proxy for the PagerDuty notifications. |
| Enable Send Resolved Alerts | Bool | Whether to send a follow-up notification if an alert has been resolved (e.g. [Resolved] High CPU Usage). |
-### Opsgenie
+# Opsgenie
| Field | Description |
|------|-------------|
@@ -149,7 +156,7 @@ Opsgenie Responders:
| Type | String | Schedule, Team, User, or Escalation. For more information on alert responders, refer to the [Opsgenie documentation.](https://docs.opsgenie.com/docs/alert-recipients-and-teams) |
| Send To | String | Id, Name, or Username of the Opsgenie recipient. |
-### Webhook
+# Webhook
| Field | Description |
|-------|--------------|
@@ -159,13 +166,13 @@ Opsgenie Responders:
-### Custom
+# Custom
The YAML provided here will be directly appended to your receiver within the Alertmanager Config Secret.
-### Teams
+# Teams
-#### Enabling the Teams Receiver for Rancher Managed Clusters
+### Enabling the Teams Receiver for Rancher Managed Clusters
The Teams receiver is not a native receiver and must be enabled before it can be used. You can enable the Teams receiver for a Rancher managed cluster by going to the Apps page and installing the rancher-alerting-drivers app with the Teams option selected.
@@ -176,7 +183,7 @@ The Teams receiver is not a native receiver and must be enabled before it can be
1. Select the **Teams** option and click **Install**.
1. Take note of the namespace used as it will be required in a later step.
-#### Configure the Teams Receiver
+### Configure the Teams Receiver
The Teams receiver can be configured by updating its ConfigMap. For example, the following is a minimal Teams receiver configuration.
@@ -197,9 +204,9 @@ url: http://rancher-alerting-drivers-prom2teams.ns-1.svc:8089/v2/teams-instance-
-### SMS
+# SMS
-#### Enabling the SMS Receiver for Rancher Managed Clusters
+### Enabling the SMS Receiver for Rancher Managed Clusters
The SMS receiver is not a native receiver and must be enabled before it can be used. You can enable the SMS receiver for a Rancher managed cluster by going to the Apps page and installing the rancher-alerting-drivers app with the SMS option selected.
@@ -210,11 +217,15 @@ The SMS receiver is not a native receiver and must be enabled before it can be u
1. Select the **SMS** option and click **Install**.
1. Take note of the namespace used as it will be required in a later step.
-#### Configure the SMS Receiver
+### Configure the SMS Receiver
The SMS receiver can be configured by updating its ConfigMap. For example, the following is a minimal SMS receiver configuration.
```yaml
+providers:
+ telegram:
+ token: 'your-token-from-telegram'
+
receivers:
- name: 'telegram-receiver-1'
provider: 'telegram'
diff --git a/content/rancher/v2.5/en/monitoring-alerting/persist-grafana/_index.md b/content/rancher/v2.5/en/monitoring-alerting/persist-grafana/_index.md
index c1a5a246902..95d45176ce5 100644
--- a/content/rancher/v2.5/en/monitoring-alerting/persist-grafana/_index.md
+++ b/content/rancher/v2.5/en/monitoring-alerting/persist-grafana/_index.md
@@ -14,40 +14,64 @@ To allow the Grafana dashboard to persist after the Grafana instance restarts, a
{{% tabs %}}
{{% tab "Rancher v2.5.8+" %}}
+
> **Prerequisites:**
>
> - The monitoring application needs to be installed.
> - To create the persistent dashboard, you must have at least the **Manage Config Maps** Rancher RBAC permissions assigned to you in the project or namespace that contains the Grafana Dashboards. This correlates to the `monitoring-dashboard-edit` or `monitoring-dashboard-admin` Kubernetes native RBAC Roles exposed by the Monitoring chart.
> - To see the links to the external monitoring UIs, including Grafana dashboards, you will need at least a [project-member role.]({{}}/rancher/v2.5/en/monitoring-alerting/rbac/#users-with-rancher-cluster-manager-based-permissions)
-1. Open the Grafana dashboard. From the **Cluster Explorer,** click **Cluster Explorer > Monitoring.**
-1. Log in to Grafana. Note: The default Admin username and password for the Grafana instance is `admin/prom-operator`. (Regardless of who has the password, the **Manage Config Maps** permission in Rancher is still required to access the Grafana instance.) Alternative credentials can also be supplied on deploying or upgrading the chart.
-1. Go to the dashboard that you want to persist. In the top navigation menu, go to the dashboard settings by clicking the gear icon.
-1. In the left navigation menu, click **JSON Model.**
+### 1. Get the JSON model of the dashboard that you want to persist
+
+To create a persistent dashboard, you will need to get the JSON model of the dashboard you want to persist. You can use a premade dashboard or build your own.
+
+To use a premade dashboard, go to [https://grafana.com/grafana/dashboards](https://grafana.com/grafana/dashboards), open up its detail page, and click on the **Download JSON** button to get the JSON model for the next step.
+
+To use your own dashboard:
+
+1. Open Grafana: from the **Cluster Explorer,** click **Cluster Explorer > Monitoring.**
+1. Log in to Grafana. Note: The default Admin username and password for the Grafana instance is `admin/prom-operator`. Alternative credentials can also be supplied on deploying or upgrading the chart.
+
+   > **Note:** Regardless of who has the password, in order to access the Grafana instance, you still need at least the Manage Services or View Monitoring permission in the project that Rancher Monitoring is deployed into.
+1. Create a dashboard using Grafana's UI. Once complete, go to the dashboard's settings by clicking on the gear icon in the top navigation menu. In the left navigation menu, click **JSON Model.**
1. Copy the JSON data structure that appears.
-1. Create a ConfigMap in the `cattle-dashboards` namespace.
- Paste the JSON into the ConfigMap in the format shown in the example below:
- ```yaml
- apiVersion: v1
- kind: ConfigMap
- metadata:
- labels:
- grafana_dashboard: "1"
- name:
- namespace: cattle-dashboards
- data:
- .json: |-
-
- ```
+### 2. Create a ConfigMap using the Grafana JSON model
- > By default, Grafana is configured to watch all ConfigMaps with the `grafana_dashboard` label within the `cattle-dashboards` namespace.
- >
- > To specify that you would like Grafana to watch for ConfigMaps across all namespaces, refer to [this section.](#configuring-namespaces-for-the-grafana-dashboard-configmap)
+Create a ConfigMap in the namespace that contains your Grafana Dashboards (`cattle-dashboards` by default).
+
+The ConfigMap should look like this:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ labels:
+ grafana_dashboard: "1"
+ name:
+ namespace: cattle-dashboards # Change if using a non-default namespace
+data:
+ .json: |-
+
+```
+
+By default, Grafana is configured to watch all ConfigMaps with the `grafana_dashboard` label within the `cattle-dashboards` namespace.
+
+To specify that you would like Grafana to watch for ConfigMaps across all namespaces, refer to [this section.](#configuring-namespaces-for-the-grafana-dashboard-configmap)
+
+To create the ConfigMap in the Rancher UI,
+
+1. Go to the Cluster Explorer.
+1. Click **Core > ConfigMaps**.
+1. Click **Create**.
+1. Set up the key-value pairs similar to the example above. When entering the value for `.json`, click **Read from File** to upload the JSON data model as the value.
+1. Click **Create**.
**Result:** After the ConfigMap is created, it should show up on the Grafana UI and be persisted even if the Grafana pod is restarted.
-Dashboards that are persisted using ConfigMaps cannot be deleted from the Grafana UI. If you attempt to delete the dashboard in the Grafana UI, you will see the error message "Dashboard cannot be deleted because it was provisioned." To delete the dashboard, you will need to delete the ConfigMap.
+Dashboards that are persisted using ConfigMaps cannot be deleted or edited from the Grafana UI.
+
+If you attempt to delete the dashboard in the Grafana UI, you will see the error message "Dashboard cannot be deleted because it was provisioned." To delete the dashboard, you will need to delete the ConfigMap.
### Configuring Namespaces for the Grafana Dashboard ConfigMap
@@ -67,7 +91,9 @@ Note that the RBAC roles exposed by the Monitoring chart to add Grafana Dashboar
> - You must have the cluster-admin ClusterRole permission.
1. Open the Grafana dashboard. From the **Cluster Explorer,** click **Cluster Explorer > Monitoring.**
-1. Log in to Grafana. Note: The default Admin username and password for the Grafana instance is `admin/prom-operator`. (Regardless of who has the password, cluster administrator permission in Rancher is still required to access the Grafana instance.) Alternative credentials can also be supplied on deploying or upgrading the chart.
+1. Log in to Grafana. Note: The default Admin username and password for the Grafana instance is `admin/prom-operator`. Alternative credentials can also be supplied on deploying or upgrading the chart.
+
+ > **Note:** Regardless of who has the password, cluster administrator permission in Rancher is still required to access the Grafana instance.
1. Go to the dashboard that you want to persist. In the top navigation menu, go to the dashboard settings by clicking the gear icon.
1. In the left navigation menu, click **JSON Model.**
1. Copy the JSON data structure that appears.
@@ -103,4 +129,4 @@ helm.sh/resource-policy: "keep"
For users who are using Monitoring V2 v9.4.203 or below, uninstalling the Monitoring chart will delete the `cattle-dashboards` namespace, which will delete all persisted dashboards, unless the namespace is marked with the annotation `helm.sh/resource-policy: "keep"`.
-This annotation will be added by default in the new monitoring chart released by Rancher v2.5.8, but it still needs to be manually applied for users of earlier Rancher versions.
\ No newline at end of file
+This annotation will be added by default in the new monitoring chart released by Rancher v2.5.8, but it still needs to be manually applied for users of earlier Rancher versions.
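For users of earlier Rancher versions, a minimal sketch of marking the namespace so it survives an uninstall looks like this (applied with `kubectl apply`):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cattle-dashboards
  annotations:
    helm.sh/resource-policy: "keep"
```

Equivalently, the annotation can be added to the existing namespace with `kubectl annotate namespace cattle-dashboards helm.sh/resource-policy=keep`.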
diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md
index 0000e447a29..3a31a400ad5 100644
--- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md
+++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md
@@ -43,4 +43,4 @@ The `Custom` cloud provider is available if you want to configure any [Kubernete
For the custom cloud provider option, you can refer to the [RKE docs]({{}}/rke/latest/en/config-options/cloud-providers/) on how to edit the yaml file for your specific cloud provider. There are specific cloud providers that have more detailed configuration:
* [vSphere]({{}}/rke/latest/en/config-options/cloud-providers/vsphere/)
-* [Openstack]({{}}/rke/latest/en/config-options/cloud-providers/openstack/)
\ No newline at end of file
+* [OpenStack]({{}}/rke/latest/en/config-options/cloud-providers/openstack/)
diff --git a/content/rancher/v2.x/en/faq/kubectl/_index.md b/content/rancher/v2.x/en/faq/kubectl/_index.md
index b4172ab0a40..ffd8eee6789 100644
--- a/content/rancher/v2.x/en/faq/kubectl/_index.md
+++ b/content/rancher/v2.x/en/faq/kubectl/_index.md
@@ -11,12 +11,12 @@ See [kubectl Installation](https://kubernetes.io/docs/tasks/tools/install-kubect
### Configuration
-When you create a Kubernetes cluster with RKE, RKE creates a `kube_config_rancher-cluster.yml` in the local directory that contains credentials to connect to your new cluster with tools like `kubectl` or `helm`.
+When you create a Kubernetes cluster with RKE, RKE creates a `kube_config_cluster.yml` in the local directory that contains credentials to connect to your new cluster with tools like `kubectl` or `helm`.
-You can copy this file to `$HOME/.kube/config` or if you are working with multiple Kubernetes clusters, set the `KUBECONFIG` environmental variable to the path of `kube_config_rancher-cluster.yml`.
+You can copy this file to `$HOME/.kube/config` or if you are working with multiple Kubernetes clusters, set the `KUBECONFIG` environmental variable to the path of `kube_config_cluster.yml`.
```
-export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
+export KUBECONFIG=$(pwd)/kube_config_cluster.yml
```
Test your connectivity with `kubectl` and see if you can get the list of nodes back.
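If you work with several kubeconfig files, `KUBECONFIG` also accepts a colon-separated list of paths that `kubectl` merges into one view; a quick sketch:

```shell
# merge the default kubeconfig with the RKE-generated one
export KUBECONFIG="$HOME/.kube/config:$(pwd)/kube_config_cluster.yml"
# kubectl now sees contexts from both files, e.g.:
#   kubectl config get-contexts
```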
diff --git a/content/rancher/v2.x/en/installation/requirements/_index.md b/content/rancher/v2.x/en/installation/requirements/_index.md
index d4c20002eb1..a2e4038ca87 100644
--- a/content/rancher/v2.x/en/installation/requirements/_index.md
+++ b/content/rancher/v2.x/en/installation/requirements/_index.md
@@ -16,7 +16,9 @@ Make sure the node(s) for the Rancher server fulfill the following requirements:
- [RKE and Hosted Kubernetes](#rke-and-hosted-kubernetes)
- [K3s Kubernetes](#k3s-kubernetes)
- [RancherD](#rancherd)
+ - [RKE2 Kubernetes](#rke2-kubernetes)
- [CPU and Memory for Rancher before v2.4.0](#cpu-and-memory-for-rancher-before-v2-4-0)
+- [Ingress](#ingress)
- [Disks](#disks)
- [Networking Requirements](#networking-requirements)
- [Node IP Addresses](#node-ip-addresses)
@@ -30,7 +32,7 @@ The Rancher UI works best in Firefox or Chrome.
Rancher should work with any modern Linux distribution.
-Docker is required for nodes that will run RKE Kubernetes clusters. It is not required for RancherD installs.
+Docker is required for nodes that will run RKE Kubernetes clusters. It is not required for RancherD or RKE2 Kubernetes installs.
Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)
@@ -66,7 +68,17 @@ At this time, only Linux OSes that leverage systemd are supported.
To install RancherD on SELinux Enforcing CentOS 8 or RHEL 8 nodes, some [additional steps](#rancherd-on-selinux-enforcing-centos-8-or-rhel-8-nodes) are required.
-Docker is not required for RancherD installs.
+Docker is not required for RancherD installs.
+
+### RKE2 Specific Requirements
+
+_The RKE2 install is available as of v2.5.6._
+
+For details on which OS versions were tested with RKE2, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)
+
+Docker is not required for RKE2 installs.
+
+The Ingress should be deployed as a DaemonSet to ensure your load balancer can successfully route traffic to all nodes. Currently, RKE2 deploys nginx-ingress as a deployment by default, so you will need to deploy it as a DaemonSet by following [these steps.]({{}}/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-rke2/#5-configure-nginx-to-be-a-daemonset)
### Installing Docker
@@ -124,6 +136,15 @@ These CPU and memory requirements apply to each instance with RancherD installed
| Small | Up to 5 | Up to 50 | 2 | 5 GB |
| Medium | Up to 15 | Up to 200 | 3 | 9 GB |
+### RKE2 Kubernetes
+
+These CPU and memory requirements apply to each instance with RKE2 installed. Minimum recommendations are outlined here.
+
+| Deployment Size | Clusters | Nodes | vCPUs | RAM |
+| --------------- | -------- | --------- | ----- | ---- |
+| Small | Up to 5 | Up to 50 | 2 | 5 GB |
+| Medium | Up to 15 | Up to 200 | 3 | 9 GB |
+
### Docker
These CPU and memory requirements apply to a host with a [single-node]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) installation of Rancher.
@@ -147,6 +168,23 @@ These CPU and memory requirements apply to installing Rancher on an RKE Kubernet
| XX-Large | 100+ | 1000+ | [Contact Rancher](https://rancher.com/contact/) | [Contact Rancher](https://rancher.com/contact/) |
{{% /accordion %}}
+# Ingress
+
+Each node in the Kubernetes cluster that Rancher is installed on should run an Ingress.
+
+The Ingress should be deployed as a DaemonSet to ensure your load balancer can successfully route traffic to all nodes.
+
+For RKE, K3s, and RancherD installations, you don't have to install the Ingress manually because it is installed by default.
+
+For hosted Kubernetes clusters (EKS, GKE, AKS) and RKE2 Kubernetes installations, you will need to set up the Ingress yourself.
+
+### Ingress for RKE2
+
+Currently, RKE2 deploys nginx-ingress as a deployment by default, so you will need to deploy it as a DaemonSet by following [these steps.]({{}}/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-rke2/#5-configure-nginx-to-be-a-daemonset)
+
+### Ingress for EKS
+For an example of how to deploy an nginx-ingress-controller with a LoadBalancer service, refer to [this section.]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/amazon-eks/#5-install-an-ingress)
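For orientation, a LoadBalancer Service in front of an nginx ingress controller typically looks like the sketch below. The name, namespace, and selector labels are assumptions based on the upstream ingress-nginx chart defaults, not Rancher-specific values:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # assumption: upstream chart's default controller name
  namespace: ingress-nginx
spec:
  type: LoadBalancer               # the cloud provider provisions an external load balancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
```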
+
# Disks
Rancher performance depends on the performance of etcd in the cluster. To ensure optimal speed, we recommend always using SSD disks to back your Rancher management Kubernetes cluster. On cloud providers, you will also want to use the minimum size that allows the maximum IOPS. In larger clusters, consider using dedicated storage devices for etcd data and wal directories.
diff --git a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-RKE2/_index.md b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-RKE2/_index.md
index 3d208d1e19c..4fed80ea758 100644
--- a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-RKE2/_index.md
+++ b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-RKE2/_index.md
@@ -9,7 +9,7 @@ This section describes how to install a Kubernetes cluster according to the [bes
# Prerequisites
-These instructions assume you have set up three nodes, a load balancer, a DNS record, [this section.]({{}}/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-rke2-ha)
+These instructions assume you have set up three nodes, a load balancer, and a DNS record, as described in [this section.]({{}}/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-rke2-ha)
Note that in order for RKE2 to work correctly with the load balancer, you need to set up two listeners: one for the supervisor on port 9345, and one for the Kubernetes API on port 6443.
@@ -161,7 +161,7 @@ Currently, RKE2 deploys nginx-ingress as a deployment, and that can impact the R
To rectify that, place the following file in /var/lib/rancher/rke2/server/manifests on any of the server nodes:
-```
+```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
@@ -173,7 +173,4 @@ spec:
kind: DaemonSet
daemonset:
useHostPort: true
- image:
- repository: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller
- tag: "v0.34.1"
```
diff --git a/content/rancher/v2.x/en/opa-gatekeper/_index.md b/content/rancher/v2.x/en/opa-gatekeper/_index.md
index d0c4f81f75d..d85647418d5 100644
--- a/content/rancher/v2.x/en/opa-gatekeper/_index.md
+++ b/content/rancher/v2.x/en/opa-gatekeper/_index.md
@@ -41,7 +41,7 @@ OPA Gatekeeper can be installed from the new **Cluster Explorer** view in Ranche
1. Go to the cluster view in the Rancher UI. Click **Cluster Explorer.**
1. Click **Apps** in the top navigation bar.
-1. Click **rancher-gatekeeper.**
+1. Click **OPA Gatekeeper.**
1. Click **Install.**
**Result:** OPA Gatekeeper is deployed in your Kubernetes cluster.
diff --git a/content/rke/latest/en/config-options/cloud-providers/openstack/_index.md b/content/rke/latest/en/config-options/cloud-providers/openstack/_index.md
index b268ff300de..4675a779755 100644
--- a/content/rke/latest/en/config-options/cloud-providers/openstack/_index.md
+++ b/content/rke/latest/en/config-options/cloud-providers/openstack/_index.md
@@ -1,9 +1,9 @@
---
-title: Openstack Cloud Provider
+title: OpenStack Cloud Provider
weight: 253
---
-To enable the Openstack cloud provider, besides setting the name as `openstack`, there are specific configuration options that must be set. The Openstack configuration options are grouped into different sections.
+To enable the OpenStack cloud provider, besides setting the name as `openstack`, there are specific configuration options that must be set. The OpenStack configuration options are grouped into different sections.
```yaml
cloud_provider:
@@ -27,11 +27,11 @@ cloud_provider:
## Overriding the hostname
-The OpenStack cloud provider uses the instance name (as determined from OpenStack metadata) as the name of the Kubernetes Node object, you must override the Kubernetes name on the node by setting the `hostname_override` for each node. If you do not set the `hostname_override`, the Kubernetes node name will be set as the `address`, which will cause the Openstack cloud provider to fail.
+The OpenStack cloud provider uses the instance name (as determined from the OpenStack metadata) as the name of the Kubernetes Node object, so you must override the Kubernetes node name by setting the `hostname_override` for each node. If you do not set the `hostname_override`, the Kubernetes node name will be set to the `address`, which will cause the OpenStack cloud provider to fail.
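In an RKE `cluster.yml`, this looks like the sketch below (the address, user, and instance name are placeholders):

```yaml
nodes:
  - address: 192.168.1.10
    user: ubuntu
    role: [controlplane, etcd, worker]
    hostname_override: openstack-instance-1  # placeholder: set to the OpenStack instance name
```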
-## Openstack Configuration Options
+## OpenStack Configuration Options
-The Openstack configuration options are divided into 5 groups.
+The OpenStack configuration options are divided into 5 groups.
* Global
* Load Balancer
@@ -103,4 +103,4 @@ These are the options that are available under the `metadata` directive.
| search-order | string | |
| request-timeout | int | |
-For more information of Openstack configurations options please refer to the official Kubernetes [documentation](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#openstack).
+For more information on OpenStack configuration options, please refer to the official Kubernetes [documentation](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#openstack).
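Putting the sections together, a hedged sketch of how the `metadata` options slot into the cloud provider block (all values are illustrative placeholders, not recommendations):

```yaml
cloud_provider:
  name: openstack
  openstackCloudProvider:
    global:
      username: admin                                 # placeholder credentials
      password: xxxx
      auth-url: https://keystone.example.com:5000/v3  # placeholder Keystone endpoint
      tenant-id: abc123
      domain-name: Default
    metadata:
      search-order: configDrive,metadataService
      request-timeout: 0
```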