Mirror of https://github.com/rancher/rancher-docs.git, synced 2026-05-16 18:13:17 +00:00

Merge branch 'staging' into v2.5.6
@@ -48,7 +48,9 @@ After registration, the agent nodes establish a connection directly to one of th

 Agent nodes are registered with a websocket connection initiated by the `k3s agent` process, and the connection is maintained by a client-side load balancer running as part of the agent process.

-Agents will register with the server using the node cluster secret along with a randomly generated password for the node, stored at `/etc/rancher/node/password`. The server will store the passwords for individual nodes at `/var/lib/rancher/k3s/server/cred/node-passwd`, and any subsequent attempts must use the same password.
+Agents will register with the server using the node cluster secret along with a randomly generated password for the node, stored at `/etc/rancher/node/password`. The server will store the passwords for individual nodes as Kubernetes secrets, and any subsequent attempts must use the same password. Node password secrets are stored in the `kube-system` namespace with names using the template `<host>.node-password.k3s`.
+
+Note: Prior to K3s v1.20.2 servers stored passwords on disk at `/var/lib/rancher/k3s/server/cred/node-passwd`.

 If the `/etc/rancher/node` directory of an agent is removed, the password file should be recreated for the agent, or the entry removed from the server.
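The secret naming template in the new text above can be illustrated with a short sketch; the hostname and the `kubectl` lookup are hypothetical, not part of this commit:

```shell
# Build the node-password secret name for an agent, following the
# <host>.node-password.k3s template used by K3s v1.20.2 and later.
HOST="agent-1"   # hypothetical hostname
SECRET="${HOST}.node-password.k3s"
echo "$SECRET"
# The secret itself would live in the kube-system namespace, e.g.:
#   kubectl -n kube-system get secret "$SECRET"
```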
@@ -56,4 +58,4 @@ A unique node ID can be appended to the hostname by launching K3s servers or age

 # Automatically Deployed Manifests

 The [manifests](https://github.com/rancher/k3s/tree/master/manifests) located at the directory path `/var/lib/rancher/k3s/server/manifests` are bundled into the K3s binary at build time. These will be installed at runtime by the [rancher/helm-controller.](https://github.com/rancher/helm-controller#helm-controller)
@@ -94,6 +94,6 @@ K3S_DATASTORE_KEYFILE='/path/to/client.key' \
 k3s server
 ```
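For context around the truncated hunk above, K3s reads the external datastore connection from `K3S_DATASTORE_*` environment variables; a minimal sketch, assuming a placeholder etcd endpoint and certificate paths:

```shell
# Point K3s at an external datastore over TLS (all values are placeholders).
K3S_DATASTORE_ENDPOINT='https://datastore.example.com:2379'
K3S_DATASTORE_CAFILE='/path/to/ca.crt'
K3S_DATASTORE_CERTFILE='/path/to/client.crt'
K3S_DATASTORE_KEYFILE='/path/to/client.key'
export K3S_DATASTORE_ENDPOINT K3S_DATASTORE_CAFILE \
       K3S_DATASTORE_CERTFILE K3S_DATASTORE_KEYFILE
# With these exported, the server is started as shown in the hunk:
#   k3s server
echo "datastore: ${K3S_DATASTORE_ENDPOINT}"
```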

-### Embedded etcd for HA (Experimental)
+### Embedded etcd for HA

-Please see [High Availability with Embedded DB (Experimental)]({{<baseurl>}}/k3s/latest/en/installation/ha-embedded/) for instructions on how to run with this option.
+Please see [High Availability with Embedded DB]({{<baseurl>}}/k3s/latest/en/installation/ha-embedded/) for instructions on how to run with this option.
@@ -21,7 +21,7 @@ If there are specific node drivers that you don't want to show to your users, yo

 By default, Rancher only activates drivers for the most popular cloud providers, Amazon EC2, Azure, DigitalOcean and vSphere. If you want to show or hide any node driver, you can change its status.

-1. From the **Global** view, choose **Tools > Drivers** in the navigation bar. From the **Drivers** page, select the **Node Drivers** tab. In version prior to v2.2.0, you can select **Node Drivers** directly in the navigation bar.
+1. From the **Global** view, choose **Tools > Drivers** in the navigation bar. From the **Drivers** page, select the **Node Drivers** tab. In version before v2.2.0, you can select **Node Drivers** directly in the navigation bar.

 2. Select the driver that you wish to **Activate** or **Deactivate** and select the appropriate icon.
@@ -29,7 +29,7 @@ By default, Rancher only activates drivers for the most popular cloud providers,

 If you want to use a node driver that Rancher doesn't support out-of-the-box, you can add that provider's driver in order to start using them to create node templates and eventually node pools for your Kubernetes cluster.

-1. From the **Global** view, choose **Tools > Drivers** in the navigation bar. From the **Drivers** page, select the **Node Drivers** tab. In version prior to v2.2.0, you can select **Node Drivers** directly in the navigation bar.
+1. From the **Global** view, choose **Tools > Drivers** in the navigation bar. From the **Drivers** page, select the **Node Drivers** tab. In version before v2.2.0, you can select **Node Drivers** directly in the navigation bar.

 2. Click **Add Node Driver**.
@@ -61,7 +61,7 @@ The steps to add custom roles differ depending on the version of Rancher.

 1. Click **Create**.

 {{% /tab %}}
-{{% tab "Rancher prior to v2.0.7" %}}
+{{% tab "Rancher before v2.0.7" %}}

 1. From the **Global** view, select **Security > Roles** from the main menu.
@@ -42,7 +42,7 @@ Because the Kubernetes version is now included in the snapshot, it is possible t

 The multiple components of the snapshot allow you to select from the following options if you need to restore a cluster from a snapshot:

-- **Restore just the etcd contents:** This restore is similar to restoring to snapshots in Rancher prior to v2.4.0.
+- **Restore just the etcd contents:** This restore is similar to restoring to snapshots in Rancher before v2.4.0.
 - **Restore etcd and Kubernetes version:** This option should be used if a Kubernetes upgrade is the reason that your cluster is failing, and you haven't made any cluster configuration changes.
 - **Restore etcd, Kubernetes versions and cluster configuration:** This option should be used if you changed both the Kubernetes version and cluster configuration when upgrading.
@@ -85,7 +85,7 @@ On restore, the following process is used:

 5. The cluster is restored and post-restore actions will be done in the cluster.

 {{% /tab %}}
-{{% tab "Rancher prior to v2.4.0" %}}
+{{% tab "Rancher before v2.4.0" %}}
 When Rancher creates a snapshot, only the etcd data is included in the snapshot.

 Because the Kubernetes version is not included in the snapshot, there is no option to restore a cluster to a different Kubernetes version.
@@ -217,4 +217,4 @@ This option is not available directly in the UI, and is only available through t

 # Enabling Snapshot Features for Clusters Created Before Rancher v2.2.0

-If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Rancher, you must [edit the cluster]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-admin/restoring-etcd/).
+If you have any Rancher launched Kubernetes clusters that were created before v2.2.0, after upgrading Rancher, you must [edit the cluster]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots before v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-admin/restoring-etcd/).
@@ -144,7 +144,7 @@ There are two drain modes: aggressive and safe.

 If a node has standalone pods or ephemeral data it will be cordoned but not drained.
 {{% /tab %}}
-{{% tab "Rancher prior to v2.2.x" %}}
+{{% tab "Rancher before v2.2.x" %}}

 The following list describes each drain option:
@@ -170,7 +170,7 @@ The timeout given to each pod for cleaning things up, so they will have chance t

 The amount of time drain should continue to wait before giving up.

->**Kubernetes Known Issue:** The [timeout setting](https://github.com/kubernetes/kubernetes/pull/64378) was not enforced while draining a node prior to Kubernetes 1.12.
+>**Kubernetes Known Issue:** The [timeout setting](https://github.com/kubernetes/kubernetes/pull/64378) was not enforced while draining a node before Kubernetes 1.12.

 ### Drained and Cordoned State
@@ -37,7 +37,7 @@ Restores changed in Rancher v2.4.0.

 Snapshots are composed of the cluster data in etcd, the Kubernetes version, and the cluster configuration in the `cluster.yml.` These components allow you to select from the following options when restoring a cluster from a snapshot:

-- **Restore just the etcd contents:** This restore is similar to restoring to snapshots in Rancher prior to v2.4.0.
+- **Restore just the etcd contents:** This restore is similar to restoring to snapshots in Rancher before v2.4.0.
 - **Restore etcd and Kubernetes version:** This option should be used if a Kubernetes upgrade is the reason that your cluster is failing, and you haven't made any cluster configuration changes.
 - **Restore etcd, Kubernetes versions and cluster configuration:** This option should be used if you changed both the Kubernetes version and cluster configuration when upgrading.
@@ -58,7 +58,7 @@ When rolling back to a prior Kubernetes version, the [upgrade strategy options](

 **Result:** The cluster will go into `updating` state and the process of restoring the `etcd` nodes from the snapshot will start. The cluster is restored when it returns to an `active` state.

 {{% /tab %}}
-{{% tab "Rancher prior to v2.4.0" %}}
+{{% tab "Rancher before v2.4.0" %}}

 > **Prerequisites:**
 >
@@ -110,4 +110,4 @@ If the group of etcd nodes loses quorum, the Kubernetes cluster will report a fa

 # Enabling Snapshot Features for Clusters Created Before Rancher v2.2.0

-If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Rancher, you must [edit the cluster]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-admin/restoring-etcd/).
+If you have any Rancher launched Kubernetes clusters that were created before v2.2.0, after upgrading Rancher, you must [edit the cluster]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots before v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-admin/restoring-etcd/).
@@ -25,7 +25,7 @@ All the tests that are skipped and not applicable on this page will be counted a
 | 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
 | 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
 | 1.2.34 | Ensure that encryption providers are appropriately configured (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
-| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true. |
+| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System level configurations are required before provisioning the cluster in order for this argument to be set to true. |
 | 4.2.10 | Ensure that the--tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
 | 5.1.5 | Ensure that default service accounts are not actively used. (Scored) | Kubernetes provides default service accounts to be used. |
 | 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
@@ -81,7 +81,7 @@ Number | Description | Reason for Skipping
 1.7.3 | "Do not admit containers wishing to share the host IPC namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
 1.7.4 | "Do not admit containers wishing to share the host network namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
 1.7.5 | " Do not admit containers with allowPrivilegeEscalation (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
-2.1.6 | "Ensure that the --protect-kernel-defaults argument is set to true (Scored)" | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true.
+2.1.6 | "Ensure that the --protect-kernel-defaults argument is set to true (Scored)" | System level configurations are required before provisioning the cluster in order for this argument to be set to true.
 2.1.10 | "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored)" | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.

 ### CIS Benchmark v1.4 Not Applicable Tests
@@ -106,7 +106,7 @@ Workload metrics display the hardware utilization for a Kubernetes workload. You

 1. From the **Global** view, navigate to the project that you want to view workload metrics.

-1. From the main navigation bar, choose **Resources > Workloads.** In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.
+1. From the main navigation bar, choose **Resources > Workloads.** In versions before v2.3.0, choose **Workloads** on the main navigation bar.

 1. Select a specific workload and click on its name.
@@ -73,7 +73,7 @@ To access a project-level Grafana instance,

 1. Go to a project that has monitoring enabled.

-1. From the project view, click **Apps.** In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar.
+1. From the project view, click **Apps.** In versions before v2.2.0, choose **Catalog Apps** on the main navigation bar.

 1. Go to the `project-monitoring` application.
@@ -20,9 +20,9 @@ Rancher's dashboards are available at multiple locations:

 - **Cluster Dashboard**: From the **Global** view, navigate to the cluster.
 - **Node Metrics**: From the **Global** view, navigate to the cluster. Select **Nodes**. Find the individual node and click on its name. Click **Node Metrics.**
-- **Workload Metrics**: From the **Global** view, navigate to the project. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Find the individual workload and click on its name. Click **Workload Metrics.**
+- **Workload Metrics**: From the **Global** view, navigate to the project. From the main navigation bar, choose **Resources > Workloads.** (In versions before v2.3.0, choose **Workloads** on the main navigation bar.) Find the individual workload and click on its name. Click **Workload Metrics.**
 - **Pod Metrics**: From the **Global** view, navigate to the project. Select **Workloads > Workloads**. Find the individual workload and click on its name. Find the individual pod and click on its name. Click **Pod Metrics.**
-- **Container Metrics**: From the **Global** view, navigate to the project. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Find the individual workload and click on its name. Find the individual pod and click on its name. Find the individual container and click on its name. Click **Container Metrics.**
+- **Container Metrics**: From the **Global** view, navigate to the project. From the main navigation bar, choose **Resources > Workloads.** (In versions before v2.3.0, choose **Workloads** on the main navigation bar.) Find the individual workload and click on its name. Find the individual pod and click on its name. Find the individual container and click on its name. Click **Container Metrics.**

 Prometheus metrics are displayed and are denoted with the Grafana icon. If you click on the icon, the metrics will open a new tab in Grafana.
@@ -54,7 +54,7 @@ When you go to the Grafana instance, you will be logged in with the username `ad

 1. Go to the **System** project view. This project is where the cluster-level Grafana instance runs.

-1. Click **Apps.** In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar.
+1. Click **Apps.** In versions before v2.2.0, choose **Catalog Apps** on the main navigation bar.

 1. Go to the `cluster-monitoring` application.
@@ -54,7 +54,7 @@ When upgrading the Kubernetes version of a cluster, we recommend that you:

 The restore operation will work on a cluster that is not in a healthy or active state.
 {{% /tab %}}
-{{% tab "Rancher prior to v2.4" %}}
+{{% tab "Rancher before v2.4" %}}
 When upgrading the Kubernetes version of a cluster, we recommend that you:

 1. Take a snapshot.
@@ -57,7 +57,7 @@ These steps describe how to set up a PVC in the namespace where your stateful wo

 1. Go to the project containing a workload that you want to add a persistent volume claim to.

-1. Then click the **Volumes** tab and click **Add Volume**. (In versions prior to v2.3.0, click **Workloads** on the main navigation bar, then **Volumes.**)
+1. Then click the **Volumes** tab and click **Add Volume**. (In versions before v2.3.0, click **Workloads** on the main navigation bar, then **Volumes.**)

 1. Enter a **Name** for the volume claim.
@@ -34,7 +34,7 @@ Persistent volume claims (PVCs) are objects that request storage resources from

 To access persistent storage, a pod must have a PVC mounted as a volume. This PVC lets your deployment application store its data in an external location, so that if a pod fails, it can be replaced with a new pod and continue accessing its data stored externally, as though an outage never occurred.

-Each Rancher project contains a list of PVCs that you've created, available from **Resources > Workloads > Volumes.** (In versions prior to v2.3.0, the PVCs are in the **Volumes** tab.) You can reuse these PVCs when creating deployments in the future.
+Each Rancher project contains a list of PVCs that you've created, available from **Resources > Workloads > Volumes.** (In versions before v2.3.0, the PVCs are in the **Volumes** tab.) You can reuse these PVCs when creating deployments in the future.

 ### PVCs are Required for Both New and Existing Persistent Storage
@@ -66,7 +66,7 @@ These steps describe how to set up a PVC in the namespace where your stateful wo

 1. Go to the project containing a workload that you want to add a PVC to.

-1. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Then select the **Volumes** tab. Click **Add Volume**.
+1. From the main navigation bar, choose **Resources > Workloads.** (In versions before v2.3.0, choose **Workloads** on the main navigation bar.) Then select the **Volumes** tab. Click **Add Volume**.

 1. Enter a **Name** for the volume claim.
@@ -221,7 +221,7 @@ Service Role | The service role provides Kubernetes the permissions it requires
 VPC | Provides isolated network resources utilised by EKS and worker nodes. Rancher can create the VPC resources with the following [VPC Permissions]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/hosted-kubernetes-clusters/eks/#vpc-permissions).

-Resource targeting uses `*` as the ARN of many of the resources created cannot be known prior to creating the EKS cluster in Rancher.
+Resource targeting uses `*` as the ARN of many of the resources created cannot be known before creating the EKS cluster in Rancher.

 ```json
 {
@@ -12,7 +12,7 @@ Follow these steps while creating the vSphere cluster in Rancher:
 {{< img "/img/rancher/vsphere-node-driver-cloudprovider.png" "vsphere-node-driver-cloudprovider">}}

 1. Click on **Edit as YAML**
-1. Insert the following structure to the pre-populated cluster YAML. As of Rancher v2.3+, this structure must be placed under `rancher_kubernetes_engine_config`. In versions prior to v2.3, it has to be defined as a top-level field. Note that the `name` *must* be set to `vsphere`.
+1. Insert the following structure to the pre-populated cluster YAML. As of Rancher v2.3+, this structure must be placed under `rancher_kubernetes_engine_config`. In versions before v2.3, it has to be defined as a top-level field. Note that the `name` *must* be set to `vsphere`.

 ```yaml
 rancher_kubernetes_engine_config: # Required as of Rancher v2.3+
@@ -88,7 +88,7 @@ You can access your cluster after its state is updated to **Active.**
 - `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces

 {{% /tab %}}
-{{% tab "Rancher prior to v2.2.0" %}}
+{{% tab "Rancher before v2.2.0" %}}

 Use Rancher to create a Kubernetes cluster in Azure.
@@ -22,7 +22,7 @@ The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-d
 - **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/)

 {{% /tab %}}
-{{% tab "Rancher prior to v2.2.0" %}}
+{{% tab "Rancher before v2.2.0" %}}

 - **Account Access** stores your account information for authenticating with Azure.
 - **Placement** sets the geographical region where your cluster is hosted and other location metadata.
@@ -58,7 +58,7 @@ You can access your cluster after its state is updated to **Active.**
 - `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces

 {{% /tab %}}
-{{% tab "Rancher prior to v2.2.0" %}}
+{{% tab "Rancher before v2.2.0" %}}

 1. From the **Clusters** page, click **Add Cluster**.
 1. Choose **DigitalOcean**.
@@ -21,7 +21,7 @@ The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-d
 - **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon
 - **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/)
 {{% /tab %}}
-{{% tab "Rancher prior to v2.2.0" %}}
+{{% tab "Rancher before v2.2.0" %}}

 ### Access Token
@@ -76,7 +76,7 @@ You can access your cluster after its state is updated to **Active.**
 - `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces

 {{% /tab %}}
-{{% tab "Rancher prior to v2.2.0" %}}
+{{% tab "Rancher before v2.2.0" %}}

 1. From the **Clusters** page, click **Add Cluster**.
 1. Choose **Amazon EC2**.
@@ -49,7 +49,7 @@ If you need to pass an **IAM Instance Profile Name** (not ARN), for example, whe
 In the **Engine Options** section of the node template, you can configure the Docker daemon. You may want to specify the docker version or a Docker registry mirror.

 {{% /tab %}}
-{{% tab "Rancher prior to v2.2.0" %}}
+{{% tab "Rancher before v2.2.0" %}}

 ### Account Access
@@ -43,7 +43,7 @@ For the fields to be populated, your setup needs to fulfill the [prerequisites.]

 In Rancher v2.3.3+, you can provision VMs with any operating system that supports `cloud-init`. Only YAML format is supported for the [cloud config.](https://cloudinit.readthedocs.io/en/latest/topics/examples.html)

-In Rancher prior to v2.3.3, the vSphere node driver included in Rancher only supported the provisioning of VMs with [RancherOS]({{<baseurl>}}/os/v1.x/en/) as the guest operating system.
+In Rancher before v2.3.3, the vSphere node driver included in Rancher only supported the provisioning of VMs with [RancherOS]({{<baseurl>}}/os/v1.x/en/) as the guest operating system.

 ### Video Walkthrough of v2.3.3 Node Template Features
@@ -33,7 +33,7 @@ Refer to this [how-to guide]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisio
 It must be ensured that the hosts running the Rancher server are able to establish the following network connections:

 - To the vSphere API on the vCenter server (usually port 443/TCP).
-- To the Host API (port 443/TCP) on all ESXi hosts used to instantiate virtual machines for the clusters (*only required with Rancher prior to v2.3.3 or when using the ISO creation method in later versions*).
+- To the Host API (port 443/TCP) on all ESXi hosts used to instantiate virtual machines for the clusters (*only required with Rancher before v2.3.3 or when using the ISO creation method in later versions*).
 - To port 22/TCP and 2376/TCP on the created VMs

 See [Node Networking Requirements]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/node-requirements/#networking-requirements) for a detailed list of port requirements applicable for creating nodes on an infrastructure provider.
@@ -102,11 +102,11 @@ You can access your cluster after its state is updated to **Active.**
 - `Default`, containing the `default` namespace
 - `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
 {{% /tab %}}
-{{% tab "Rancher prior to v2.2.0" %}}
+{{% tab "Rancher before v2.2.0" %}}

 Use Rancher to create a Kubernetes cluster in vSphere.

-For Rancher versions prior to v2.0.4, when you create the cluster, you will also need to follow the steps in [this section](http://localhost:9001/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vpshere-node-template-config/prior-to-2.0.4/#disk-uuids) to enable disk UUIDs.
+For Rancher versions before v2.0.4, when you create the cluster, you will also need to follow the steps in [this section](http://localhost:9001/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vpshere-node-template-config/prior-to-2.0.4/#disk-uuids) to enable disk UUIDs.

 1. From the **Clusters** page, click **Add Cluster**.
 1. Choose **vSphere**.
@@ -116,7 +116,7 @@ For Rancher versions prior to v2.0.4, when you create the cluster, you will also
 1. If you want to dynamically provision persistent storage or other infrastructure later, you will need to enable the vSphere cloud provider by modifying the cluster YAML file. For details, refer to [this section.]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere)
 1. Add one or more [node pools]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/#node-pools) to your cluster. Each node pool uses a node template to provision new nodes. To create a node template, click **Add Node Template** and complete the **vSphere Options** form. For help filling out the form, refer to the vSphere node template configuration reference. Refer to the newest version of the configuration reference that is less than or equal to your Rancher version:
 - [v2.0.4]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.0.4)
-- [prior to v2.0.4]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4)
+- [before v2.0.4]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4)
 1. Review your options to confirm they're correct. Then click **Create** to start provisioning the VMs and Kubernetes services.

 **Result:**
@@ -13,4 +13,4 @@ The vSphere node templates in Rancher were updated in the following Rancher vers
 - [v2.2.0](./v2.2.0)
 - [v2.0.4](./v2.0.4)

-For Rancher versions prior to v2.0.4, refer to [this version.](./prior-to-2.0.4)
+For Rancher versions before v2.0.4, refer to [this version.](./prior-to-2.0.4)
@@ -1,6 +1,6 @@
 ---
-title: vSphere Node Template Configuration in Rancher prior to v2.0.4
-shortTitle: Prior to v2.0.4
+title: vSphere Node Template Configuration in Rancher before v2.0.4
+shortTitle: Before v2.0.4
 weight: 5
 ---
@@ -267,7 +267,7 @@ windows_prefered_cluster: false

 An example cluster config file is included below.

-{{% accordion id="prior-to-v2.3.0-cluster-config-file" label="Example Cluster Config File for Rancher v2.0.0-v2.2.x" %}}
+{{% accordion id="before-v2.3.0-cluster-config-file" label="Example Cluster Config File for Rancher v2.0.0-v2.2.x" %}}
 ```yaml
 addon_job_timeout: 30
 authentication:
@@ -11,7 +11,7 @@ When you create a [custom cluster]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-pr

 You can provision a custom Windows cluster using Rancher by using a mix of Linux and Windows hosts as your cluster nodes.

->**Important:** In versions of Rancher prior to v2.3, support for Windows nodes is experimental. Therefore, it is not recommended to use Windows nodes for production environments if you are using Rancher prior to v2.3.
+>**Important:** In versions of Rancher before v2.3, support for Windows nodes is experimental. Therefore, it is not recommended to use Windows nodes for production environments if you are using Rancher before v2.3.

 This guide walks you through create of a custom cluster that includes three nodes:
@@ -49,7 +49,7 @@ When you create a custom catalog, you will have to configure the catalog to use

 When you launch a new app from a catalog, the app will be managed by the catalog's Helm version. A Helm 2 catalog will use Helm 2 to manage all of the apps, and a Helm 3 catalog will use Helm 3 to manage all apps.

-By default, catalogs are assumed to be deployed using Helm 2. If you run an app in Rancher prior to v2.4.0, then upgrade to Rancher v2.4.0+, the app will still be managed by Helm 2. If the app was already using a Helm 3 Chart (API version 2) it will no longer work in v2.4.0+. You must either downgrade the chart's API version or recreate the catalog to use Helm 3.
+By default, catalogs are assumed to be deployed using Helm 2. If you run an app in Rancher before v2.4.0, then upgrade to Rancher v2.4.0+, the app will still be managed by Helm 2. If the app was already using a Helm 3 Chart (API version 2) it will no longer work in v2.4.0+. You must either downgrade the chart's API version or recreate the catalog to use Helm 3.

 Charts that are specific to Helm 2 should only be added to a Helm 2 catalog, and Helm 3 specific charts should only be added to a Helm 3 catalog.
@@ -44,7 +44,7 @@ Private catalog repositories can be added using credentials like Username and Pa
|
||||
|
||||
For more information on private Git/Helm catalogs, refer to the [custom catalog configuration reference.]({{<baseurl>}}/rancher/v2.0-v2.4/en/catalog/catalog-config)
|
||||
|
||||
1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar.
|
||||
1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions before v2.2.0, you can select **Catalogs** directly in the navigation bar.
|
||||
2. Click **Add Catalog**.
|
||||
3. Complete the form and click **Create**.
|
||||
|
||||
@@ -57,7 +57,7 @@ For more information on private Git/Helm catalogs, refer to the [custom catalog
|
||||
>- [Administrator Global Permissions]({{<baseurl>}}/rancher/v2.0-v2.4/en/admin-settings/rbac/global-permissions/)
|
||||
>- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.0-v2.4/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Catalogs]({{<baseurl>}}/rancher/v2.0-v2.4/en/admin-settings/rbac/global-permissions/) role assigned.
|
||||
|
||||
1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar.
|
||||
1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions before v2.2.0, you can select **Catalogs** directly in the navigation bar.
|
||||
2. Click **Add Catalog**.
|
||||
3. Complete the form. Select the Helm version that will be used to launch all of the apps in the catalog. For more information about the Helm version, refer to [this section.](
|
||||
{{<baseurl>}}/rancher/v2.0-v2.4/en/helm-charts/legacy-catalogs/#catalog-helm-deployment-versions)
|
||||
|
||||
@@ -16,7 +16,7 @@ Within Rancher, there are default catalogs packaged as part of Rancher. These ca
|
||||
>- [Administrator Global Permissions]({{<baseurl>}}/rancher/v2.0-v2.4/en/admin-settings/rbac/global-permissions/)
|
||||
>- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.0-v2.4/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Catalogs]({{<baseurl>}}/rancher/v2.0-v2.4/en/admin-settings/rbac/global-permissions/#custom-global-permissions-reference) role assigned.
|
||||
|
||||
1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar.
|
||||
1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions before v2.2.0, you can select **Catalogs** directly in the navigation bar.
|
||||
|
||||
2. Toggle the default catalogs that you want to be enabled or disabled:
|
||||
|
||||
@@ -24,4 +24,4 @@ Within Rancher, there are default catalogs packaged as part of Rancher. These ca
|
||||
- **Helm Stable:** This catalog, which is maintained by the Kubernetes community, includes native [Helm charts](https://helm.sh/docs/chart_template_guide/). This catalog features the largest pool of apps.
|
||||
- **Helm Incubator:** Similar in user experience to Helm Stable, but this catalog is filled with applications in **beta**.
|
||||
|
||||
**Result**: The chosen catalogs are enabled. Wait a few minutes for Rancher to replicate the catalog charts. When replication completes, you'll be able to see them in any of your projects by selecting **Apps** from the main navigation bar. In versions prior to v2.2.0, within a project, you can select **Catalog Apps** from the main navigation bar.
|
||||
**Result**: The chosen catalogs are enabled. Wait a few minutes for Rancher to replicate the catalog charts. When replication completes, you'll be able to see them in any of your projects by selecting **Apps** from the main navigation bar. In versions before v2.2.0, within a project, you can select **Catalog Apps** from the main navigation bar.
|
||||
|
||||
@@ -27,7 +27,7 @@ Rancher's Global DNS feature provides a way to program an external DNS provider

# Global DNS Providers

Prior to adding in Global DNS entries, you will need to configure access to an external provider.
Before adding in Global DNS entries, you will need to configure access to an external provider.

The following table lists the first version of Rancher each provider debuted.


@@ -29,7 +29,7 @@ Before launching an app, you'll need to either [enable a built-in global catalog

1. From the **Global** view, open the project that you want to deploy an app to.

2. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**.
2. From the main navigation bar, choose **Apps**. In versions before v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**.

3. Find the app that you want to launch, and then click **View Now**.

@@ -48,7 +48,7 @@ Before launching an app, you'll need to either [enable a built-in global catalog

7. Review the files in **Preview**. When you're satisfied, click **Launch**.

**Result**: Your application is deployed to your chosen namespace. You can view the application status from the project's **Workloads** view or **Apps** view. In versions prior to v2.2.0, this is the **Catalog Apps** view.
**Result**: Your application is deployed to your chosen namespace. You can view the application status from the project's **Workloads** view or **Apps** view. In versions before v2.2.0, this is the **Catalog Apps** view.

# Configuration Options


@@ -23,7 +23,7 @@ After an application is deployed, you can easily upgrade to a different template

1. From the **Global** view, navigate to the project that contains the catalog application that you want to upgrade.

1. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**.
1. From the main navigation bar, choose **Apps**. In versions before v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**.

3. Find the application that you want to upgrade, and then click the ⋮ to find **Upgrade**.

@@ -40,7 +40,7 @@ After an application is deployed, you can easily upgrade to a different template
**Result**: Your application is updated. You can view the application status from the project's:

- **Workloads** view
- **Apps** view. In versions prior to v2.2.0, this is the **Catalog Apps** view.
- **Apps** view. In versions before v2.2.0, this is the **Catalog Apps** view.


### Rolling Back Catalog Applications
@@ -49,7 +49,7 @@ After an application has been upgraded, you can easily rollback to a different t

1. From the **Global** view, navigate to the project that contains the catalog application that you want to upgrade.

1. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**.
1. From the main navigation bar, choose **Apps**. In versions before v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**.

3. Find the application that you want to rollback, and then click the ⋮ to find **Rollback**.

@@ -64,7 +64,7 @@ After an application has been upgraded, you can easily rollback to a different t
**Result**: Your application is updated. You can view the application status from the project's:

- **Workloads** view
- **Apps** view. In versions prior to v2.2.0, this is the **Catalog Apps** view.
- **Apps** view. In versions before v2.2.0, this is the **Catalog Apps** view.

### Deleting Catalog Application Deployments


@@ -53,7 +53,7 @@ For that reason, we recommend that for a production-grade architecture, you shou
> The type of cluster that Rancher needs to be installed on depends on the Rancher version.
>
> For Rancher v2.4.x, either an RKE Kubernetes cluster or K3s Kubernetes cluster can be used.
> For Rancher prior to v2.4, an RKE cluster must be used.
> For Rancher before v2.4, an RKE cluster must be used.

For testing or demonstration purposes, you can install Rancher in a single Docker container. In this Docker install, you can use Rancher to set up Kubernetes clusters out-of-the-box. The Docker install allows you to explore the Rancher server functionality, but it is intended to be used for development and testing purposes only.


@@ -17,7 +17,7 @@ Set up the Rancher server's local Kubernetes cluster.
The cluster requirements depend on the Rancher version:

- **In Rancher v2.4.x,** Rancher needs to be installed on a K3s Kubernetes cluster or an RKE Kubernetes cluster.
- **In Rancher prior to v2.4,** Rancher needs to be installed on an RKE Kubernetes cluster.
- **In Rancher before v2.4,** Rancher needs to be installed on an RKE Kubernetes cluster.

For the tutorial to install an RKE Kubernetes cluster, refer to [this page.]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/resources/k8s-tutorials/ha-rke/) For help setting up the infrastructure for a high-availability RKE cluster, refer to [this page.]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-ha)


+2
-2
@@ -57,8 +57,8 @@ For information on enabling experimental features, refer to [this page.]({{<base
| `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials |
| `ingress.configurationSnippet` | "" | `string` - Add additional Nginx configuration. Can be used for proxy configuration. _Note: Available as of v2.0.15, v2.1.10 and v2.2.4_ |
| `ingress.extraAnnotations` | {} | `map` - additional annotations to customize the ingress |
| `letsEncrypt.ingress.class` | "" | `string` - optional ingress class for the cert-manager acmesolver ingress that responds to the Let's Encrypt ACME challenges. Options: traefik, nginx. |
| `noProxy` | "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local" | `string` - comma separated list of hostnames or ip address not to use the proxy |
| `letsEncrypt.ingress.class` | "" | `string` - optional ingress class for the cert-manager acmesolver ingress that responds to the Let's Encrypt ACME challenges. Options: traefik, nginx. |
| `noProxy` | "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local,cattle-system.svc" | `string` - comma separated list of hostnames or ip address not to use the proxy |
| `proxy` | "" | `string` - HTTP[S] proxy server for Rancher |
| `rancherImage` | "rancher/rancher" | `string` - rancher image source |
| `rancherImagePullPolicy` | "IfNotPresent" | `string` - Override imagePullPolicy for rancher server images - "Always", "Never", "IfNotPresent" |

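The proxy-related chart options in the table above can be collected into a values file passed to Helm with `--values` instead of repeated `--set` flags. A minimal sketch (the proxy address is a placeholder; `noProxy` must stay one comma-separated string):

```yaml
# Sketch of a values file for the rancher chart; the proxy host is an example.
proxy: "http://10.0.0.5:8888"
noProxy: "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local,cattle-system.svc"
```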
@@ -100,7 +100,7 @@ You'll use the backup as a restoration point if something goes wrong during upgr
helm repo list

NAME URL
stable https://kubernetes-charts.storage.googleapis.com
stable https://charts.helm.sh/stable
rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
```


+3
-2
@@ -64,7 +64,7 @@ of your Kubernetes cluster running Rancher server. You'll use the snapshot as a
helm repo list

NAME URL
stable https://kubernetes-charts.storage.googleapis.com
stable https://charts.helm.sh/stable
rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
```

@@ -118,8 +118,9 @@ If you are currently running the cert-manger whose version is older than v0.11,
1. Uninstall Rancher

```
helm delete rancher -n cattle-system
helm delete rancher
```
In case this results in an error that the release "rancher" was not found, make sure you are using the correct deployment name. Use `helm list` to list the helm-deployed releases.

2. Uninstall and reinstall `cert-manager` according to the instructions on the [Upgrading Cert-Manager]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/options/upgrading-cert-manager/helm-2-instructions) page.


+1
-1
@@ -35,7 +35,7 @@ During upgrades from Rancher v2.0.6- to Rancher v2.0.7+, all system namespaces a

You can prevent cluster networking issues from occurring during your upgrade to v2.0.7+ by unassigning system namespaces from all of your Rancher projects. Complete this task if you've assigned any of a cluster's system namespaces into a Rancher project.

1. Log into the Rancher UI prior to upgrade.
1. Log into the Rancher UI before upgrade.

1. From the context menu, open the **local** cluster (or any of your other clusters).


+5
-5
@@ -21,7 +21,7 @@ This section describes installing Rancher in five parts:
- [2. Choose your SSL Configuration](#2-choose-your-ssl-configuration)
- [3. Render the Rancher Helm Template](#3-render-the-rancher-helm-template)
- [4. Install Rancher](#4-install-rancher)
- [5. For Rancher versions prior to v2.3.0, Configure System Charts](#5-for-rancher-versions-prior-to-v2-3-0-configure-system-charts)
- [5. For Rancher versions before v2.3.0, Configure System Charts](#5-for-rancher-versions-before-v2-3-0-configure-system-charts)

# 1. Add the Helm Chart Repository

@@ -216,9 +216,9 @@ kubectl -n cattle-system apply -R -f ./rancher

> **Note:** If you don't intend to send telemetry data, opt out [telemetry]({{<baseurl>}}/rancher/v2.0-v2.4/en/faq/telemetry/) during the initial login. Leaving this active in an air-gapped environment can cause issues if the sockets cannot be opened successfully.

# 5. For Rancher versions prior to v2.3.0, Configure System Charts
# 5. For Rancher versions before v2.3.0, Configure System Charts

If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/resources/local-system-charts/).
If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/resources/local-system-charts/).

# Additional Resources

@@ -249,7 +249,7 @@ For security purposes, SSL (Secure Sockets Layer) is required when using Rancher
> - Configure custom CA root certificate to access your services? See [Custom CA root certificate]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/options/custom-ca-root-certificate/).
> - Record all transactions with the Rancher API? See [API Auditing]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/other-installation-methods/single-node-docker/advanced/#api-audit-log).

- For Rancher prior to v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher prior to v2.3.0.]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/resources/local-system-charts/)
- For Rancher before v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher before v2.3.0.]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/resources/local-system-charts/)

Choose from the following options:

@@ -351,7 +351,7 @@ If you are installing Rancher v2.3.0+, the installation is complete.

> **Note:** If you don't intend to send telemetry data, opt out [telemetry]({{<baseurl>}}/rancher/v2.0-v2.4/en/faq/telemetry/) during the initial login.

If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/resources/local-system-charts/).
If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/resources/local-system-charts/).

{{% /tab %}}
{{% /tabs %}}

+1
-1
@@ -9,7 +9,7 @@ aliases:

This section describes how to install a Kubernetes cluster according to our [best practices for the Rancher server environment.]({{<baseurl>}}/rancher/v2.0-v2.4/en/overview/architecture-recommendations/#environment-for-kubernetes-installations) This cluster should be dedicated to run only the Rancher server.

For Rancher prior to v2.4, Rancher should be installed on an [RKE]({{<baseurl>}}/rke/latest/en/) (Rancher Kubernetes Engine) Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.
For Rancher before v2.4, Rancher should be installed on an [RKE]({{<baseurl>}}/rke/latest/en/) (Rancher Kubernetes Engine) Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.

In Rancher v2.4, the Rancher management server can be installed on either an RKE cluster or a K3s Kubernetes cluster. K3s is also a fully certified Kubernetes distribution released by Rancher, but is newer than RKE. We recommend installing Rancher on K3s because K3s is easier to use, and more lightweight, with a binary size of less than 100 MB. The Rancher management server can only be run on a Kubernetes cluster in an infrastructure provider where Kubernetes is installed using RKE or K3s. Use of Rancher on hosted Kubernetes providers, such as EKS, is not supported. Note: After Rancher is installed on an RKE cluster, there is no migration path to a K3s setup at this time.


+1
-1
@@ -34,7 +34,7 @@ helm upgrade --install cert-manager jetstack/cert-manager \
--namespace cert-manager --version v0.15.2 \
--set http_proxy=http://${proxy_host} \
--set https_proxy=http://${proxy_host} \
--set no_proxy=127.0.0.0/8\\,10.0.0.0/8\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local
--set no_proxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local
```

Now you should wait until cert-manager is finished starting up:

+2
-2
@@ -15,7 +15,7 @@ For convenience export the IP address and port of your proxy into an environment
export proxy_host="10.0.0.5:8888"
export HTTP_PROXY=http://${proxy_host}
export HTTPS_PROXY=http://${proxy_host}
export NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
export NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16
```

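The `NO_PROXY` value in the export block above is easy to get wrong when the in-cluster entry is added by hand. A small sketch that builds the value and sanity-checks that every required entry is present (the address ranges are the ones used above; adjust them for your network):

```shell
# Sketch: build NO_PROXY and verify that no required entry was forgotten,
# in particular the in-cluster entry cattle-system.svc.
NO_PROXY="127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16"
for required in 127.0.0.0/8 10.0.0.0/8 cattle-system.svc 172.16.0.0/12 192.168.0.0/16; do
  case ",${NO_PROXY}," in
    *",${required},"*) ;;                 # entry present, nothing to do
    *) echo "missing: ${required}" ;;     # entry absent, flag it
  esac
done
export NO_PROXY
echo "NO_PROXY=${NO_PROXY}"
```

Running it prints the final `NO_PROXY` line and nothing else when all entries are present.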
Next configure apt to use this proxy when installing packages. If you are not using Ubuntu, you have to adapt this step accordingly:
@@ -47,7 +47,7 @@ cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf > /d
[Service]
Environment="HTTP_PROXY=http://${proxy_host}"
Environment="HTTPS_PROXY=http://${proxy_host}"
Environment="NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
Environment="NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16"
EOF
```


+2
-1
@@ -26,6 +26,7 @@ Passing environment variables to the Rancher container can be done using `-e KEY
- `127.0.0.1`
- `0.0.0.0`
- `10.0.0.0/8`
- `cattle-system.svc`
- `.svc`
- `.cluster.local`

@@ -36,6 +37,6 @@ docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
-e HTTP_PROXY="http://192.168.10.1:3128" \
-e HTTPS_PROXY="http://192.168.10.1:3128" \
-e NO_PROXY="localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,192.168.10.0/24,.svc,.cluster.local,example.com" \
-e NO_PROXY="localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,cattle-system.svc,192.168.10.0/24,.svc,.cluster.local,example.com" \
rancher/rancher:latest
```
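Entries such as `.svc` and `.cluster.local` act as domain suffixes, while `cattle-system.svc` is an exact name. Exact matching behavior varies between proxy clients, but a rough sketch of the suffix logic (a hypothetical helper, not Rancher code; real clients also handle CIDR ranges, which plain string matching does not):

```shell
# Rough sketch of NO_PROXY matching: exact match or suffix match per entry.
# Hypothetical helper for illustration only; proxy clients differ in detail.
no_proxy="localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,cattle-system.svc,192.168.10.0/24,.svc,.cluster.local,example.com"
bypasses_proxy() {
  host="$1"
  for entry in $(printf '%s' "$no_proxy" | tr ',' ' '); do
    case "$host" in
      "$entry"|*"$entry") return 0 ;;  # exact match or ends with the entry
    esac
  done
  return 1
}
if bypasses_proxy "rancher.cattle-system.svc"; then
  echo "bypass: rancher.cattle-system.svc"
fi
if ! bypasses_proxy "registry.example.org"; then
  echo "proxy: registry.example.org"
fi
```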
+2
-2
@@ -44,7 +44,7 @@ If you have issues upgrading Rancher, roll it back to its latest known healthy s

1. Using a remote Terminal connection, log into the node running your Rancher Server.

1. Pull the version of Rancher that you were running prior to upgrade. Replace the `<PRIOR_RANCHER_VERSION>` with that version.
1. Pull the version of Rancher that you were running before upgrade. Replace the `<PRIOR_RANCHER_VERSION>` with that version.

For example, if you were running Rancher v2.0.5 before upgrade, pull v2.0.5.

@@ -83,4 +83,4 @@ If you have issues upgrading Rancher, roll it back to its latest known healthy s

1. Wait a few moments and then open Rancher in a web browser. Confirm that the rollback succeeded and that your data is restored.

**Result:** Rancher is rolled back to its version and data state prior to upgrade.
**Result:** Rancher is rolled back to its version and data state before upgrade.

+1
-1
@@ -242,7 +242,7 @@ docker run -d --volumes-from rancher-data \

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, like when you login or interact with a cluster.

> For Rancher versions from v2.2.0 to v2.2.x, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher prior to v2.3.0.]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/resources/local-system-charts/)
> For Rancher versions from v2.2.0 to v2.2.x, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher before v2.3.0.]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/resources/local-system-charts/)

When starting the new Rancher server container, choose from the following options:


@@ -13,7 +13,7 @@ Make sure the node(s) for the Rancher server fulfill the following requirements:
- [Operating Systems and Container Runtime Requirements](#operating-systems-and-container-runtime-requirements)
- [Hardware Requirements](#hardware-requirements)
- [CPU and Memory](#cpu-and-memory)
- [CPU and Memory for Rancher prior to v2.4.0](#cpu-and-memory-for-rancher-prior-to-v2-4-0)
- [CPU and Memory for Rancher before v2.4.0](#cpu-and-memory-for-rancher-before-v2-4-0)
- [Disks](#disks)
- [Networking Requirements](#networking-requirements)
- [Node IP Addresses](#node-ip-addresses)
@@ -68,7 +68,7 @@ Hardware requirements scale based on the size of your Rancher deployment. Provis

These requirements apply to each host in an [RKE Kubernetes cluster where the Rancher server is installed.]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/)

Performance increased in Rancher v2.4.0. For the requirements of Rancher prior to v2.4.0, refer to [this section.](#cpu-and-memory-for-rancher-prior-to-v2-4-0)
Performance increased in Rancher v2.4.0. For the requirements of Rancher before v2.4.0, refer to [this section.](#cpu-and-memory-for-rancher-before-v2-4-0)

| Deployment Size | Clusters | Nodes | vCPUs | RAM |
| --------------- | ---------- | ------------ | -------| ------- |
@@ -109,10 +109,10 @@ These requirements apply to a host with a [single-node]({{<baseurl>}}/rancher/v2
{{% /tab %}}
{{% /tabs %}}

### CPU and Memory for Rancher prior to v2.4.0
### CPU and Memory for Rancher before v2.4.0

{{% accordion label="Click to expand" %}}
These requirements apply to installing Rancher on an RKE Kubernetes cluster prior to Rancher v2.4.0:
These requirements apply to installing Rancher on an RKE Kubernetes cluster before Rancher v2.4.0:

| Deployment Size | Clusters | Nodes | vCPUs | RAM |
| --------------- | --------- | ---------- | ----------------------------------------------- | ----------------------------------------------- |

+5
-5
@@ -23,7 +23,7 @@ This section describes installing Rancher in five parts:
- [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration)
- [C. Render the Rancher Helm Template](#c-render-the-rancher-helm-template)
- [D. Install Rancher](#d-install-rancher)
- [E. For Rancher versions prior to v2.3.0, Configure System Charts](#e-for-rancher-versions-prior-to-v2-3-0-configure-system-charts)
- [E. For Rancher versions before v2.3.0, Configure System Charts](#e-for-rancher-versions-before-v2-3-0-configure-system-charts)

### A. Add the Helm Chart Repository

@@ -209,9 +209,9 @@ kubectl -n cattle-system apply -R -f ./rancher

**Step Result:** If you are installing Rancher v2.3.0+, the installation is complete.

### E. For Rancher versions prior to v2.3.0, Configure System Charts
### E. For Rancher versions before v2.3.0, Configure System Charts

If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/options/local-system-charts/).
If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/options/local-system-charts/).

### Additional Resources

@@ -238,7 +238,7 @@ For security purposes, SSL (Secure Sockets Layer) is required when using Rancher
> - Configure custom CA root certificate to access your services? See [Custom CA root certificate]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/options/chart-options/#additional-trusted-cas).
> - Record all transactions with the Rancher API? See [API Auditing]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/other-installation-methods/single-node-docker/advanced/#api-audit-log).

- For Rancher prior to v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher prior to v2.3.0.]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/options/local-system-charts/)
- For Rancher before v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher before v2.3.0.]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/options/local-system-charts/)

Choose from the following options:

@@ -328,7 +328,7 @@ docker run -d --restart=unless-stopped \

If you are installing Rancher v2.3.0+, the installation is complete.

If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/options/local-system-charts/).
If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in Github, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/options/local-system-charts/).

{{% /tab %}}
{{% /tabs %}}

+1
-1
@@ -64,7 +64,7 @@ kubectl -n cattle-system logs -f rancher-84d886bdbb-s4s69 rancher-audit-log
#### Rancher Web GUI

1. From the context menu, select **Cluster: local > System**.
1. From the main navigation bar, choose **Resources > Workloads.** (In versions prior to v2.3.0, choose **Workloads** on the main navigation bar.) Find the `cattle-system` namespace. Open the `rancher` workload by clicking its link.
1. From the main navigation bar, choose **Resources > Workloads.** (In versions before v2.3.0, choose **Workloads** on the main navigation bar.) Find the `cattle-system` namespace. Open the `rancher` workload by clicking its link.
1. Pick one of the `rancher` pods and select **⋮ > View Logs**.
1. From the **Logs** drop-down, select `rancher-audit-log`.

@@ -37,7 +37,7 @@ aliases:
| `ingress.extraAnnotations` | {} | `map` - additional annotations to customize the ingress |
| `ingress.configurationSnippet` | "" | `string` - Add additional Nginx configuration. Can be used for proxy configuration. _Note: Available as of v2.0.15, v2.1.10 and v2.2.4_ |
| `proxy` | "" | `string` - HTTP[S] proxy server for Rancher |
| `noProxy` | "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16" | `string` - comma separated list of hostnames or ip address not to use the proxy |
| `noProxy` | "127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16" | `string` - comma separated list of hostnames or ip address not to use the proxy |
| `resources` | {} | `map` - rancher pod resource requests & limits |
| `rancherImage` | "rancher/rancher" | `string` - rancher image source |
| `rancherImageTag` | same as chart version | `string` - rancher/rancher image tag |

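The `noProxy` change above adds `cattle-system.svc` to the default value, presumably so that in-cluster requests to Rancher's own service bypass the proxy. A minimal sketch of how these chart values fit together in a proxied install (the proxy host below is a hypothetical example, not from this document):

```yaml
# Hypothetical values fragment for a proxied Rancher install
proxy: "http://proxy.example.com:8080"
# Loopback, cluster CIDRs, and the in-cluster service name must bypass the proxy
noProxy: "127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16"
```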
@@ -31,7 +31,7 @@ Rancher provides several different Helm chart repositories to choose from. We al
<br/>
Instructions on when to select these repos are available below in [Switching to a Different Helm Chart Repository](#switching-to-a-different-helm-chart-repository).

> **Note:** The introduction of the `rancher-latest` and `rancher-stable` Helm Chart repositories was introduced after Rancher v2.1.0, so the `rancher-stable` repository contains some Rancher versions that were never marked as `rancher/rancher:stable`. The versions of Rancher that were tagged as `rancher/rancher:stable` prior to v2.1.0 are v2.0.4, v2.0.6, v2.0.8. Post v2.1.0, all charts in the `rancher-stable` repository will correspond with any Rancher version tagged as `stable`.
> **Note:** The introduction of the `rancher-latest` and `rancher-stable` Helm Chart repositories was introduced after Rancher v2.1.0, so the `rancher-stable` repository contains some Rancher versions that were never marked as `rancher/rancher:stable`. The versions of Rancher that were tagged as `rancher/rancher:stable` before v2.1.0 are v2.0.4, v2.0.6, v2.0.8. Post v2.1.0, all charts in the `rancher-stable` repository will correspond with any Rancher version tagged as `stable`.

### Helm Chart Versions

@@ -60,7 +60,7 @@ After installing Rancher, if you want to change which Helm chart repository to i
helm repo list

NAME                  URL
stable                https://kubernetes-charts.storage.googleapis.com
stable                https://charts.helm.sh/stable
rancher-<CHART_REPO>  https://releases.rancher.com/server-charts/<CHART_REPO>
```

@@ -5,6 +5,6 @@ weight: 4

This section contains information on how to install a Kubernetes cluster that the Rancher server can be installed on.

In Rancher prior to v2.4, the Rancher server needed to run on an RKE Kubernetes cluster.
In Rancher before v2.4, the Rancher server needed to run on an RKE Kubernetes cluster.

In Rancher v2.4.x, Rancher needs to run on either an RKE Kubernetes cluster or a K3s Kubernetes cluster.

@@ -9,7 +9,7 @@ aliases:

This section describes how to install a Kubernetes cluster. This cluster should be dedicated to run only the Rancher server.

For Rancher prior to v2.4, Rancher should be installed on an RKE Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.
For Rancher before v2.4, Rancher should be installed on an RKE Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.

As of Rancher v2.4, the Rancher management server can be installed on either an RKE cluster or a K3s Kubernetes cluster. K3s is also a fully certified Kubernetes distribution released by Rancher, but is newer than RKE. We recommend installing Rancher on K3s because K3s is easier to use, and more lightweight, with a binary size of less than 100 MB. Note: After Rancher is installed on an RKE cluster, there is no migration path to a K3s setup at this time.

@@ -9,7 +9,7 @@ aliases:

The [System Charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS.

In an air gapped installation of Rancher, you will need to configure Rancher to use a local copy of the system charts. This section describes how to use local system charts using a CLI flag in Rancher v2.3.0, and using a Git mirror for Rancher versions prior to v2.3.0.
In an air gapped installation of Rancher, you will need to configure Rancher to use a local copy of the system charts. This section describes how to use local system charts using a CLI flag in Rancher v2.3.0, and using a Git mirror for Rancher versions before v2.3.0.

# Using Local System Charts in Rancher v2.3.0

@@ -17,7 +17,7 @@ In Rancher v2.3.0, a local copy of `system-charts` has been packaged into the `r

Example commands for a Rancher installation with a bundled `system-charts` are included in the [air gap Docker installation]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/air-gap-single-node/install-rancher) instructions and the [air gap Kubernetes installation]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/air-gap-high-availability/install-rancher/) instructions.

# Setting Up System Charts for Rancher Prior to v2.3.0
# Setting Up System Charts for Rancher Before v2.3.0

### A. Prepare System Charts

@@ -1,9 +1,6 @@
---
title: Adding TLS Secrets
weight: 2
aliases:
- /rancher/v2.0-v2.4/en/installation/options/tls-secrets/
- /rancher/v2.0-v2.4/en/installation/resources/encryption/tls-secrets
---

Kubernetes will create all the objects and services for Rancher, but it will not become available until we populate the `tls-rancher-ingress` secret in the `cattle-system` namespace with the certificate and key.
@@ -23,7 +20,7 @@ kubectl -n cattle-system create secret tls tls-rancher-ingress \

> **Note:** If you want to replace the certificate, you can delete the `tls-rancher-ingress` secret using `kubectl -n cattle-system delete secret tls-rancher-ingress` and add a new one using the command shown above. If you are using a private CA signed certificate, replacing the certificate is only possible if the new certificate is signed by the same CA as the certificate currently in use.

### Using a Private CA Signed Certificate
# Using a Private CA Signed Certificate

If you are using a private CA, Rancher requires a copy of the CA certificate which is used by the Rancher Agent to validate the connection to the server.

@@ -35,3 +32,7 @@ kubectl -n cattle-system create secret generic tls-ca \
```

> **Note:** The configured `tls-ca` secret is retrieved when Rancher starts. On a running Rancher installation the updated CA will take effect after new Rancher pods are started.

# Updating a Private CA Certificate

Follow the steps on [this page]({{<baseurl>}}/rancher/v2.x/en/installation/resources/update-ca-cert) to update the SSL certificate of the ingress in a Rancher [high availability Kubernetes installation]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/) or to switch from the default self-signed certificate to a custom certificate.

@@ -0,0 +1,145 @@
---
title: Updating a Private CA Certificate
weight: 10
---

Follow these steps to update the SSL certificate of the ingress in a Rancher [high availability Kubernetes installation]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/) or to switch from the default self-signed certificate to a custom certificate.

A summary of the steps is as follows:

1. Create or update the `tls-rancher-ingress` Kubernetes secret resource with the new certificate and private key.
2. Create or update the `tls-ca` Kubernetes secret resource with the root CA certificate (only required when using a private CA).
3. Update the Rancher installation using the Helm CLI.
4. Reconfigure the Rancher agents to trust the new CA certificate.

The details of these instructions are below.

# 1. Create/update the certificate secret resource

First, concatenate the server certificate followed by any intermediate certificate(s) to a file named `tls.crt` and provide the corresponding certificate key in a file named `tls.key`.

If you are switching the install from using the Rancher self-signed certificate or Let’s Encrypt issued certificates, use the following command to create the `tls-rancher-ingress` secret resource in your Rancher HA cluster:

```
$ kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```

Alternatively, to update an existing certificate secret:

```
$ kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key \
  --dry-run --save-config -o yaml | kubectl apply -f -
```

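Before creating the secret, it can save a debugging round-trip to confirm that `tls.crt` and `tls.key` actually belong together. A minimal sketch using `openssl` (the throwaway self-signed pair is generated here only so the commands are runnable; with your real files, skip the first command):

```shell
# Generate a throwaway cert/key pair purely for illustration.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=rancher.my.org" \
  -keyout tls.key -out tls.crt -days 1 2>/dev/null

# The public key extracted from the certificate must match the one
# derived from the private key.
crt_pub=$(openssl x509 -in tls.crt -noout -pubkey | sha256sum | awk '{print $1}')
key_pub=$(openssl pkey -in tls.key -pubout 2>/dev/null | sha256sum | awk '{print $1}')

if [ "$crt_pub" = "$key_pub" ]; then
  echo "tls.crt and tls.key match"
else
  echo "MISMATCH: tls.crt was not issued for tls.key" >&2
fi
```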
# 2. Create/update the CA certificate secret resource

If the new certificate was signed by a private CA, you will need to copy the corresponding root CA certificate into a file named `cacerts.pem` and create or update the `tls-ca` secret in the `cattle-system` namespace. If the certificate was signed by an intermediate CA, then the `cacerts.pem` must contain both the intermediate and root CA certificates (in this order).

To create the initial secret:

```
$ kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem
```

To update an existing `tls-ca` secret:

```
$ kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem \
  --dry-run --save-config -o yaml | kubectl apply -f -
```

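A common failure mode is getting the bundle order wrong. One way to inspect the order of certificates in `cacerts.pem` is to list their subjects; the two self-signed stand-in certificates below exist only so the sketch runs end to end (use your real intermediate and root in practice):

```shell
# Stand-in certificates purely for illustration.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Example Intermediate CA" \
  -keyout i.key -out intermediate.pem -days 1 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Example Root CA" \
  -keyout r.key -out root.pem -days 1 2>/dev/null

# Intermediate(s) first, root last -- the order the secret will store them in.
cat intermediate.pem root.pem > cacerts.pem

# Print the subjects in bundle order to verify.
openssl crl2pkcs7 -nocrl -certfile cacerts.pem | \
  openssl pkcs7 -print_certs -noout | grep '^subject'
```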
# 3. Reconfigure the Rancher deployment

> Before proceeding, generate an API token in the Rancher UI (<b>User > API & Keys</b>) and save the Bearer Token, which you might need in step 4.

This step is required if Rancher was initially installed with self-signed certificates (`ingress.tls.source=rancher`) or with a Let's Encrypt issued certificate (`ingress.tls.source=letsEncrypt`).

It ensures that the Rancher pods and ingress resources are reconfigured to use the new server and optional CA certificate.

To update the Helm deployment you will need to use the same (`--set`) options that were used during initial installation. Check with:

```
$ helm get values rancher -n cattle-system
```

Also get the version string of the currently deployed Rancher chart:

```
$ helm ls -A
```

Upgrade the Helm application instance using the original configuration values, making sure to specify `ingress.tls.source=secret` as well as the current chart version to prevent an application upgrade.

If the certificate was signed by a private CA, add the `--set privateCA=true` argument as well. Also make sure to read the documentation describing the initial installation using [custom certificates]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/install-rancher-on-Kubernetes/#6-install-rancher-with-helm-and-your-chosen-certificate-option).

```
helm upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --version <DEPLOYED_CHART_VERSION> \
  --set hostname=rancher.my.org \
  --set ingress.tls.source=secret \
  --set ...
```

When the upgrade is completed, navigate to `https://<Rancher_SERVER>/v3/settings/cacerts` to verify that the value matches the CA certificate written in the `tls-ca` secret earlier.

# 4. Reconfigure Rancher agents to trust the private CA

This section covers three methods to reconfigure Rancher agents to trust the private CA. This step is required if either of the following is true:

- Rancher was initially configured to use the Rancher self-signed certificate (`ingress.tls.source=rancher`) or with a Let's Encrypt issued certificate (`ingress.tls.source=letsEncrypt`)
- The root CA certificate for the new custom certificate has changed

### Why is this step required?

When Rancher is configured with a certificate signed by a private CA, the CA certificate chain is downloaded into Rancher agent containers. Agents compare the checksum of the downloaded certificate against the `CATTLE_CA_CHECKSUM` environment variable. This means that, when the private CA certificate is changed on the Rancher server side, the environment variable `CATTLE_CA_CHECKSUM` must be updated accordingly.

### Which method should I choose?

Method 1 is the easiest one but requires all clusters to be connected to Rancher after the certificates have been rotated. This is usually the case if the process is performed right after updating the Rancher deployment (Step 3).

If the clusters have lost connection to Rancher but you have [Authorized Cluster Endpoints](https://rancher.com/docs/rancher/v2.0-v2.4/en/cluster-admin/cluster-access/ace/) enabled, then go with method 2.

Method 3 can be used as a fallback if methods 1 and 2 are infeasible.

### Method 1: Kubectl command

For each cluster under Rancher management (including `local`) run the following command using the Kubeconfig file of the Rancher management cluster (RKE or K3s).

```
kubectl patch clusters <REPLACE_WITH_CLUSTERID> -p '{"status":{"agentImage":"dummy"}}' --type merge
```

This command will cause all Agent Kubernetes resources to be reconfigured with the checksum of the new certificate.

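To rotate every cluster in one pass, the patch can be wrapped in a loop. In this sketch `kubectl` is stubbed with a shell function, and the cluster IDs are made up, so that it runs without a live cluster; drop the stub and substitute real IDs to use it:

```shell
# Stub so the sketch is runnable standalone; remove this line to run for real.
kubectl() { echo "kubectl $*"; }

# Hypothetical cluster IDs -- list the real ones with `kubectl get clusters`.
for id in c-abc12 c-def34 local; do
  kubectl patch clusters "$id" -p '{"status":{"agentImage":"dummy"}}' --type merge
done
```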
### Method 2: Manually update checksum

Manually patch the agent Kubernetes resources by updating the `CATTLE_CA_CHECKSUM` environment variable to the value matching the checksum of the new CA certificate. Generate the new checksum value like so:

```
$ curl -k -s -fL <RANCHER_SERVER>/v3/settings/cacerts | jq -r .value > cacert.tmp
$ sha256sum cacert.tmp | awk '{print $1}'
```

Using a Kubeconfig for each downstream cluster, update the environment variable for the two agent workloads:

```
$ kubectl edit -n cattle-system ds/cattle-node-agent
$ kubectl edit -n cattle-system deployment/cluster-agent
```

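The checksum the agents expect is simply the SHA-256 hex digest of the downloaded CA bundle. The sketch below recomputes it from a stand-in file (real usage would download `cacert.tmp` with the `curl` command above):

```shell
# Stand-in for the bundle fetched from <RANCHER_SERVER>/v3/settings/cacerts.
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'MIIBexample' \
  '-----END CERTIFICATE-----' > cacert.tmp

# CATTLE_CA_CHECKSUM is the SHA-256 hex digest of that file.
CHECKSUM=$(sha256sum cacert.tmp | awk '{print $1}')
echo "$CHECKSUM"
```

As a non-interactive alternative to `kubectl edit`, `kubectl set env` can apply the value in a script, e.g. `kubectl -n cattle-system set env ds/cattle-node-agent CATTLE_CA_CHECKSUM="$CHECKSUM"` (and likewise for the cluster agent deployment).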
### Method 3: Recreate Rancher agents

With this method you are recreating the Rancher agents by running a set of commands on a controlplane node of each downstream cluster.

First, generate the agent definitions as described here: https://gist.github.com/superseb/076f20146e012f1d4e289f5bd1bd4971

Then, connect to a controlplane node of the downstream cluster via SSH, create a Kubeconfig and apply the definitions:
https://gist.github.com/superseb/b14ed3b5535f621ad3d2aa6a4cd6443b

@@ -15,7 +15,7 @@ Add SSL certificates to either projects, namespaces, or both. A project scoped c

1. From the **Global** view, select the project where you want to deploy your ingress.

1. From the main menu, select **Resources > Secrets > Certificates**. Click **Add Certificate**. (For Rancher prior to v2.3, click **Resources > Certificates.**)
1. From the main menu, select **Resources > Secrets > Certificates**. Click **Add Certificate**. (For Rancher before v2.3, click **Resources > Certificates.**)

1. Enter a **Name** for the certificate.

@@ -39,7 +39,7 @@ Add SSL certificates to either projects, namespaces, or both. A project scoped c

- If you added an SSL certificate to the project, the certificate is available for deployments created in any project namespace.
- If you added an SSL certificate to a namespace, the certificate is available only for deployments in that namespace.
- Your certificate is added to the **Resources > Secrets > Certificates** view. (For Rancher prior to v2.3, it is added to **Resources > Certificates.**)
- Your certificate is added to the **Resources > Secrets > Certificates** view. (For Rancher before v2.3, it is added to **Resources > Certificates.**)

## What's Next?

@@ -22,12 +22,12 @@ The way that you manage HPAs is different based on your version of the Kubernete
HPAs are also managed differently based on your version of Rancher:

- **For Rancher v2.3.0+**: You can create, manage, and delete HPAs using the Rancher UI. From the Rancher UI you can configure the HPA to scale based on CPU and memory utilization. For more information, refer to [Managing HPAs with the Rancher UI]({{<baseurl>}}/rancher/v2.0-v2.4/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). To scale the HPA based on custom metrics, you still need to use `kubectl`. For more information, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{<baseurl>}}/rancher/v2.0-v2.4/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus).
- **For Rancher Prior to v2.3.0:** To manage and configure HPAs, you need to use `kubectl`. For instructions on how to create, manage, and scale HPAs, refer to [Managing HPAs with kubectl]({{<baseurl>}}/rancher/v2.0-v2.4/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl).
- **For Rancher Before v2.3.0:** To manage and configure HPAs, you need to use `kubectl`. For instructions on how to create, manage, and scale HPAs, refer to [Managing HPAs with kubectl]({{<baseurl>}}/rancher/v2.0-v2.4/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl).

You might have additional HPA installation steps if you are using an older version of Rancher:

- **For Rancher v2.0.7+:** Clusters created in Rancher v2.0.7 and higher automatically have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use HPA.
- **For Rancher Prior to v2.0.7:** Clusters created in Rancher prior to v2.0.7 don't automatically have the requirements needed to use HPA. For instructions on installing HPA for these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{<baseurl>}}/rancher/v2.0-v2.4/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7).
- **For Rancher Before v2.0.7:** Clusters created in Rancher before v2.0.7 don't automatically have the requirements needed to use HPA. For instructions on installing HPA for these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{<baseurl>}}/rancher/v2.0-v2.4/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7).

## Testing HPAs with a Service Deployment

@@ -5,7 +5,7 @@ aliases:
- /rancher/v2.0-v2.4/en/k8s-in-rancher/horizontal-pod-autoscaler/hpa-for-rancher-before-2_0_7
---

This section describes how to manually install HPAs for clusters created with Rancher prior to v2.0.7. This section also describes how to configure your HPA to scale up or down, and how to assign roles to your HPA.
This section describes how to manually install HPAs for clusters created with Rancher before v2.0.7. This section also describes how to configure your HPA to scale up or down, and how to assign roles to your HPA.

Before you can use HPA in your Kubernetes cluster, you must fulfill some requirements.

@@ -17,7 +17,7 @@ This section describes HPA management with `kubectl`. This document has instruct

In Rancher v2.3.x, you can create, view, and delete HPAs from the Rancher UI. You can also configure them to scale based on CPU or memory usage from the Rancher UI. For more information, refer to [Managing HPAs with the Rancher UI]({{<baseurl>}}/rancher/v2.0-v2.4/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). For scaling HPAs based on other metrics than CPU or memory, you still need `kubectl`.

### Note For Rancher Prior to v2.0.7
### Note For Rancher Before v2.0.7

Clusters created with older versions of Rancher don't automatically have all the requirements to create an HPA. To install an HPA on these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{<baseurl>}}/rancher/v2.0-v2.4/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7).

@@ -10,7 +10,7 @@ aliases:

Ingress can be added for workloads to provide load balancing, SSL termination and host/path based routing. When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a [Global DNS entry]({{<baseurl>}}/rancher/v2.0-v2.4/en/catalog/globaldns/).

1. From the **Global** view, open the project that you want to add ingress to.
1. Click **Resources** in the main navigation bar. Click the **Load Balancing** tab. (In versions prior to v2.3.0, just click the **Load Balancing** tab.) Then click **Add Ingress**.
1. Click **Resources** in the main navigation bar. Click the **Load Balancing** tab. (In versions before v2.3.0, just click the **Load Balancing** tab.) Then click **Add Ingress**.
1. Enter a **Name** for the ingress.
1. Select an existing **Namespace** from the drop-down list. Alternatively, you can create a new namespace on the fly by clicking **Add to a new namespace**.
1. Create ingress forwarding **Rules**. For help configuring the rules, refer to [this section.](#ingress-rule-configuration) If any of your ingress rules handle requests for encrypted ports, add a certificate to encrypt/decrypt communications.

@@ -24,7 +24,7 @@ Currently, deployments pull the private registry credentials automatically only

1. From the **Global** view, select the project containing the namespace(s) where you want to add a registry.

1. From the main menu, click **Resources > Secrets > Registry Credentials.** (For Rancher prior to v2.3, click **Resources > Registries.)**
1. From the main menu, click **Resources > Secrets > Registry Credentials.** (For Rancher before v2.3, click **Resources > Registries.)**

1. Click **Add Registry.**

@@ -53,7 +53,7 @@ You can deploy a workload with an image from a private registry through the Ranc
To deploy a workload with an image from your private registry,

1. Go to the project view,
1. Click **Resources > Workloads.** In versions prior to v2.3.0, go to the **Workloads** tab.
1. Click **Resources > Workloads.** In versions before v2.3.0, go to the **Workloads** tab.
1. Click **Deploy.**
1. Enter a unique name for the workload and choose a namespace.
1. In the **Docker Image** field, enter the URL of the path to the Docker image in your private registry. For example, if your private registry is on Quay.io, you could use `quay.io/<Quay profile name>/<Image name>`.

@@ -13,7 +13,7 @@ However, you also have the option of creating additional Service Discovery recor

1. From the **Global** view, open the project that you want to add a DNS record to.

1. Click **Resources** in the main navigation bar. Click the **Service Discovery** tab. (In versions prior to v2.3.0, just click the **Service Discovery** tab.) Then click **Add Record**.
1. Click **Resources** in the main navigation bar. Click the **Service Discovery** tab. (In versions before v2.3.0, just click the **Service Discovery** tab.) Then click **Add Record**.

1. Enter a **Name** for the DNS record. This name is used for DNS resolution.

@@ -9,7 +9,7 @@ A _sidecar_ is a container that extends or enhances the main container in a pod.

1. From the **Global** view, open the project running the workload you want to add a sidecar to.

1. Click **Resources > Workloads.** In versions prior to v2.3.0, select the **Workloads** tab.
1. Click **Resources > Workloads.** In versions before v2.3.0, select the **Workloads** tab.

1. Find the workload that you want to extend. Select **⋮ icon (...) > Add a Sidecar**.

@@ -11,7 +11,7 @@ Deploy a workload to run an application in one or more containers.

1. From the **Global** view, open the project that you want to deploy a workload to.

1. Click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) From the **Workloads** view, click **Deploy**.
1. Click **Resources > Workloads.** (In versions before v2.3.0, click the **Workloads** tab.) From the **Workloads** view, click **Deploy**.

1. Enter a **Name** for the workload.

@@ -32,7 +32,7 @@ We don't recommend installing Rancher in a single Docker container, because if t

As of v2.4, Rancher needs to be installed on either a high-availability [RKE (Rancher Kubernetes Engine)]({{<baseurl>}}/rke/latest/en/) Kubernetes cluster, or a high-availability [K3s (Lightweight Kubernetes)]({{<baseurl>}}/k3s/latest/en/) Kubernetes cluster. Both RKE and K3s are fully certified Kubernetes distributions.

Rancher versions prior to v2.4 need to be installed on an RKE cluster.
Rancher versions before v2.4 need to be installed on an RKE cluster.

### K3s Kubernetes Cluster Installations

@@ -45,7 +45,7 @@ The option to install Rancher on a K3s cluster is a feature introduced in Ranche

### RKE Kubernetes Cluster Installations

If you are installing Rancher prior to v2.4, you will need to install Rancher on an RKE cluster, in which the cluster data is stored on each node with the etcd role. As of Rancher v2.4, there is no migration path to transition the Rancher server from an RKE cluster to a K3s cluster. All versions of the Rancher server, including v2.4+, can be installed on an RKE cluster.
If you are installing Rancher before v2.4, you will need to install Rancher on an RKE cluster, in which the cluster data is stored on each node with the etcd role. As of Rancher v2.4, there is no migration path to transition the Rancher server from an RKE cluster to a K3s cluster. All versions of the Rancher server, including v2.4+, can be installed on an RKE cluster.

In an RKE installation, the cluster data is replicated on each of three etcd nodes in the cluster, providing redundancy and data duplication in case one of the nodes fails.

@@ -99,7 +99,7 @@ Select your provider's tab below and follow the directions.
{{% tab "GitHub" %}}
1. From the **Global** view, navigate to the project that you want to configure pipelines.

1. Select **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**.
1. Select **Tools > Pipelines** in the navigation bar. In versions before v2.2.0, you can select **Resources > Pipelines**.

1. Follow the directions displayed to **Setup a Github application**. Rancher redirects you to Github to setup an OAuth App in Github.

@@ -116,7 +116,7 @@ _Available as of v2.1.0_

1. From the **Global** view, navigate to the project that you want to configure pipelines.

1. Select **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**.
1. Select **Tools > Pipelines** in the navigation bar. In versions before v2.2.0, you can select **Resources > Pipelines**.

1. Follow the directions displayed to **Setup a GitLab application**. Rancher redirects you to GitLab.

@@ -180,7 +180,7 @@ After the version control provider is authorized, you are automatically re-direc

1. From the **Global** view, navigate to the project that you want to configure pipelines.

1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. Click on **Configure Repositories**.

@@ -198,7 +198,7 @@ Now that repositories are added to your project, you can start configuring the p

1. From the **Global** view, navigate to the project that you want to configure pipelines.

1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. Find the repository that you want to set up a pipeline for.

@@ -241,7 +241,7 @@ The configuration reference also covers how to configure:

# Running your Pipelines

Run your pipeline for the first time. From the project view in Rancher, go to **Resources > Pipelines.** (In versions prior to v2.3.0, go to the **Pipelines** tab.) Find your pipeline and select the vertical **⋮ > Run**.
Run your pipeline for the first time. From the project view in Rancher, go to **Resources > Pipelines.** (In versions before v2.3.0, go to the **Pipelines** tab.) Find your pipeline and select the vertical **⋮ > Run**.

During this initial run, your pipeline is tested, and the following pipeline components are deployed to your project as workloads in a new namespace dedicated to the pipeline:

@@ -267,7 +267,7 @@ Available Events:

1. From the **Global** view, navigate to the project that you want to modify the event trigger for the pipeline.

1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. Find the repository for which you want to modify the event triggers. Select the vertical **⋮ > Setting**.

@@ -393,7 +393,7 @@ This section covers the following topics:

1. From the **Global** view, navigate to the project that you want to configure a pipeline trigger rule.

1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. From the repository for which you want to manage trigger rules, select the vertical **⋮ > Edit Config**.

@@ -411,7 +411,7 @@ This section covers the following topics:

1. From the **Global** view, navigate to the project that you want to configure a stage trigger rule.

1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. From the repository for which you want to manage trigger rules, select the vertical **⋮ > Edit Config**.

@@ -436,7 +436,7 @@ This section covers the following topics:

1. From the **Global** view, navigate to the project in which you want to configure a stage trigger rule.

-1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. From the repository for which you want to manage trigger rules, select the vertical **⋮ > Edit Config**.
@@ -491,7 +491,7 @@ When configuring a pipeline, certain [step types](#step-types) allow you to use

1. From the **Global** view, navigate to the project in which you want to configure pipelines.

-1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. From the pipeline for which you want to edit build triggers, select **⋮ > Edit Config**.
@@ -534,7 +534,7 @@ Create a secret in the same project as your pipeline, or explicitly in the names

1. From the **Global** view, navigate to the project in which you want to configure pipelines.

-1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. From the pipeline for which you want to edit build triggers, select **⋮ > Edit Config**.
@@ -584,7 +584,7 @@ Variable Name | Description

# Global Pipeline Execution Settings

-After configuring a version control provider, there are several options that can be configured globally on how pipelines are executed in Rancher. These settings can be edited by selecting **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**.
+After configuring a version control provider, there are several global options for how pipelines are executed in Rancher. These settings can be edited by selecting **Tools > Pipelines** in the navigation bar. In versions before v2.2.0, you can select **Resources > Pipelines**.

- [Executor Quota](#executor-quota)
- [Resource Quota for Executors](#resource-quota-for-executors)
@@ -37,7 +37,7 @@ You can set up your pipeline to run a series of stages and steps to test your co

1. Go to the project you want this pipeline to run in.

-2. Click **Resources > Pipelines.** In versions prior to v2.3.0,click **Workloads > Pipelines.**
+2. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

4. Click the **Add Pipeline** button.
@@ -26,7 +26,7 @@ By default, the example pipeline repositories are disabled. Enable one (or more)

1. From the **Global** view, navigate to the project in which you want to test out pipelines.

-1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. Click **Configure Repositories**.
@@ -52,7 +52,7 @@ After enabling an example repository, review the pipeline to see how it is set u

1. From the **Global** view, navigate to the project in which you want to test out pipelines.

-1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. Find the example repository and select the vertical **⋮**. There are two ways to view the pipeline:
* **Rancher UI**: Click on **Edit Config** to view the stages and steps of the pipeline.
@@ -64,7 +64,7 @@ After enabling an example repository, run the pipeline to see how it works.

1. From the **Global** view, navigate to the project in which you want to test out pipelines.

-1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.**
+1. Click **Resources > Pipelines.** In versions before v2.3.0, click **Workloads > Pipelines.**

1. Find the example repository and select the vertical **⋮ > Run**.
@@ -15,7 +15,7 @@ This section assumes that you understand how persistent storage works in Kuberne

### A. Configuring Persistent Data for Docker Registry

-1. From the project that you're configuring a pipeline for, and click **Resources > Workloads.** In versions prior to v2.3.0, select the **Workloads** tab.
+1. From the project that you're configuring a pipeline for, click **Resources > Workloads.** In versions before v2.3.0, select the **Workloads** tab.

1. Find the `docker-registry` workload and select **⋮ > Edit**.
@@ -61,7 +61,7 @@ This section assumes that you understand how persistent storage works in Kuberne

### B. Configuring Persistent Data for Minio

-1. From the project view, click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) Find the `minio` workload and select **⋮ > Edit**.
+1. From the project view, click **Resources > Workloads.** (In versions before v2.3.0, click the **Workloads** tab.) Find the `minio` workload and select **⋮ > Edit**.

1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:
@@ -27,7 +27,7 @@ Edit [container default resource limit]({{<baseurl>}}/rancher/v2.0-v2.4/en/k8s-i

When the default container resource limit is set at the project level, it is propagated to any namespace created in the project after the limit has been set. The limit is not automatically propagated to namespaces that already exist in the project; for those, you must manually set the default container resource limit for it to be used when creating containers.

-> **Note:** Prior to v2.2.0, you could not launch catalog applications that did not have any limits set. With v2.2.0, you can set a default container resource limit on a project and launch any catalog applications.
+> **Note:** Before v2.2.0, you could not launch catalog applications that did not have any limits set. With v2.2.0, you can set a default container resource limit on a project and launch any catalog applications.

Once a container default resource limit is configured on a namespace, the default is pre-populated for any containers created in that namespace. These limits/reservations can always be overridden during workload creation.
@@ -54,7 +54,7 @@ For information on other default alerts, refer to the section on [cluster-level

>**Prerequisite:** Before you can receive project alerts, you must add a notifier.

-1. From the **Global** view, navigate to the project that you want to configure project alerts for. Select **Tools > Alerts**. In versions prior to v2.2.0, you can choose **Resources > Alerts**.
+1. From the **Global** view, navigate to the project that you want to configure project alerts for. Select **Tools > Alerts**. In versions before v2.2.0, you can choose **Resources > Alerts**.

1. Click **Add Alert Group**.
@@ -76,7 +76,7 @@ For information on other default alerts, refer to the section on [cluster-level

# Managing Project Alerts

-To manage project alerts, browse to the project that alerts you want to manage. Then select **Tools > Alerts**. In versions prior to v2.2.0, you can choose **Resources > Alerts**. You can:
+To manage project alerts, browse to the project whose alerts you want to manage. Then select **Tools > Alerts**. In versions before v2.2.0, you can choose **Resources > Alerts**. You can:

- Deactivate/Reactivate alerts
- Edit alert settings
@@ -60,7 +60,7 @@ Logs that are sent to your logging service are from the following locations:

1. From the **Global** view, navigate to the project for which you want to configure logging.

-1. Select **Tools > Logging** in the navigation bar. In versions prior to v2.2.0, you can choose **Resources > Logging**.
+1. Select **Tools > Logging** in the navigation bar. In versions before v2.2.0, you can choose **Resources > Logging**.

1. Select a logging service and enter the configuration. Refer to the specific service for detailed configuration. Rancher supports the following services:
@@ -19,7 +19,7 @@ For this workload, you'll be deploying the application Rancher Hello-World.

3. Open the **Project: Default** project.

-4. Click **Resources > Workloads.** In versions prior to v2.3.0, click **Workloads > Workloads.**
+4. Click **Resources > Workloads.** In versions before v2.3.0, click **Workloads > Workloads.**

5. Click **Deploy**.
@@ -49,7 +49,7 @@ Now that the application is up and running it needs to be exposed so that other

3. Open the **Default** project.

-4. Click **Resources > Workloads > Load Balancing.** In versions prior to v2.3.0, click the **Workloads** tab. Click on the **Load Balancing** tab.
+4. Click **Resources > Workloads > Load Balancing.** In versions before v2.3.0, click the **Workloads** tab, then the **Load Balancing** tab.

5. Click **Add Ingress**.
@@ -19,7 +19,7 @@ For this workload, you'll be deploying the application Rancher Hello-World.

3. Open the **Project: Default** project.

-4. Click **Resources > Workloads.** In versions prior to v2.3.0, click **Workloads > Workloads.**
+4. Click **Resources > Workloads.** In versions before v2.3.0, click **Workloads > Workloads.**

5. Click **Deploy**.
@@ -44,7 +44,7 @@ kernel.keys.root_maxbytes=25000000

Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.

### Configure `etcd` user and group

-A user account and group for the **etcd** service is required to be setup prior to installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.
+A user account and group for the **etcd** service must be set up before installing RKE. The **uid** and **gid** of the **etcd** user are used in the RKE **config.yml** to set the proper permissions for files and directories at installation time.

#### Create `etcd` user and group

To create the **etcd** group, run the following console commands.
@@ -44,7 +44,7 @@ kernel.keys.root_maxbytes=25000000

Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.

### Configure `etcd` user and group

-A user account and group for the **etcd** service is required to be setup prior to installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.
+A user account and group for the **etcd** service must be set up before installing RKE. The **uid** and **gid** of the **etcd** user are used in the RKE **config.yml** to set the proper permissions for files and directories at installation time.

#### Create `etcd` user and group

To create the **etcd** group, run the following console commands.
@@ -71,7 +71,7 @@ In the image below, the `web-deployment.yml` and `web-service.yml` files [create

Just as you can create an alias for Rancher v1.6 services, you can do the same for Rancher v2.x workloads. Similarly, you can also create DNS records pointing to services running externally, using either their hostname or IP address. These DNS records are Kubernetes service objects.

-Using the v2.x UI, use the context menu to navigate to the `Project` view. Then click **Resources > Workloads > Service Discovery.** (In versions prior to v2.3.0, click the **Workloads > Service Discovery** tab.) All existing DNS records created for your workloads are listed under each namespace.
+Using the v2.x UI, use the context menu to navigate to the `Project` view. Then click **Resources > Workloads > Service Discovery.** (In versions before v2.3.0, click the **Workloads > Service Discovery** tab.) All existing DNS records created for your workloads are listed under each namespace.

Click **Add Record** to create new DNS records. Then view the various options supported to link to external services or to create aliases for another workload, DNS record, or set of pods.
@@ -74,14 +74,14 @@ Although Rancher v2.x supports HTTP and HTTPS hostname and path-based load balan

## Deploying Ingress

-You can launch a new load balancer to replace your load balancer from v1.6. Using the Rancher v2.x UI, browse to the applicable project and choose **Resources > Workloads > Load Balancing.** (In versions prior to v2.3.0, click **Workloads > Load Balancing.**) Then click **Deploy**. During deployment, you can choose a target project or namespace.
+You can launch a new load balancer to replace your load balancer from v1.6. Using the Rancher v2.x UI, browse to the applicable project and choose **Resources > Workloads > Load Balancing.** (In versions before v2.3.0, click **Workloads > Load Balancing.**) Then click **Deploy**. During deployment, you can choose a target project or namespace.

>**Prerequisite:** Before deploying Ingress, you must have a workload deployed that's running a scale of two or more pods.
>



-For balancing between these two pods, you must create a Kubernetes Ingress rule. To create this rule, navigate to your cluster and project, and click **Resources > Workloads > Load Balancing.** (In versions prior to v2.3.0, click **Workloads > Load Balancing.**) Then click **Add Ingress**. This GIF below depicts how to add Ingress to one of your projects.
+For balancing between these two pods, you must create a Kubernetes Ingress rule. To create this rule, navigate to your cluster and project, and click **Resources > Workloads > Load Balancing.** (In versions before v2.3.0, click **Workloads > Load Balancing.**) Then click **Add Ingress**. The GIF below depicts how to add Ingress to one of your projects.

<figcaption>Browsing to Load Balancer Tab and Adding Ingress</figcaption>
@@ -263,7 +263,7 @@ Use the following Rancher CLI commands to deploy your application using Rancher

{{% /tab %}}
{{% /tabs %}}

-Following importation, you can view your v1.6 services in the v2.x UI as Kubernetes manifests by using the context menu to select `<CLUSTER> > <PROJECT>` that contains your services. The imported manifests will display on the **Resources > Workloads** and on the tab at **Resources > Workloads > Service Discovery.** (In Rancher v2.x prior to v2.3.0, these are on the **Workloads** and **Service Discovery** tabs in the top navigation bar.)
+After import, you can view your v1.6 services in the v2.x UI as Kubernetes manifests by using the context menu to select the `<CLUSTER> > <PROJECT>` that contains your services. The imported manifests display under **Resources > Workloads** and under **Resources > Workloads > Service Discovery.** (In Rancher v2.x before v2.3.0, these are on the **Workloads** and **Service Discovery** tabs in the top navigation bar.)

<figcaption>Imported Services</figcaption>
@@ -87,7 +87,7 @@ Rancher schedules pods to the node you select if 1) there are compute resource a

If you expose the workload using a NodePort that conflicts with another workload, the deployment gets created successfully, but no NodePort service is created. Therefore, the workload isn't exposed outside of the cluster.

-After the workload is created, you can confirm that the pods are scheduled to your chosen node. From the project view, click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) Click the **Group by Node** icon to sort your workloads by node. Note that both Nginx pods are scheduled to the same node.
+After the workload is created, you can confirm that the pods are scheduled to your chosen node. From the project view, click **Resources > Workloads.** (In versions before v2.3.0, click the **Workloads** tab.) Click the **Group by Node** icon to sort your workloads by node. Note that both Nginx pods are scheduled to the same node.


@@ -70,7 +70,7 @@ The `restricted-admin` permissions are as follows:

### Upgrading from Rancher with a Hidden Local Cluster

-Prior to Rancher v2.5, it was possible to run the Rancher server using this flag to hide the local cluster:
+Before Rancher v2.5, it was possible to run the Rancher server using this flag to hide the local cluster:

```
--add-local=false
```
@@ -19,7 +19,7 @@ Backups are created as .tar.gz files. These files can be pushed to S3 or Minio,

1. In the Rancher UI, go to the **Cluster Explorer.**
1. Click **Apps.**
1. Click `rancher-backup`.
1. Click **Rancher Backups.**
1. Configure the default storage location. For help, refer to the [storage configuration section.](../configuration/storage-config)

### 2. Perform a Backup
@@ -48,9 +48,15 @@ A restore is performed by creating a Restore custom resource.

2. Cluster-scoped resources
3. Namespaced resources
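
The Restore custom resource mentioned above can be sketched as follows. This is a hedged example for the `rancher-backup` operator; the resource name and `backupFilename` are hypothetical placeholders — substitute the name of the `.tar.gz` file produced by your backup:

```yaml
# Hedged example of a Restore custom resource for the rancher-backup operator.
# metadata.name and backupFilename are placeholders; use the name of the
# .tar.gz file produced by your Backup.
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-migration
spec:
  backupFilename: rancher-backup-2021-03-01.tar.gz
```

Apply it with `kubectl apply -f restore.yaml`; once the restore completes, deleting the resource (for example with `kubectl delete -f restore.yaml`) frees the name for reuse.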

### Logs

To check how the restore is progressing, you can check the logs of the operator. Follow these steps to get the logs:

```
kubectl get pods -n cattle-resources-system
kubectl logs <pod name from above command> -n cattle-resources-system -f
```

### Cleanup

If you created the restore resource with kubectl, remove the resource to prevent a naming conflict with future restores.
@@ -13,7 +13,7 @@ In this guide, we recommend best practices for cluster-level logging and applica

# Changes in Logging in Rancher v2.5

-Prior to Rancher v2.5, logging in Rancher has historically been a pretty static integration. There were a fixed list of aggregators to choose from (ElasticSearch, Splunk, Kafka, Fluentd and Syslog), and only two configuration points to choose (Cluster-level and Project-level).
+Before Rancher v2.5, logging in Rancher was a fairly static integration. There was a fixed list of aggregators to choose from (Elasticsearch, Splunk, Kafka, Fluentd and Syslog), and only two configuration points to choose from (cluster-level and project-level).

Logging in 2.5 has been completely overhauled to provide a more flexible experience for log aggregation. With the new logging feature, administrators and users alike can deploy logging that meets fine-grained collection criteria while offering a wider array of destinations and configuration options.
@@ -21,7 +21,7 @@ This section lists the tests that are skipped in the permissive test profile for

| 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
| 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
| 1.2.34 | Ensure that encryption providers are appropriately configured (Not Scored) | Enabling encryption changes how data can be recovered as data is encrypted. |
-| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true. |
+| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Scored) | System-level configurations are required before provisioning the cluster in order for this argument to be set to true. |
| 4.2.10 | Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
| 5.1.5 | Ensure that default service accounts are not actively used. (Scored) | Kubernetes provides default service accounts to be used. |
| 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Scored) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
@@ -37,7 +37,7 @@ Because the Kubernetes version is now included in the snapshot, it is possible t

The multiple components of the snapshot allow you to select from the following options if you need to restore a cluster from a snapshot:

-- **Restore just the etcd contents:** This restore is similar to restoring to snapshots in Rancher prior to v2.4.0.
+- **Restore just the etcd contents:** This restore is similar to restoring from snapshots in Rancher before v2.4.0.
- **Restore etcd and Kubernetes version:** This option should be used if a Kubernetes upgrade is the reason that your cluster is failing, and you haven't made any cluster configuration changes.
- **Restore etcd, Kubernetes version and cluster configuration:** This option should be used if you changed both the Kubernetes version and cluster configuration when upgrading.
@@ -163,4 +163,4 @@ This option is not available directly in the UI, and is only available through t

# Enabling Snapshot Features for Clusters Created Before Rancher v2.2.0

-If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Rancher, you must [edit the cluster]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/restoring-etcd/).
+If you have any Rancher-launched Kubernetes clusters that were created before v2.2.0, you must [edit the cluster]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/editing-clusters/) and _save_ it after upgrading Rancher in order to enable the updated snapshot features. Even if you were already creating snapshots before v2.2.0, you must do this step, as the older snapshots will not be available to [back up and restore etcd through the UI]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/restoring-etcd/).
@@ -147,7 +147,7 @@ The timeout given to each pod for cleaning things up, so they will have chance t

The amount of time drain should continue to wait before giving up.

->**Kubernetes Known Issue:** The [timeout setting](https://github.com/kubernetes/kubernetes/pull/64378) was not enforced while draining a node prior to Kubernetes 1.12.
+>**Kubernetes Known Issue:** The [timeout setting](https://github.com/kubernetes/kubernetes/pull/64378) was not enforced while draining a node before Kubernetes 1.12.

### Drained and Cordoned State
@@ -30,7 +30,7 @@ If your Kubernetes cluster is broken, you can restore the cluster from a snapsho

Snapshots are composed of the cluster data in etcd, the Kubernetes version, and the cluster configuration in the `cluster.yml`. These components allow you to select from the following options when restoring a cluster from a snapshot:

-- **Restore just the etcd contents:** This restore is similar to restoring to snapshots in Rancher prior to v2.4.0.
+- **Restore just the etcd contents:** This restore is similar to restoring from snapshots in Rancher before v2.4.0.
- **Restore etcd and Kubernetes version:** This option should be used if a Kubernetes upgrade is the reason that your cluster is failing, and you haven't made any cluster configuration changes.
- **Restore etcd, Kubernetes version and cluster configuration:** This option should be used if you changed both the Kubernetes version and cluster configuration when upgrading.
@@ -82,4 +82,4 @@ If the group of etcd nodes loses quorum, the Kubernetes cluster will report a fa

# Enabling Snapshot Features for Clusters Created Before Rancher v2.2.0

-If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Rancher, you must [edit the cluster]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/restoring-etcd/).
+If you have any Rancher-launched Kubernetes clusters that were created before v2.2.0, you must [edit the cluster]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/editing-clusters/) and _save_ it after upgrading Rancher in order to enable the updated snapshot features. Even if you were already creating snapshots before v2.2.0, you must do this step, as the older snapshots will not be available to [back up and restore etcd through the UI]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/restoring-etcd/).
@@ -507,7 +507,7 @@ Service Role | The service role provides Kubernetes the permissions it requires

VPC | Provides isolated network resources utilised by EKS and worker nodes. Rancher can create the VPC resources with the following [VPC Permissions]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/#vpc-permissions).

-Resource targeting uses `*` as the ARN of many of the resources created cannot be known prior to creating the EKS cluster in Rancher.
+Resource targeting uses `*`, because the ARNs of many of the resources created cannot be known before creating the EKS cluster in Rancher.

```json
{
@@ -3,7 +3,7 @@ title: Registering Existing Clusters

weight: 6
---

-_Available of of v2.5_
+_Available as of v2.5_

The cluster registration feature replaced the feature to import clusters.
@@ -10,6 +10,6 @@ Fleet is GitOps at scale. For more information, refer to the [Fleet section.](./

### Multi-cluster Apps

-In Rancher prior to v2.5, the multi-cluster apps feature was used to deploy applications across clusters. The multi-cluster apps feature is deprecated, but still available in Rancher v2.5.
+In Rancher before v2.5, the multi-cluster apps feature was used to deploy applications across clusters. The multi-cluster apps feature is deprecated, but still available in Rancher v2.5.

Refer to the documentation [here.](./multi-cluster-apps)
@@ -65,7 +65,7 @@ There are also separate instructions for installing Rancher in an air gap enviro

| Level of Internet Access | Kubernetes Installation - Strongly Recommended | Docker Installation |
| ---------------------------------- | ------------------------------ | ---------- |
| With direct access to the Internet | [Docs]({{<baseurl>}}/rancher/v2.5/en/installation/install-rancher-on-k8s/) | [Docs]({{<baseurl>}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker) |
-| Behind an HTTP proxy | These [docs,]({{<baseurl>}}/rancher/v2.5/en/installation/install-rancher-on-k8s/) plus this [configuration]({{<baseurl>}}/rancher/v2.5/en/installation/install-rancher-on-k8s/chart-options/#http-proxy) | These [docs,]({{<baseurl>}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker) plus this [configuration]({{<baseurl>}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker/proxy/) |
+| Behind an HTTP proxy | [Docs]({{<baseurl>}}/rancher/v2.5/en/installation/other-installation-methods/behind-proxy/) | These [docs,]({{<baseurl>}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker) plus this [configuration]({{<baseurl>}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker/proxy/) |
| In an air gap environment | [Docs]({{<baseurl>}}/rancher/v2.5/en/installation/other-installation-methods/air-gap) | [Docs]({{<baseurl>}}/rancher/v2.5/en/installation/other-installation-methods/air-gap) |

We recommend installing Rancher on a Kubernetes cluster, because in a multi-node cluster, the Rancher management server becomes highly available. This high-availability configuration helps maintain consistent access to the downstream Kubernetes clusters that Rancher will manage.
@@ -57,9 +57,8 @@ For information on enabling experimental features, refer to [this page.]({{<base

| `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials |
| `ingress.configurationSnippet` | "" | `string` - Add additional Nginx configuration. Can be used for proxy configuration. |
| `ingress.extraAnnotations` | {} | `map` - additional annotations to customize the ingress |
+| `ingress.enabled` | true | When set to false, Helm will not install a Rancher ingress. Set the option to false to deploy your own ingress. _Available as of v2.5.6_ |
-| `letsEncrypt.ingress.class` | "" | `string` - optional ingress class for the cert-manager acmesolver ingress that responds to the Let's Encrypt ACME challenges. Options: traefik, nginx. |
-| `noProxy` | "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local" | `string` - comma separated list of hostnames or ip address not to use the proxy |
+| `letsEncrypt.ingress.class` | "" | `string` - optional ingress class for the cert-manager acmesolver ingress that responds to the Let's Encrypt ACME challenges. Options: traefik, nginx. |
+| `noProxy` | "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local,cattle-system.svc" | `string` - comma separated list of hostnames or ip addresses not to use the proxy |
| `proxy` | "" | `string` - HTTP[S] proxy server for Rancher |
| `rancherImage` | "rancher/rancher" | `string` - rancher image source |
| `rancherImagePullPolicy` | "IfNotPresent" | `string` - Override imagePullPolicy for rancher server images - "Always", "Never", "IfNotPresent" |
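
Several of the options above can be combined in a Helm values file. A hedged sketch, assuming you want to deploy your own ingress and route Rancher through a proxy (the proxy URL below is a placeholder):

```yaml
# Hedged values.yaml sketch using options from the table above.
# The proxy URL is a placeholder; substitute your own.
ingress:
  enabled: false   # deploy your own ingress instead (available as of v2.5.6)
proxy: "http://192.168.1.10:3128"
noProxy: "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local,cattle-system.svc"
rancherImagePullPolicy: "IfNotPresent"
```

Pass the file when installing or upgrading the chart, for example with `helm upgrade --install rancher rancher-latest/rancher -n cattle-system -f values.yaml`.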
@@ -80,7 +80,7 @@ Rancher can be rolled back using the Rancher UI.
# Rolling Back to Rancher v2.2-v2.4+
To roll back to Rancher prior to v2.5, follow the procedure detailed here: [Restoring Backups — Kubernetes installs]({{<baseurl>}}/rancher/v2.0-v2.4/en/backups/restore/rke-restore/). Restoring a snapshot of the Rancher server cluster will revert Rancher to the version and state at the time of the snapshot.
To roll back to Rancher before v2.5, follow the procedure detailed here: [Restoring Backups — Kubernetes installs]({{<baseurl>}}/rancher/v2.0-v2.4/en/backups/restore/rke-restore/). Restoring a snapshot of the Rancher server cluster will revert Rancher to the version and state at the time of the snapshot.
For information on how to roll back Rancher installed with Docker, refer to [this page.]({{<baseurl>}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker/single-node-rollbacks)
@@ -98,7 +98,7 @@ You'll use the backup as a restoration point if something goes wrong during upgr
helm repo list

NAME                  URL
stable                https://kubernetes-charts.storage.googleapis.com
stable                https://charts.helm.sh/stable
rancher-<CHART_REPO>  https://releases.rancher.com/server-charts/<CHART_REPO>
```
@@ -3,6 +3,8 @@ title: RancherD Configuration Reference
weight: 1
---
> RancherD is an experimental feature.
In RancherD, a server node is defined as a machine (bare-metal or virtual) running the `rancherd server` command. The server runs the Kubernetes API as well as Kubernetes workloads.
An agent node is defined as a machine running the `rancherd agent` command. Agent nodes don't run the Kubernetes API. To add nodes designated to run your apps and services, join agent nodes to your cluster.
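As a rough sketch of the join flow: the server is started first, and each agent is then pointed at it. The config file path and keys below follow the K3s/RKE2 convention that RancherD builds on; they are assumptions for illustration, not verified RancherD syntax, and the host and token are placeholders:

```yaml
# Assumed agent config, modeled on the RKE2 convention
# (e.g. /etc/rancher/rke2/config.yaml on the joining node);
# <server-host> and <token> are placeholders.
server: https://<server-host>:9345
token: <token>
```

With such a file in place, running `rancherd agent` on the machine would join it to the cluster as a workload-only node.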
@@ -73,7 +75,7 @@ Put this manifest on your host in `/var/lib/rancher/rke2/server/manifests` befor
| `extraEnv` | [] | ***list*** - set additional environment variables for Rancher |
| `imagePullSecrets` | [] | ***list*** - list of names of Secret resources containing private registry credentials |
| `proxy` | "" | ***string*** - HTTP[S] proxy server for Rancher |
| `noProxy` | "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16" | ***string*** - comma-separated list of hostnames or IP addresses that should not use the proxy |
| `noProxy` | "127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16" | ***string*** - comma-separated list of hostnames or IP addresses that should not use the proxy |
| `resources` | {} | ***map*** - rancher pod resource requests & limits |
| `rancherImage` | "rancher/rancher" | ***string*** - rancher image source |
| `rancherImageTag` | same as chart version | ***string*** - rancher/rancher image tag |
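The hunk context above says to put a manifest on the host in `/var/lib/rancher/rke2/server/manifests` before starting. A hedged sketch of such a manifest, assuming the `HelmChartConfig` CRD served by the helm-controller bundled with RKE2 (the chart name and values shown are illustrative, not taken from this diff):

```yaml
# Sketch only: assumes the helm.cattle.io/v1 HelmChartConfig CRD and a
# "rancher" chart managed in kube-system; values mirror the table above.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rancher
  namespace: kube-system
spec:
  valuesContent: |-
    noProxy: "127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16"
    rancherImagePullPolicy: IfNotPresent
```

Dropping the file into the manifests directory before launch lets the embedded helm-controller apply the overrides without a separate `helm upgrade` step.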