mirror of https://github.com/rancher/rancher-docs.git
synced 2026-05-01 10:43:10 +00:00
@@ -1,6 +1,29 @@
 ---
 title: Rollbacks
 ---
+## Additional Steps for Rollbacks with Rancher v2.6.4+
+
+Rancher v2.6.4 upgrades the cluster-api module from v0.4.4 to v1.0.2. Version v1.0.2 of the cluster-api, in turn, upgrades the Cluster API's Custom Resource Definitions (CRDs) from `cluster.x-k8s.io/v1alpha4` to `cluster.x-k8s.io/v1beta1`. The CRDs upgrade to v1beta1 causes rollbacks to fail when you attempt to move from Rancher v2.6.4 to any previous version of Rancher v2.6.x. This is because CRDs that use the older apiVersion (v1alpha4) are incompatible with v1beta1.
+
+To avoid rollback failure, the following Rancher scripts should be run **before** you attempt a restore operation or rollback:
+
+* `verify.sh`: Checks for any Rancher-related resources in the cluster.
+* `cleanup.sh`: Cleans up the cluster.
+
+See the [rancher/rancher-cleanup repo](https://github.com/rancher/rancher-cleanup) for more details and source code.
+
+:::caution
+
+There will be downtime while `cleanup.sh` runs, since the script deletes resources created by Rancher.
+
+:::
+
+### Rolling back from v2.6.4+ to lower versions of v2.6.x
+
+1. Follow these [instructions](https://github.com/rancher/rancher-cleanup/blob/main/README.md) to run the scripts.
+1. Follow these [instructions](https://rancher.com/docs/rancher/v2.6/en/backups/migrating-rancher/) to install the rancher-backup Helm chart on the existing cluster and restore the previous state.
+1. Omit Step 3.
+1. When you reach Step 4, install the Rancher v2.6.x version on the local cluster you intend to roll back to.
 
 ## Rolling Back to Rancher v2.5.0+
@@ -2,7 +2,7 @@
 title: '1. Set up Infrastructure and Private Registry'
 ---
 
-In this section, you will provision the underlying infrastructure for your Rancher management server in an air gapped environment. You will also set up the private Docker registry that must be available to your Rancher node(s).
+In this section, you will provision the underlying infrastructure for your Rancher management server in an air gapped environment. You will also set up the private container image registry that must be available to your Rancher node(s).
 
 An air gapped environment is an environment where the Rancher server is installed offline or behind a firewall.
 
@@ -19,7 +19,7 @@ We recommend setting up the following infrastructure for a high-availability ins
 
 - **An external database** to store the cluster data. PostgreSQL, MySQL, and etcd are supported.
 - **A load balancer** to direct traffic to the two nodes.
 - **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
-- **A private Docker registry** to distribute Docker images to your machines.
+- **A private image registry** to distribute container images to your machines.
 
 ### 1. Set up Linux Nodes
 
@@ -78,13 +78,17 @@ You will need to specify this hostname in a later step when you install Rancher,
 
 For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)
 
-### 5. Set up a Private Docker Registry
+### 5. Set up a Private Image Registry
 
-Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machines.
+Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing container images to your machines.
 
 In a later step, when you set up your K3s Kubernetes cluster, you will create a [private registries configuration file](https://rancher.com/docs/k3s/latest/en/installation/private-registry/) with details from this registry.
 
-If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry)
+If you need to create a private registry, refer to the documentation pages for your respective runtime:
+
+* [Containerd](https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration).
+* [Nerdctl commands and managed registry services](https://github.com/containerd/nerdctl/blob/main/docs/registry.md).
+* [Docker](https://docs.docker.com/registry/deploying/).
 
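The K3s private registries configuration file referenced in this hunk typically looks like the following sketch. The registry URL and credentials here are placeholders, not values from this page; per the K3s docs, the file lives at `/etc/rancher/k3s/registries.yaml` on each node:

```yaml
# /etc/rancher/k3s/registries.yaml -- illustrative values only
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
configs:
  "registry.example.com:5000":
    auth:
      username: myuser       # placeholder
      password: mypassword   # placeholder
```

K3s reads this file at startup and redirects pulls for `docker.io` images through the listed mirror endpoint, which is what makes the air gap install work without Internet access.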
 </TabItem>
 <TabItem value="RKE">
@@ -94,7 +98,7 @@ To install the Rancher management server on a high-availability RKE cluster, we
 
 - **Three Linux nodes,** typically virtual machines, in an infrastructure provider such as Amazon's EC2, Google Compute Engine, or vSphere.
 - **A load balancer** to direct front-end traffic to the three nodes.
 - **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
-- **A private Docker registry** to distribute Docker images to your machines.
+- **A private image registry** to distribute container images to your machines.
 
 These nodes must be in the same region/data center. You may place these servers in separate availability zones.
 
@@ -145,13 +149,17 @@ You will need to specify this hostname in a later step when you install Rancher,
 
 For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)
 
-### 4. Set up a Private Docker Registry
+### 4. Set up a Private Image Registry
 
-Rancher supports air gap installs using a secure Docker private registry. You must have your own private registry or other means of distributing Docker images to your machines.
+Rancher supports air gap installs using a secure private registry. You must have your own private registry or other means of distributing container images to your machines.
 
 In a later step, when you set up your RKE Kubernetes cluster, you will create a [private registries configuration file](https://rancher.com/docs/rke/latest/en/config-options/private-registries/) with details from this registry.
 
-If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry)
+If you need to create a private registry, refer to the documentation pages for your respective runtime:
+
+* [Containerd](https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration).
+* [Nerdctl commands and managed registry services](https://github.com/containerd/nerdctl/blob/main/docs/registry.md).
+* [Docker](https://docs.docker.com/registry/deploying/).
 
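For RKE, the registry details from this step later land in the cluster's `cluster.yml`. A minimal sketch of that private registries configuration, with a placeholder URL and credentials (the key names follow the RKE private-registries docs):

```yaml
# cluster.yml fragment -- illustrative values only
private_registries:
  - url: registry.example.com
    user: myuser          # placeholder
    password: mypassword  # placeholder
    is_default: true      # pull all system images from this registry
```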
 </TabItem>
 <TabItem value="Docker">
@@ -168,15 +176,15 @@ If you need help with creating a private registry, please refer to the [official
 
 This host will be disconnected from the Internet, but needs to be able to connect to your private registry.
 
-Make sure that your node fulfills the general installation requirements for [OS, Docker, hardware, and networking.](../../../../pages-for-subheaders/installation-requirements.md)
+Make sure that your node fulfills the general installation requirements for [OS, containers, hardware, and networking.](../../../../pages-for-subheaders/installation-requirements.md)
 
 For an example of one way to set up Linux nodes, refer to this [tutorial](../../../../how-to-guides/new-user-guides/infrastructure-setup/nodes-in-amazon-ec2.md) for setting up nodes as instances in Amazon EC2.
 
 ### 2. Set up a Private Docker Registry
 
-Rancher supports air gap installs using a Docker private registry on your bastion server. You must have your own private registry or other means of distributing Docker images to your machines.
+Rancher supports air gap installs using a private registry on your bastion server. You must have your own private registry or other means of distributing container images to your machines.
 
-If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/)
+If you need help with creating a private registry, please refer to the [official Docker documentation](https://docs.docker.com/registry/).
 
 </TabItem>
 </Tabs>
 
@@ -25,9 +25,9 @@ If you want to replace the certificate, you can delete the `tls-rancher-ingress`
 
 ## Using a Private CA Signed Certificate
 
-If you are using a private CA, Rancher requires a copy of the CA certificate which is used by the Rancher Agent to validate the connection to the server.
+If you are using a private CA, Rancher requires a copy of the private CA's root certificate or certificate chain, which the Rancher Agent uses to validate the connection to the server.
 
-Copy the CA certificate into a file named `cacerts.pem` and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.
+Create a file named `cacerts.pem` that only contains the root CA certificate or certificate chain from your private CA, and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.
 
 ```
 kubectl -n cattle-system create secret generic tls-ca \
@@ -147,7 +147,7 @@ This command will cause the agent manifest to be reapplied with the checksum of
 Manually patch the agent Kubernetes objects by updating the `CATTLE_CA_CHECKSUM` environment variable to the value matching the checksum of the new CA certificate. Generate the new checksum value like so:
 
 ```bash
-curl -k -s -fL <RANCHER_SERVER_URL>/v3/settings/cacerts | jq -r .value | sha256sum cacert.tmp | awk '{print $1}'
+curl -k -s -fL <RANCHER_SERVER_URL>/v3/settings/cacerts | jq -r .value | sha256sum | awk '{print $1}'
 ```
 
 Using a Kubeconfig for each downstream cluster, update the environment variable for the two agent deployments. If the [ACE](../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint) is enabled for the cluster, [the kubectl context can be adjusted](../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig#authenticating-directly-with-a-downstream-cluster) to connect directly to the downstream cluster.
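The fix in this hunk is that the old command hashed a local `cacert.tmp` file instead of the PEM value piped from the `cacerts` setting. A local sketch of the corrected hashing step, using a stand-in certificate payload (the PEM content below is a placeholder, not a real Rancher CA):

```shell
# Stand-in for the value that <RANCHER_SERVER_URL>/v3/settings/cacerts returns.
cat > cacerts.tmp <<'EOF'
-----BEGIN CERTIFICATE-----
MIIBszCCAVmgAwIBAgIUPLACEHOLDER
-----END CERTIFICATE-----
EOF

# Same shape as the fixed pipeline: hash stdin, print only the hex digest.
cat cacerts.tmp | sha256sum | awk '{print $1}'
```

The printed 64-character digest is the value to place in `CATTLE_CA_CHECKSUM` on the agent deployments.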
@@ -260,4 +260,4 @@ Select 'Force Update' for the clusters within the [Continuous Delivery](../../..
 
 #### Why is this step required?
 
 Fleet agents in Rancher managed clusters store a kubeconfig that is used to connect to Rancher. The kubeconfig contains a `certificate-authority-data` field containing the CA for the certificate used by Rancher. When changing the CA, this block needs to be updated to allow the fleet-agent to trust the certificate used by Rancher.
@@ -78,7 +78,7 @@ If you have an air gap setup, you might not be able to get the automatic periodi
 
 To sync Rancher with a local mirror of the RKE metadata, an administrator would configure the `rke-metadata-config` settings to point to the mirror. For details, refer to [Configuring the Metadata Synchronization.](#configuring-the-metadata-synchronization)
 
-After new Kubernetes versions are loaded into the Rancher setup, additional steps would be required in order to use them for launching clusters. Rancher needs access to updated system images. While the metadata settings can only be changed by administrators, any user can download the Rancher system images and prepare a private Docker registry for them.
+After new Kubernetes versions are loaded into the Rancher setup, additional steps would be required in order to use them for launching clusters. Rancher needs access to updated system images. While the metadata settings can only be changed by administrators, any user can download the Rancher system images and prepare a private container image registry for them.
 
 1. To download the system images for the private registry, click the Rancher server version at the bottom left corner of the Rancher UI.
 1. Download the OS specific image lists for Linux or Windows.
 
@@ -44,7 +44,7 @@ The AWS module just creates an EC2 KeyPair, an EC2 SecurityGroup and an EC2 inst
 
 1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.
 
-2. Go into the AWS folder containing the terraform files by executing `cd quickstart/rancher/aws`.
+2. Go into the AWS folder containing the Terraform files by executing `cd quickstart/rancher/aws`.
 
 3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
 
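After renaming the file, you fill in provider credentials before running Terraform. The variable names below are hypothetical placeholders for illustration only; the authoritative list is in the comments of `terraform.tfvars.example` itself:

```hcl
# terraform.tfvars -- hypothetical keys, replace with those from the example file
aws_access_key                = "<AWS_ACCESS_KEY>"
aws_secret_key                = "<AWS_SECRET_KEY>"
rancher_server_admin_password = "<ADMIN_PASSWORD>"
```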
@@ -30,7 +30,7 @@ Deploying to Microsoft Azure will incur charges.
 
 1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.
 
-2. Go into the Azure folder containing the terraform files by executing `cd quickstart/rancher/azure`.
+2. Go into the Azure folder containing the Terraform files by executing `cd quickstart/rancher/azure`.
 
 3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
 
@@ -27,7 +27,7 @@ Deploying to DigitalOcean will incur charges.
 
 1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.
 
-2. Go into the DigitalOcean folder containing the terraform files by executing `cd quickstart/rancher/do`.
+2. Go into the DigitalOcean folder containing the Terraform files by executing `cd quickstart/rancher/do`.
 
 3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
 
@@ -28,7 +28,7 @@ Deploying to Google GCP will incur charges.
 
 1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.
 
-2. Go into the GCP folder containing the terraform files by executing `cd quickstart/rancher/gcp`.
+2. Go into the GCP folder containing the Terraform files by executing `cd quickstart/rancher/gcp`.
 
 3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
 
@@ -27,7 +27,7 @@ Deploying to Hetzner Cloud will incur charges.
 
 1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.
 
-2. Go into the Hetzner folder containing the terraform files by executing `cd quickstart/rancher/hcloud`.
+2. Go into the Hetzner folder containing the Terraform files by executing `cd quickstart/rancher/hcloud`.
 
 3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
 
@@ -27,7 +27,7 @@ Deploying to Outscale will incur charges.
 
 1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.
 
-2. Go into the Outscale folder containing the terraform files by executing `cd quickstart/rancher/outscale`.
+2. Go into the Outscale folder containing the Terraform files by executing `cd quickstart/rancher/outscale`.
 
 3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
 
@@ -23,7 +23,7 @@ Following project creation, you can add users as project members so that they ca
 1. In the upper left corner, click **☰ > Cluster Management**.
 1. On the **Clusters** page, go to the cluster where you want to add members to a project and click **Explore**.
 1. Click **Cluster > Projects/Namespaces**.
-1. Go to the project where you want to add members and click **⋮ > Edit Config**.
+1. Go to the project where you want to add members. Next to the **Create Namespace** button above the project name, click **☰**. Select **Edit Config**.
 1. In the **Members** tab, click **Add**.
 1. Search for the user or group that you want to add to the project.
 
@@ -2,7 +2,7 @@
 title: Configuring a Global Default Private Registry
 ---
 
-You might want to use a private container registry to share your custom base images within your organization. With a private registry, you can keep a private, consistent, and centralized source of truth for the container images that are used in your clusters.
+You might want to use a private container image registry to share your custom base images within your organization. With a private registry, you can keep a private, consistent, and centralized source of truth for the container images that are used in your clusters.
 
 There are two main ways to set up private registries in Rancher: by setting up the global default registry through the **Settings** tab in the global view, and by setting up a private registry in the advanced options in the cluster-level settings. The global default registry is intended to be used for air-gapped setups, for registries that do not require credentials. The cluster-level private registry is intended to be used in all setups in which the private registry requires credentials.
 
@@ -12,24 +12,29 @@ This page outlines how to perform a restore with Rancher.
 
 :::
 
-### Additional Steps for Rollbacks with Rancher v2.6.4+
+## Additional Steps for Rollbacks with Rancher v2.6.4+
 
-In Rancher v2.6.4, the cluster-api module has been upgraded from v0.4.4 to v1.0.2 in which the apiVersion of CAPI CRDs are upgraded from `cluster.x-k8s.io/v1alpha4` to `cluster.x-k8s.io/v1beta1`. This has the effect of causing rollbacks from Rancher v2.6.4 to any previous version of Rancher v2.6.x to fail because the previous version the CRDs needed to roll back are no longer available in v1beta1.
+Rancher v2.6.4 upgrades the cluster-api module from v0.4.4 to v1.0.2. Version v1.0.2 of the cluster-api, in turn, upgrades the Cluster API's Custom Resource Definitions (CRDs) from `cluster.x-k8s.io/v1alpha4` to `cluster.x-k8s.io/v1beta1`. The CRDs upgrade to v1beta1 causes rollbacks to fail when you attempt to move from Rancher v2.6.4 to any previous version of Rancher v2.6.x. This is because CRDs that use the older apiVersion (v1alpha4) are incompatible with v1beta1.
 
-To avoid this, the Rancher resource cleanup scripts should be run **before** the restore or rollback is attempted. Specifically, two scripts have been created to assist you: one to clean up the cluster (`cleanup.sh`), and one to check for any Rancher-related resources in the cluster (`verify.sh`). Details on the cleanup script can be found in the [rancher/rancher-cleanup repo](https://github.com/rancher/rancher-cleanup).
+To avoid rollback failure, the following Rancher scripts should be run **before** you attempt a restore operation or rollback:
+
+* `verify.sh`: Checks for any Rancher-related resources in the cluster.
+* `cleanup.sh`: Cleans up the cluster.
+
+See the [rancher/rancher-cleanup repo](https://github.com/rancher/rancher-cleanup) for more details and source code.
 
 :::caution
 
-Rancher will be down as the `cleanup` script runs as it deletes the resources created by rancher.
+There will be downtime while `cleanup.sh` runs, since the script deletes resources created by Rancher.
 
 :::
 
-The additional preparations:
+### Rolling back from v2.6.4+ to lower versions of v2.6.x
 
 1. Follow these [instructions](https://github.com/rancher/rancher-cleanup/blob/main/README.md) to run the scripts.
 1. Follow these [instructions](https://rancher.com/docs/rancher/v2.6/en/backups/migrating-rancher/) to install the rancher-backup Helm chart on the existing cluster and restore the previous state.
 1. Omit Step 3.
-1. When Step 4 is reached, install the required Rancher v2.6.x version on the local cluster you intend to roll back to.
+1. When you reach Step 4, install the Rancher v2.6.x version on the local cluster you intend to roll back to.
 
 ### Create the Restore Custom Resource
@@ -63,8 +63,17 @@ After installing NGINX, you need to update the NGINX configuration file, `nginx.
         server <IP_NODE_3>:443 max_fails=3 fail_timeout=5s;
     }
     server {
-        listen 443;
-        proxy_pass rancher_servers_https;
+        listen 443 ssl;
+        ssl_certificate /path/to/tls.crt;
+        ssl_certificate_key /path/to/key.key;
+        location / {
+            proxy_pass https://rancher_servers_https;
+            proxy_set_header Host <rancher UI URL>;
+            proxy_ssl_server_name on;
+            proxy_ssl_name <rancher UI URL>;
+
+        }
     }
 
 }
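Assembled, the server block this hunk produces looks roughly as follows. The `upstream` block and the first two `server` entries are inferred from the single visible `<IP_NODE_3>` line and are assumptions, as are the certificate paths:

```nginx
upstream rancher_servers_https {
    least_conn;                                            # assumed, not shown in the hunk
    server <IP_NODE_1>:443 max_fails=3 fail_timeout=5s;    # assumed, not shown in the hunk
    server <IP_NODE_2>:443 max_fails=3 fail_timeout=5s;    # assumed, not shown in the hunk
    server <IP_NODE_3>:443 max_fails=3 fail_timeout=5s;
}

server {
    listen 443 ssl;
    ssl_certificate /path/to/tls.crt;
    ssl_certificate_key /path/to/key.key;
    location / {
        proxy_pass https://rancher_servers_https;
        proxy_set_header Host <rancher UI URL>;
        proxy_ssl_server_name on;
        proxy_ssl_name <rancher UI URL>;
    }
}
```

The change from `listen 443;` to `listen 443 ssl;` means NGINX now terminates TLS itself and re-encrypts traffic to the upstream Rancher nodes, which is why the `ssl_certificate` and `proxy_ssl_*` directives are added.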
@@ -1,15 +1,15 @@
 ---
-title: Kubernetes Registry and Docker Registry
-description: Learn about the Docker registry and Kubernetes registry, their use cases and how to use a private registry with the Rancher UI
+title: Kubernetes Registry and Container Image Registry
+description: Learn about the container image registry and Kubernetes registry, their use cases, and how to use a private registry with the Rancher UI
 ---
-Registries are Kubernetes secrets containing credentials used to authenticate with [private Docker registries](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/).
+Registries are Kubernetes secrets containing credentials used to authenticate with [private container registries](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/).
 
-The word "registry" can mean two things, depending on whether it is used to refer to a Docker or Kubernetes registry:
+The word "registry" can mean two things, depending on whether it is used to refer to a container or Kubernetes registry:
 
-- A **Docker registry** contains Docker images that you can pull in order to use them in your deployment. The registry is a stateless, scalable server side application that stores and lets you distribute Docker images.
-- The **Kubernetes registry** is an image pull secret that your deployment uses to authenticate with a Docker registry.
+- A **Container image registry** (formerly "Docker registry") contains container images that you can pull and deploy. The registry is a stateless, scalable server side application that stores and lets you distribute container images.
+- The **Kubernetes registry** is an image pull secret that your deployment uses to authenticate with an image registry.
 
-Deployments use the Kubernetes registry secret to authenticate with a private Docker registry and then pull a Docker image hosted on it.
+Deployments use the Kubernetes registry secret to authenticate with a private image registry and then pull a container image hosted on it.
 
 Currently, deployments pull the private registry credentials automatically only if the workload is created in the Rancher UI and not when it is created via kubectl.
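The "Kubernetes registry" described here is a `kubernetes.io/dockerconfigjson` secret. A minimal sketch of the credential payload such a pull secret carries, built with placeholder registry and credentials (the same JSON `kubectl create secret docker-registry` would generate):

```shell
registry="registry.example.com"   # placeholder
username="demo-user"              # placeholder
password="demo-pass"              # placeholder

# The "auth" field is base64("username:password").
auth=$(printf '%s' "$username:$password" | base64)

# This JSON is what gets stored (base64-encoded) under the secret's
# .dockerconfigjson key.
cat > dockerconfig.json <<EOF
{"auths":{"$registry":{"username":"$username","password":"$password","auth":"$auth"}}}
EOF

base64 < dockerconfig.json > dockerconfigjson.b64
```

The kubelet decodes this payload and uses the `auths` entry matching the image's registry host when it pulls the container image.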
@@ -17,7 +17,13 @@ Currently, deployments pull the private registry credentials automatically only
 
 :::note Prerequisite:
 
-You must have a [private registry](https://docs.docker.com/registry/deploying/) available to use.
+You must have an available private registry already deployed.
+
+If you need to create a private registry, refer to the documentation pages for your respective runtime:
+
+* [Containerd](https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration).
+* [Nerdctl commands and managed registry services](https://github.com/containerd/nerdctl/blob/main/docs/registry.md).
+* [Docker](https://docs.docker.com/registry/deploying/).
 
 :::
 
@@ -48,7 +54,13 @@ You must have a [private registry](https://docs.docker.com/registry/deploying/)
 
 :::note Prerequisites:
 
-You must have a [private registry](https://docs.docker.com/registry/deploying/) available to use.
+You must have an available private registry already deployed.
+
+If you need to create a private registry, refer to the documentation pages for your respective runtime:
+
+* [Containerd](https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration).
+* [Nerdctl commands and managed registry services](https://github.com/containerd/nerdctl/blob/main/docs/registry.md).
+* [Docker](https://docs.docker.com/registry/deploying/).
 
 :::
 
@@ -104,7 +116,7 @@ To deploy a workload with an image from your private registry,
 1. In the **Container Image** field, enter the URL of the path to the image in your private registry. For example, if your private registry is on Quay.io, you could use `quay.io/<Quay profile name>/<Image name>`.
 1. Click **Create**.
 
-**Result:** Your deployment should launch, authenticate using the private registry credentials you added in the Rancher UI, and pull the Docker image that you specified.
+**Result:** Your deployment should launch, authenticate using the private registry credentials you added in the Rancher UI, and pull the container image that you specified.
 
 ### Using the Private Registry with kubectl
 
@@ -19,7 +19,9 @@ The `cattle-node-agent` is used to interact with nodes in a [Rancher Launched Ku
 
 ### Scheduling rules
 
-The `cattle-cluster-agent` uses a fixed fixed set of tolerations (listed below, if no controlplane nodes are visible in the cluster) or dynamically added tolerations based on taints applied to the controlplane nodes. This structure allows for [Taint based Evictions](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions) to work properly for `cattle-cluster-agent`. The default tolerations are described below. If controlplane nodes are present the cluster, the tolerations will be replaced with tolerations matching the taints on the controlplane nodes.
+The `cattle-cluster-agent` uses either a fixed set of tolerations, or dynamically-added tolerations based on taints applied to the control plane nodes. This structure allows [Taint based Evictions](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions) to work properly for `cattle-cluster-agent`.
+
+If control plane nodes are present in the cluster, the default tolerations will be replaced with tolerations matching the taints on the control plane nodes. The default set of tolerations is described below.
 
 | Component | nodeAffinity nodeSelectorTerms | nodeSelector | Tolerations |
 | ---------------------- | ------------------------------------------ | ------------ | ------------------------------------------------------------------------------ |
@@ -8,7 +8,7 @@ RKE1 and RKE2 have several slight behavioral differences to note, and this page
 
 ### Control Plane Components
 
-RKE1 uses Docker for deploying and managing control plane components, and it also uses Docker as the container runtime for Kubernetes. By contrast, RKE2 launches control plane components as static pods that are managed by the kubelet. RKE2's container runtime is containerd, which allows things such as container registry mirroring (RKE1 with Docker does not).
+RKE1 uses Docker for deploying and managing control plane components, and it also uses Docker as the container runtime for Kubernetes. By contrast, RKE2 launches control plane components as static pods that are managed by the kubelet. RKE2's container runtime is containerd, which allows things such as mirroring a container image registry. RKE1 with Docker does not allow mirroring.
 
 ### Cluster API
 
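The registry mirroring that containerd enables on RKE2 is configured per node through a `registries.yaml` file. A sketch with a hypothetical mirror endpoint (file path and schema per the RKE2 private-registry docs; the endpoint value is a placeholder):

```yaml
# /etc/rancher/rke2/registries.yaml -- illustrative endpoint only
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
```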
@@ -1,110 +0,0 @@
|
||||
---
|
||||
title: Cloning Clusters
|
||||
---
|
||||
|
||||
If you have a cluster in Rancher that you want to use as a template for creating similar clusters, you can use Rancher CLI to clone the cluster's configuration, edit it, and then use it to quickly launch the cloned cluster.
|
||||
|
||||
Duplication of registered clusters is not supported.
|
||||
|
||||
| Cluster Type | Cloneable? |
|
||||
|----------------------------------|---------------|
|
||||
| [Nodes Hosted by Infrastructure Provider](../../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md) | ✓ |
|
||||
| [Hosted Kubernetes Providers](../../../pages-for-subheaders/set-up-clusters-from-hosted-kubernetes-providers.md) | ✓ |
|
||||
| [Custom Cluster](../../../pages-for-subheaders/use-existing-nodes.md) | ✓ |
|
||||
| [Registered Cluster](../../new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md) | |
|
||||
|
||||
:::caution
|
||||
|
||||
During the process of duplicating a cluster, you will edit a config file full of cluster settings. However, we recommend editing only values explicitly listed in this document, as cluster duplication is designed for simple cluster copying, **_not_** wide scale configuration changes. Editing other values may invalidate the config file, which will lead to cluster deployment failure.
|
||||
|
||||
:::
|
||||
|
||||
## Prerequisites
|
||||
|
||||
Download and install [Rancher CLI](../../../pages-for-subheaders/cli-with-rancher.md). Remember to [create an API bearer token](../../../reference-guides/user-settings/api-keys.md) if necessary.
|
||||
|
||||
|
||||
## 1. Export Cluster Config
|
||||
|
||||
Begin by using Rancher CLI to export the configuration for the cluster that you want to clone.
|
||||
|
||||
1. Open Terminal and change your directory to the location of the Rancher CLI binary, `rancher`.
|
||||
|
||||
1. Enter the following command to list the clusters managed by Rancher.
|
||||
|
||||
|
||||
./rancher cluster ls
|
||||
|
||||
|
||||

1. Find the cluster that you want to clone, and copy either its resource `ID` or `NAME` to your clipboard. From this point on, we'll refer to the resource `ID` or `NAME` as `<RESOURCE_ID>`, which is used as a placeholder in the next step.

1. Enter the following command to export the configuration for your cluster.

   ```
   ./rancher clusters export <RESOURCE_ID>
   ```

   **Step Result:** The YAML for a cloned cluster prints to Terminal.

1. Copy the YAML to your clipboard and paste it in a new file. Save the file as `cluster-template.yml` (or any other name, as long as it has a `.yml` extension).
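
Alternatively, you can skip the clipboard step by redirecting the export output straight into the file. A sketch using the same placeholder:

```
./rancher clusters export <RESOURCE_ID> > cluster-template.yml
```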

## 2. Modify Cluster Config

Use your favorite text editor to modify the cluster configuration in `cluster-template.yml` for your cloned cluster.

:::note

Cluster configuration directives must be nested under the `rancher_kubernetes_engine_config` directive in `cluster.yml`. For more information, refer to the section on [the config file structure in Rancher v2.3.0+](../../../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#config-file-structure-in-rancher).

:::

1. Open `cluster-template.yml` (or whatever you named your config) in your favorite text editor.

   :::caution

   Only edit the cluster config values explicitly called out below. Many of the values listed in this file are used to provision your cloned cluster, and editing their values may break the provisioning process.

   :::

1. As depicted in the example below, replace your original cluster's name with a unique name at the `<CLUSTER_NAME>` placeholder. If your cloned cluster has a duplicate name, the cluster will not provision successfully.

   ```yml
   Version: v3
   clusters:
     <CLUSTER_NAME>: # ENTER UNIQUE NAME
       dockerRootDir: /var/lib/docker
       enableNetworkPolicy: false
       rancherKubernetesEngineConfig:
         addonJobTimeout: 30
         authentication:
           strategy: x509
         authorization: {}
         bastionHost: {}
         cloudProvider: {}
         ignoreDockerVersion: true
   ```

1. For each `nodePools` section, replace the original node pool name with a unique name at the `<NODEPOOL_NAME>` placeholder. If your cloned cluster has a duplicate node pool name, the cluster will not provision successfully.

   ```yml
   nodePools:
     <NODEPOOL_NAME>:
       clusterId: do
       controlPlane: true
       etcd: true
       hostnamePrefix: mark-do
       nodeTemplateId: do
       quantity: 1
       worker: true
   ```
1. When you're done, save and close the configuration.
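
If you clone clusters often, the placeholder substitution can be scripted instead of done in an editor. A minimal sketch using GNU `sed`; the names `my-clone` and `my-clone-pool` are hypothetical examples, and the `printf` lines only build a tiny stand-in template for demonstration:

```sh
# Build a minimal stand-in for cluster-template.yml (demonstration only)
printf '%s\n' \
  'clusters:' \
  '  <CLUSTER_NAME>: # ENTER UNIQUE NAME' \
  'nodePools:' \
  '  <NODEPOOL_NAME>:' > cluster-template.yml

# Substitute the placeholders non-interactively (GNU sed, in-place edit)
sed -i \
  -e 's/<CLUSTER_NAME>/my-clone/' \
  -e 's/<NODEPOOL_NAME>/my-clone-pool/' \
  cluster-template.yml
```

On a real exported template you would drop the `printf` block and run only the `sed` command against your saved file.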

## 3. Launch Cloned Cluster

Move `cluster-template.yml` into the same directory as the Rancher CLI binary. Then run this command:

```
./rancher up --file cluster-template.yml
```

**Result:** Your cloned cluster begins provisioning. Enter `./rancher cluster ls` to confirm.
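
Provisioning takes a few minutes, so rather than re-running the list command by hand, you can poll until the new cluster reports an active state. A sketch, assuming the hypothetical cluster name `my-clone` and that the `cluster ls` output includes the cluster's state:

```
until ./rancher cluster ls | grep -q 'my-clone.*active'; do
  sleep 30
done
```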

The Rancher chart configuration has many options for customizing the installation to suit your specific environment. Here are some common advanced scenarios.

- [HTTP Proxy](../getting-started/installation-and-upgrade/installation-references/helm-chart-options.md#http-proxy)
- [Private Container Image Registry](../getting-started/installation-and-upgrade/installation-references/helm-chart-options.md#private-registry-and-air-gap-installs)
- [TLS Termination on an External Load Balancer](../getting-started/installation-and-upgrade/installation-references/helm-chart-options.md#external-tls-termination)

See the [Chart Options](../getting-started/installation-and-upgrade/installation-references/helm-chart-options.md) for the full list of options.

### Engine Options

In the **Engine Options** section of the node template, you can configure the container daemon. You may want to specify the container version or a container image registry mirror.
:::

If you use Docker, the [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-daemon) configuration options include:

- **Labels:** For information on labels, refer to the [Docker object label documentation](https://docs.docker.com/config/labels-custom-metadata/).
- **Docker Engine Install URL:** Determines what Docker version will be installed on the instance.
- **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon.
- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/).
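
These options correspond to settings in the Docker daemon's own configuration. For example, registry mirrors and labels can equivalently be declared in `/etc/docker/daemon.json` (a sketch; `mirror.example.com` and the label value are placeholders, not values from this guide):

```json
{
  "registry-mirrors": ["https://mirror.example.com"],
  "labels": ["environment=dev"]
}
```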

### Docker Daemon

If you use Docker, the [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-daemon) configuration options include:

- **Labels:** For information on labels, refer to the [Docker object label documentation](https://docs.docker.com/config/labels-custom-metadata/).
- **Docker Engine Install URL:** Determines what Docker version will be installed on the instance.
- **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon.
- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/).
| ID | Description | Date | Resolution |
|----|-------------|------|------------|
| [CVE-2022-43758](https://github.com/rancher/rancher/security/advisories/GHSA-34p5-jp77-fcrc) | An issue was discovered in Rancher from versions 2.5.0 up to and including 2.5.16, 2.6.0 up to and including 2.6.9 and 2.7.0, where a command injection vulnerability is present in the Rancher Git package. This package uses the underlying Git binary available in the Rancher container image to execute Git operations. Specially crafted commands, when not properly disambiguated, can cause confusion when executed through Git, resulting in command injection in the underlying Rancher host. | 24 January 2023 | Rancher [v2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1), [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) and [v2.5.17](https://github.com/rancher/rancher/releases/tag/v2.5.17) |
| [CVE-2022-43757](https://github.com/rancher/rancher/security/advisories/GHSA-cq4p-vp5q-4522) | This issue affects Rancher versions from 2.5.0 up to and including 2.5.16, from 2.6.0 up to and including 2.6.9 and 2.7.0. It was discovered that the security advisory [CVE-2021-36782](https://github.com/advisories/GHSA-g7j7-h4q8-8w2f), previously released by Rancher, missed addressing some sensitive fields, secret tokens, encryption keys, and SSH keys that were still being stored in plaintext directly on Kubernetes objects like `Clusters`. The exposed credentials are visible in Rancher to authenticated `Cluster Owners`, `Cluster Members`, `Project Owners` and `Project Members` of that cluster. | 24 January 2023 | Rancher [v2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1), [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) and [v2.5.17](https://github.com/rancher/rancher/releases/tag/v2.5.17) |
| [CVE-2022-43755](https://github.com/rancher/rancher/security/advisories/GHSA-8c69-r38j-rpfj) | An issue was discovered in Rancher versions up to and including 2.6.9 and 2.7.0, where the `cattle-token` secret, used by the `cattle-cluster-agent`, is predictable. Even after the token is regenerated, it will have the same value. This can pose a serious problem if the token is compromised and needs to be recreated for security purposes. The `cattle-token` is used by Rancher's `cattle-cluster-agent` to connect to the Kubernetes API of Rancher provisioned downstream clusters. | 24 January 2023 | Rancher [v2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1) and [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) |
| [CVE-2022-21953](https://github.com/rancher/rancher/security/advisories/GHSA-g25r-gvq3-wrq7) | An issue was discovered in Rancher versions up to and including 2.5.16, 2.6.9 and 2.7.0, where an authorization logic flaw allows an authenticated user on any downstream cluster to (1) open a shell pod in the Rancher `local` cluster and (2) have limited kubectl access to it. The expected behavior is that a user does not have such access in the Rancher `local` cluster unless explicitly granted. | 24 January 2023 | Rancher [v2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1), [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) and [v2.5.17](https://github.com/rancher/rancher/releases/tag/v2.5.17) |
| [GHSA-c45c-39f6-6gw9](https://github.com/rancher/rancher/security/advisories/GHSA-c45c-39f6-6gw9) | This issue affects Rancher versions from 2.5.0 up to and including 2.5.16, from 2.6.0 up to and including 2.6.9 and 2.7.0. It only affects Rancher setups that have an external authentication provider configured or had one configured in the past. It was discovered that when an external authentication provider is configured in Rancher and then disabled, the Rancher generated tokens associated with users who had access granted through the now disabled auth provider are not revoked. | 24 January 2023 | Rancher [v2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1), [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) and [v2.5.17](https://github.com/rancher/rancher/releases/tag/v2.5.17) |
| [CVE-2022-31247](https://github.com/rancher/rancher/security/advisories/GHSA-6x34-89p7-95wg) | An issue was discovered in Rancher versions up to and including 2.5.15 and 2.6.6 where a flaw with authorization logic allows privilege escalation in downstream clusters through cluster role template binding (CRTB) and project role template binding (PRTB). The vulnerability can be exploited by any user who has permissions to create/edit CRTB or PRTB (such as `cluster-owner`, `manage cluster members`, `project-owner`, and `manage project members`) to gain owner permission in another project in the same cluster or in another project on a different downstream cluster. | 18 August 2022 | [Rancher v2.6.7](https://github.com/rancher/rancher/releases/tag/v2.6.7) and [Rancher v2.5.16](https://github.com/rancher/rancher/releases/tag/v2.5.16) |
| [CVE-2021-36783](https://github.com/rancher/rancher/security/advisories/GHSA-8w87-58w6-hfv8) | It was discovered that in Rancher versions up to and including 2.5.12 and 2.6.3, there is a failure to properly sanitize credentials in cluster template answers. This failure can lead to plaintext storage and exposure of credentials, passwords, and API tokens. The exposed credentials are visible in Rancher to authenticated `Cluster Owners`, `Cluster Members`, `Project Owners`, and `Project Members` on the endpoints `/v1/management.cattle.io.clusters`, `/v3/clusters`, and `/k8s/clusters/local/apis/management.cattle.io/v3/clusters`. | 18 August 2022 | [Rancher v2.6.7](https://github.com/rancher/rancher/releases/tag/v2.6.7) and [Rancher v2.5.16](https://github.com/rancher/rancher/releases/tag/v2.5.16) |
| [CVE-2021-36782](https://github.com/rancher/rancher/security/advisories/GHSA-g7j7-h4q8-8w2f) | An issue was discovered in Rancher versions up to and including 2.5.15 and 2.6.6 where sensitive fields like passwords, API keys, and Rancher's service account token (used to provision clusters) were stored in plaintext directly on Kubernetes objects like `Clusters` (e.g., `cluster.management.cattle.io`). Anyone with read access to those objects in the Kubernetes API could retrieve the plaintext version of those sensitive data. The issue was partially found and reported by Florian Struck (from [Continum AG](https://www.continum.net/)) and [Marco Stuurman](https://github.com/fe-ax) (from [Shock Media B.V.](https://www.shockmedia.nl/)). | 18 August 2022 | [Rancher v2.6.7](https://github.com/rancher/rancher/releases/tag/v2.6.7) and [Rancher v2.5.16](https://github.com/rancher/rancher/releases/tag/v2.5.16) |
| [GHSA-hwm2-4ph6-w6m5](https://github.com/rancher/rancher/security/advisories/GHSA-hwm2-4ph6-w6m5) | A vulnerability was discovered in versions of Rancher starting 2.0 up to and including 2.6.3. The `restricted` pod security policy (PSP) provided in Rancher deviated from the upstream `restricted` policy provided in Kubernetes on account of which Rancher's PSP had `runAsUser` set to `runAsAny`, while upstream had `runAsUser` set to `MustRunAsNonRoot`. This allowed containers to run as any user, including a privileged user (`root`), even when Rancher's `restricted` policy was enforced on a project or at the cluster level. | 31 Mar 2022 | [Rancher v2.6.4](https://github.com/rancher/rancher/releases/tag/v2.6.4) |
| [CVE-2021-36775](https://github.com/rancher/rancher/security/advisories/GHSA-28g7-896h-695v) | A vulnerability was discovered in Rancher versions up to and including 2.4.17, 2.5.11 and 2.6.2. After removing a `Project Role` associated with a group from the project, the bindings that granted access to cluster-scoped resources for those subjects were not deleted. This was due to an incomplete authorization logic check. A user who was a member of the affected group with authenticated access to Rancher could exploit this vulnerability to access resources they shouldn't have had access to. The exposure level would depend on the original permission level granted to the affected project role. This vulnerability only affected customers using group based authentication in Rancher. | 31 Mar 2022 | [Rancher v2.6.3](https://github.com/rancher/rancher/releases/tag/v2.6.3), [Rancher v2.5.12](https://github.com/rancher/rancher/releases/tag/v2.5.12) and [Rancher v2.4.18](https://github.com/rancher/rancher/releases/tag/v2.4.18) |
| [CVE-2021-36776](https://github.com/rancher/rancher/security/advisories/GHSA-gvh9-xgrq-r8hw) | A vulnerability was discovered in Rancher versions starting 2.5.0 up to and including 2.5.9, that allowed an authenticated user to impersonate any user on a cluster through an API proxy, without requiring knowledge of the impersonated user's credentials. This was due to the API proxy not dropping the impersonation header before sending the request to the Kubernetes API. A malicious user with authenticated access to Rancher could use this to impersonate another user with administrator access in Rancher, thereby gaining administrator level access to the cluster. | 31 Mar 2022 | [Rancher v2.6.0](https://github.com/rancher/rancher/releases/tag/v2.6.0) and [Rancher v2.5.10](https://github.com/rancher/rancher/releases/tag/v2.5.10) |
| [CVE-2021-25318](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25318) | A vulnerability was discovered in Rancher versions 2.0 through the aforementioned fixed versions, where users were granted access to resources regardless of the resource's API group. For example, Rancher should have allowed users access to `apps.catalog.cattle.io`, but instead incorrectly gave access to `apps.*`. Resources affected in the **Downstream clusters** and **Rancher management cluster** can be found [here](https://github.com/rancher/rancher/security/advisories/GHSA-f9xf-jq4j-vqw4). There is not a direct mitigation besides upgrading to the patched Rancher versions. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9) and [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2021-31999](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31999) | A vulnerability was discovered in Rancher 2.0.0 through the aforementioned patched versions, where a malicious Rancher user could craft an API request directed at the proxy for the Kubernetes API of a managed cluster to gain access to information they do not have access to. This is done by passing the "Impersonate-User" or "Impersonate-Group" header in the Connection header, which is then correctly removed by the proxy. At this point, instead of impersonating the user and their permissions, the request will act as if it was from the Rancher management server and incorrectly return the information. The vulnerability is limited to valid Rancher users with some level of permissions on the cluster. There is not a direct mitigation besides upgrading to the patched Rancher versions. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9) and [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2021-25320](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25320) | A vulnerability was discovered in Rancher 2.2.0 through the aforementioned patched versions, where cloud credentials weren't being properly validated through the Rancher API. Specifically through a proxy designed to communicate with cloud providers. Any Rancher user that was logged-in and aware of a cloud-credential ID that was valid for a given cloud provider, could call that cloud provider's API through the proxy API, and the cloud-credential would be attached. The exploit is limited to valid Rancher users. There is not a direct mitigation outside of upgrading to the patched Rancher versions. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9) and [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2021-25313](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25313) | A security vulnerability was discovered on all Rancher 2 versions. When accessing the Rancher API with a browser, the URL was not properly escaped, making it vulnerable to an XSS attack. Specially crafted URLs to these API endpoints could include JavaScript which would be embedded in the page and execute in a browser. There is no direct mitigation. Avoid clicking on untrusted links to your Rancher server. | 2 Mar 2021 | [Rancher v2.5.6](https://github.com/rancher/rancher/releases/tag/v2.5.6), [Rancher v2.4.14](https://github.com/rancher/rancher/releases/tag/v2.4.14), and [Rancher v2.3.11](https://github.com/rancher/rancher/releases/tag/v2.3.11) |
| [CVE-2019-14435](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14435) | This vulnerability allows authenticated users to potentially extract otherwise private data out of IPs reachable from system service containers used by Rancher. This can include but not only limited to services such as cloud provider metadata services. Although Rancher allow users to configure whitelisted domains for system service access, this flaw can still be exploited by a carefully crafted HTTP request. The issue was found and reported by Matt Belisle and Alex Stevenson at Workiva. | 5 Aug 2019 | [Rancher v2.2.7](https://github.com/rancher/rancher/releases/tag/v2.2.7) and [Rancher v2.1.12](https://github.com/rancher/rancher/releases/tag/v2.1.12) |
| [CVE-2019-14436](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14436) | The vulnerability allows a member of a project that has access to edit role bindings to be able to assign themselves or others a cluster level role granting them administrator access to that cluster. The issue was found and reported by Michal Lipinski at Nokia. | 5 Aug 2019 | [Rancher v2.2.7](https://github.com/rancher/rancher/releases/tag/v2.2.7) and [Rancher v2.1.12](https://github.com/rancher/rancher/releases/tag/v2.1.12) |
| [CVE-2019-13209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-13209) | The vulnerability is known as a [Cross-Site Websocket Hijacking attack](https://www.christian-schneider.net/CrossSiteWebSocketHijacking.html). This attack allows an exploiter to gain access to clusters managed by Rancher with the roles/permissions of a victim. It requires that a victim to be logged into a Rancher server and then access a third-party site hosted by the exploiter. Once that is accomplished, the exploiter is able to execute commands against the Kubernetes API with the permissions and identity of the victim. Reported by Matt Belisle and Alex Stevenson from Workiva. | 15 Jul 2019 | [Rancher v2.2.5](https://github.com/rancher/rancher/releases/tag/v2.2.5), [Rancher v2.1.11](https://github.com/rancher/rancher/releases/tag/v2.1.11) and [Rancher v2.0.16](https://github.com/rancher/rancher/releases/tag/v2.0.16) |
| [CVE-2019-12303](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12303) | Project owners can inject extra fluentd logging configurations that makes it possible to read files or execute arbitrary commands inside the fluentd container. Reported by Tyler Welton from Untamed Theory. | 5 Jun 2019 | [Rancher v2.2.4](https://github.com/rancher/rancher/releases/tag/v2.2.4), [Rancher v2.1.10](https://github.com/rancher/rancher/releases/tag/v2.1.10) and [Rancher v2.0.15](https://github.com/rancher/rancher/releases/tag/v2.0.15) |
| [CVE-2019-12274](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12274) | Nodes using the built-in node drivers using a file path option allows the machine to read arbitrary files including sensitive ones from inside the Rancher server container. | 5 Jun 2019 | [Rancher v2.2.4](https://github.com/rancher/rancher/releases/tag/v2.2.4), [Rancher v2.1.10](https://github.com/rancher/rancher/releases/tag/v2.1.10) and [Rancher v2.0.15](https://github.com/rancher/rancher/releases/tag/v2.0.15) |
| [CVE-2019-11202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11202) | The default admin, that is shipped with Rancher, will be re-created upon restart of Rancher despite being explicitly deleted. | 16 Apr 2019 | [Rancher v2.2.2](https://github.com/rancher/rancher/releases/tag/v2.2.2), [Rancher v2.1.9](https://github.com/rancher/rancher/releases/tag/v2.1.9) and [Rancher v2.0.14](https://github.com/rancher/rancher/releases/tag/v2.0.14) |
| [CVE-2019-6287](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6287) | Project members continue to get access to namespaces from projects that they were removed from if they were added to more than one project. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) |
| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute administrative privileged commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or greater have specific [instructions](../../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rollbacks.md). |
|
||||
| [CVE-2021-25318](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25318) | A vulnerability was discovered in Rancher versions 2.0 through the aforementioned fixed versions, where users were granted access to resources regardless of the resource's API group. For example, Rancher should have allowed users access to `apps.catalog.cattle.io`, but instead incorrectly gave access to `apps.*`. Resources affected in the **Downstream clusters** and **Rancher management cluster** can be found [here](https://github.com/rancher/rancher/security/advisories/GHSA-f9xf-jq4j-vqw4). There is not a direct mitigation besides upgrading to the patched Rancher versions. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9) and [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
|
||||
| [CVE-2021-31999](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31999) | A vulnerability was discovered in Rancher 2.0.0 through the aforementioned patched versions, where a malicious Rancher user could craft an API request directed at the proxy for the Kubernetes API of a managed cluster to gain access to information they do not have access to. This is done by passing the "Impersonate-User" or "Impersonate-Group" header in the Connection header, which is then correctly removed by the proxy. At this point, instead of impersonating the user and their permissions, the request will act as if it was from the Rancher management server and incorrectly return the information. The vulnerability is limited to valid Rancher users with some level of permissions on the cluster. There is not a direct mitigation besides upgrading to the patched Rancher versions. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9) and [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
|
||||
| [CVE-2021-25320](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25320) | A vulnerability was discovered in Rancher 2.2.0 through the aforementioned patched versions, where cloud credentials weren't being properly validated through the Rancher API. Specifically through a proxy designed to communicate with cloud providers. Any Rancher user that was logged-in and aware of a cloud-credential ID that was valid for a given cloud provider, could call that cloud provider's API through the proxy API, and the cloud-credential would be attached. The exploit is limited to valid Rancher users. There is not a direct mitigation outside of upgrading to the patched Rancher versions. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9) and [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
|
||||
| [CVE-2021-25313](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25313) | A security vulnerability was discovered on all Rancher 2 versions. When accessing the Rancher API with a browser, the URL was not properly escaped, making it vulnerable to an XSS attack. Specially crafted URLs to these API endpoints could include JavaScript which would be embedded in the page and execute in a browser. There is no direct mitigation. Avoid clicking on untrusted links to your Rancher server. | 2 Mar 2021 | [Rancher v2.5.6](https://github.com/rancher/rancher/releases/tag/v2.5.6), [Rancher v2.4.14](https://github.com/rancher/rancher/releases/tag/v2.4.14), and [Rancher v2.3.11](https://github.com/rancher/rancher/releases/tag/v2.3.11) |
|
||||
| [CVE-2019-14435](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14435) | This vulnerability allows authenticated users to potentially extract otherwise private data out of IPs reachable from system service containers used by Rancher. This can include but not only limited to services such as cloud provider metadata services. Although Rancher allow users to configure whitelisted domains for system service access, this flaw can still be exploited by a carefully crafted HTTP request. The issue was found and reported by Matt Belisle and Alex Stevenson at Workiva. | 5 Aug 2019 | [Rancher v2.2.7](https://github.com/rancher/rancher/releases/tag/v2.2.7) and [Rancher v2.1.12](https://github.com/rancher/rancher/releases/tag/v2.1.12) |
|
||||
| [CVE-2019-14436](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14436) | The vulnerability allows a member of a project that has access to edit role bindings to be able to assign themselves or others a cluster level role granting them administrator access to that cluster. The issue was found and reported by Michal Lipinski at Nokia. | 5 Aug 2019 | [Rancher v2.2.7](https://github.com/rancher/rancher/releases/tag/v2.2.7) and [Rancher v2.1.12](https://github.com/rancher/rancher/releases/tag/v2.1.12) |
|
||||
| [CVE-2019-13209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-13209) | The vulnerability is known as a [Cross-Site WebSocket Hijacking attack](https://www.christian-schneider.net/CrossSiteWebSocketHijacking.html). This attack allows an exploiter to gain access to clusters managed by Rancher with the roles and permissions of a victim. It requires that the victim be logged in to a Rancher server and then visit a third-party site hosted by the exploiter. Once that is accomplished, the exploiter is able to execute commands against the Kubernetes API with the permissions and identity of the victim. Reported by Matt Belisle and Alex Stevenson from Workiva. | 15 Jul 2019 | [Rancher v2.2.5](https://github.com/rancher/rancher/releases/tag/v2.2.5), [Rancher v2.1.11](https://github.com/rancher/rancher/releases/tag/v2.1.11) and [Rancher v2.0.16](https://github.com/rancher/rancher/releases/tag/v2.0.16) |
| [CVE-2019-12303](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12303) | Project owners can inject extra fluentd logging configurations that make it possible to read files or execute arbitrary commands inside the fluentd container. Reported by Tyler Welton from Untamed Theory. | 5 Jun 2019 | [Rancher v2.2.4](https://github.com/rancher/rancher/releases/tag/v2.2.4), [Rancher v2.1.10](https://github.com/rancher/rancher/releases/tag/v2.1.10) and [Rancher v2.0.15](https://github.com/rancher/rancher/releases/tag/v2.0.15) |
| [CVE-2019-12274](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12274) | Nodes provisioned with built-in node drivers that accept a file path option allow arbitrary files, including sensitive ones, to be read from inside the Rancher server container. | 5 Jun 2019 | [Rancher v2.2.4](https://github.com/rancher/rancher/releases/tag/v2.2.4), [Rancher v2.1.10](https://github.com/rancher/rancher/releases/tag/v2.1.10) and [Rancher v2.0.15](https://github.com/rancher/rancher/releases/tag/v2.0.15) |
| [CVE-2019-11202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11202) | The default admin user that ships with Rancher is re-created upon restart of Rancher, despite being explicitly deleted. | 16 Apr 2019 | [Rancher v2.2.2](https://github.com/rancher/rancher/releases/tag/v2.2.2), [Rancher v2.1.9](https://github.com/rancher/rancher/releases/tag/v2.1.9) and [Rancher v2.0.14](https://github.com/rancher/rancher/releases/tag/v2.0.14) |
| [CVE-2019-6287](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6287) | Project members retain access to namespaces from projects that they were removed from if they were added to more than one project. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) |
| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute administrative privileged commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or greater requires specific [instructions](../../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rollbacks.md). |
@@ -9,7 +9,6 @@
| [Using App Catalogs](../pages-for-subheaders/helm-charts-in-rancher.md) | ✓ | ✓ | ✓ | ✓ |
| Configuring Tools ([Alerts, Notifiers, Monitoring](../pages-for-subheaders/monitoring-and-alerting.md), [Logging](../pages-for-subheaders/logging.md), [Istio](../pages-for-subheaders/istio.md)) | ✓ | ✓ | ✓ | ✓ |
| [Running Security Scans](../pages-for-subheaders/cis-scan-guides.md) | ✓ | ✓ | ✓ | ✓ |
| [Use existing configuration to create additional clusters](../how-to-guides/new-user-guides/manage-clusters/clone-cluster-configuration.md)| ✓ | ✓ | ✓ | |
| [Ability to rotate certificates](../how-to-guides/new-user-guides/manage-clusters/rotate-certificates.md) | ✓ | ✓ | | |
| Ability to [backup](../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher-launched-kubernetes-clusters.md) and [restore](../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-rancher-launched-kubernetes-clusters-from-backup.md) Rancher-launched clusters | ✓ | ✓ | | ✓<sup>4</sup> |
| [Cleaning Kubernetes components when clusters are no longer reachable from Rancher](../how-to-guides/new-user-guides/manage-clusters/clean-cluster-nodes.md) | ✓ | | | |
@@ -407,10 +407,6 @@ module.exports = {
      to: '/how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces',
      from: '/how-to-guides/advanced-user-guides/manage-clusters/projects-and-namespaces'
    },
    {
      to: '/how-to-guides/new-user-guides/manage-clusters/clone-cluster-configuration',
      from: '/how-to-guides/advanced-user-guides/manage-clusters/clone-cluster-configuration'
    },
    {
      to: '/how-to-guides/new-user-guides/manage-clusters/rotate-certificates',
      from: '/how-to-guides/advanced-user-guides/manage-clusters/rotate-certificates'
-1
@@ -9,7 +9,6 @@
| [Using App Catalogs](../pages-for-subheaders/helm-charts-in-rancher.md) | ✓ | ✓ | ✓ | ✓ |
| Configuring Tools ([Alerts, Notifiers, Monitoring](../pages-for-subheaders/monitoring-and-alerting.md), [Logging](../pages-for-subheaders/logging.md), and [Istio](../pages-for-subheaders/istio.md)) | ✓ | ✓ | ✓ | ✓ |
| [Running Security Scans](../pages-for-subheaders/cis-scan-guides.md) | ✓ | ✓ | ✓ | ✓ |
| [Use existing configuration to create additional clusters](../how-to-guides/new-user-guides/manage-clusters/clone-cluster-configuration.md) | ✓ | ✓ | ✓ | |
| [Rotate certificates](../how-to-guides/new-user-guides/manage-clusters/rotate-certificates.md) | ✓ | ✓ | | |
| [Back up](../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher-launched-kubernetes-clusters.md) and [restore](../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-rancher-launched-kubernetes-clusters-from-backup.md) Rancher-launched clusters | ✓ | ✓ | | ✓<sup>4</sup> |
| [Clean Kubernetes components when clusters are no longer reachable from Rancher](../how-to-guides/new-user-guides/manage-clusters/clean-cluster-nodes.md) | ✓ | | | |
-1
@@ -9,7 +9,6 @@
| [Using App Catalogs](../pages-for-subheaders/helm-charts-in-rancher.md) | ✓ | ✓ | ✓ | ✓ |
| Configuring Tools ([Alerts, Notifiers, Monitoring](../pages-for-subheaders/monitoring-and-alerting.md), [Logging](../pages-for-subheaders/logging.md), and [Istio](../pages-for-subheaders/istio.md)) | ✓ | ✓ | ✓ | ✓ |
| [Running Security Scans](../pages-for-subheaders/cis-scan-guides.md) | ✓ | ✓ | ✓ | ✓ |
| [Use existing configuration to create additional clusters](../how-to-guides/new-user-guides/manage-clusters/clone-cluster-configuration.md) | ✓ | ✓ | ✓ | |
| [Rotate certificates](../how-to-guides/new-user-guides/manage-clusters/rotate-certificates.md) | ✓ | ✓ | | |
| [Back up](../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher-launched-kubernetes-clusters.md) and [restore](../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-rancher-launched-kubernetes-clusters-from-backup.md) Rancher-launched clusters | ✓ | ✓ | | ✓<sup>4</sup> |
| [Clean Kubernetes components when clusters are no longer reachable from Rancher](../how-to-guides/new-user-guides/manage-clusters/clean-cluster-nodes.md) | ✓ | | | |
@@ -386,8 +386,6 @@ const sidebars = {
|
||||
},
|
||||
"how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces",
|
||||
|
||||
"how-to-guides/new-user-guides/manage-clusters/clone-cluster-configuration",
|
||||
|
||||
"how-to-guides/new-user-guides/manage-clusters/rotate-certificates",
|
||||
|
||||
"how-to-guides/new-user-guides/manage-clusters/rotate-encryption-key",
|
||||
|
||||
+2
-2
@@ -19,9 +19,9 @@ kubectl -n cattle-system create secret tls tls-rancher-ingress \

### Using a Private CA Signed Certificate

-If you are using a private CA, Rancher requires a copy of the CA certificate which is used by the Rancher Agent to validate the connection to the server.
+If you are using a private CA, Rancher requires a copy of the private CA's root certificate or certificate chain, which the Rancher Agent uses to validate the connection to the server.

-Copy the CA certificate into a file named `cacerts.pem` and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.
+Create a file named `cacerts.pem` that only contains the root CA certificate or certificate chain from your private CA, and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.

>**Important:** Make sure the file is called `cacerts.pem` as Rancher uses that filename to configure the CA certificate.
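The hunk above describes the procedure in prose; a minimal sketch of the commands it leads up to follows. The `kubectl -n cattle-system create secret generic tls-ca` invocation appears elsewhere in this changeset; the input file name `root-ca.pem` is a hypothetical placeholder, and this is an ops fragment that assumes a reachable cluster.

```shell
# Assemble cacerts.pem from the private CA's root certificate (append
# intermediates after the root if you need the full chain).
cat root-ca.pem > cacerts.pem

# Store it as the tls-ca secret that Rancher reads from cattle-system.
kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem=./cacerts.pem
```

The key name inside the secret must be `cacerts.pem`, matching the Important note above.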
+2
-2
@@ -21,9 +21,9 @@ kubectl -n cattle-system create secret tls tls-rancher-ingress \

## Using a Private CA Signed Certificate

-If you are using a private CA, Rancher requires a copy of the CA certificate which is used by the Rancher Agent to validate the connection to the server.
+If you are using a private CA, Rancher requires a copy of the private CA's root certificate or certificate chain, which the Rancher Agent uses to validate the connection to the server.

-Copy the CA certificate into a file named `cacerts.pem` and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.
+Create a file named `cacerts.pem` that only contains the root CA certificate or certificate chain from your private CA, and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.

```
kubectl -n cattle-system create secret generic tls-ca \
+1
-1
@@ -20,7 +20,7 @@ The following steps will quickly deploy a Rancher Server on AWS with a single no

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

-1. Go into the AWS folder containing the terraform files by executing `cd quickstart/aws`.
+1. Go into the AWS folder containing the Terraform files by executing `cd quickstart/aws`.

1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
+1
-1
@@ -23,7 +23,7 @@ The following steps will quickly deploy a Rancher server on Azure in a single-no

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

-1. Go into the Azure folder containing the terraform files by executing `cd quickstart/azure`.
+1. Go into the Azure folder containing the Terraform files by executing `cd quickstart/azure`.

1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
+1
-1
@@ -20,7 +20,7 @@ The following steps will quickly deploy a Rancher Server on DigitalOcean with a

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

-1. Go into the DigitalOcean folder containing the terraform files by executing `cd quickstart/do`.
+1. Go into the DigitalOcean folder containing the Terraform files by executing `cd quickstart/do`.

1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
+1
-1
@@ -21,7 +21,7 @@ The following steps will quickly deploy a Rancher server on GCP in a single-node

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

-1. Go into the GCP folder containing the terraform files by executing `cd quickstart/gcp`.
+1. Go into the GCP folder containing the Terraform files by executing `cd quickstart/gcp`.

1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
+2
-2
@@ -21,9 +21,9 @@ kubectl -n cattle-system create secret tls tls-rancher-ingress \

## Using a Private CA Signed Certificate

-If you are using a private CA, Rancher requires a copy of the CA certificate which is used by the Rancher Agent to validate the connection to the server.
+If you are using a private CA, Rancher requires a copy of the private CA's root certificate or certificate chain, which the Rancher Agent uses to validate the connection to the server.

-Copy the CA certificate into a file named `cacerts.pem` and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.
+Create a file named `cacerts.pem` that only contains the root CA certificate or certificate chain from your private CA, and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.

```
kubectl -n cattle-system create secret generic tls-ca \
+4
-4
@@ -37,7 +37,7 @@ The AWS module just creates an EC2 KeyPair, an EC2 SecurityGroup and an EC2 inst

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

-2. Go into the AWS folder containing the terraform files by executing `cd quickstart/aws`.
+2. Go into the AWS folder containing the Terraform files by executing `cd quickstart/rancher/aws`.

3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.

@@ -47,7 +47,7 @@ The AWS module just creates an EC2 KeyPair, an EC2 SecurityGroup and an EC2 inst
- `rancher_server_admin_password` - Admin password for created Rancher server

5. **Optional:** Modify optional variables within `terraform.tfvars`.
-See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [AWS Quickstart Readme](https://github.com/rancher/quickstart/tree/master/aws) for more information.
+See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [AWS Quickstart Readme](https://github.com/rancher/quickstart/tree/master/rancher/aws) for more information.
Suggestions include:
- `aws_region` - Amazon AWS region, choose the closest instead of the default (`us-east-1`)
- `prefix` - Prefix for all created resources

@@ -69,7 +69,7 @@ Suggestions include:
```

8. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`).
-9. ssh to the Rancher server using the `id_rsa` key generated in `quickstart/aws`.
+9. ssh to the Rancher server using the `id_rsa` key generated in `quickstart/rancher/aws`.

#### Result

@@ -81,6 +81,6 @@ Use Rancher to create a deployment. For more information, see [Creating Deployme

## Destroying the Environment

-1. From the `quickstart/aws` folder, execute `terraform destroy --auto-approve`.
+1. From the `quickstart/rancher/aws` folder, execute `terraform destroy --auto-approve`.

2. Wait for confirmation that all resources have been destroyed.
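Taken together, the AWS quickstart steps edited above reduce to a short shell sequence. This is a sketch under the hunk's new paths: `terraform init` and `terraform apply` are standard Terraform commands not spelled out in the excerpt, editing `terraform.tfvars` (including `rancher_server_admin_password`) is left to the reader, and running it for real requires AWS credentials.

```shell
# Clone the quickstart and switch to the AWS example (new path per the diff)
git clone https://github.com/rancher/quickstart
cd quickstart/rancher/aws

# Provide required variables such as rancher_server_admin_password
mv terraform.tfvars.example terraform.tfvars
# ...edit terraform.tfvars before continuing...

terraform init                  # download the required providers
terraform apply -auto-approve   # create the EC2 instance and Rancher server

# Later, to tear everything down again:
terraform destroy --auto-approve
```

The same shape applies to the Azure, DigitalOcean, and GCP variants below, with only the folder and region variables changing.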
+4
-4
@@ -23,7 +23,7 @@ The following steps will quickly deploy a Rancher server on Azure in a single-no

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

-2. Go into the Azure folder containing the terraform files by executing `cd quickstart/azure`.
+2. Go into the Azure folder containing the Terraform files by executing `cd quickstart/rancher/azure`.

3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.

@@ -35,7 +35,7 @@ The following steps will quickly deploy a Rancher server on Azure in a single-no
- `rancher_server_admin_password` - Admin password for created Rancher server

5. **Optional:** Modify optional variables within `terraform.tfvars`.
-See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [Azure Quickstart Readme](https://github.com/rancher/quickstart/tree/master/azure) for more information.
+See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [Azure Quickstart Readme](https://github.com/rancher/quickstart/tree/master/rancher/azure) for more information.
Suggestions include:
- `azure_location` - Microsoft Azure region, choose the closest instead of the default (`East US`)
- `prefix` - Prefix for all created resources

@@ -58,7 +58,7 @@ Suggestions include:
```

8. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`).
-9. ssh to the Rancher Server using the `id_rsa` key generated in `quickstart/azure`.
+9. ssh to the Rancher Server using the `id_rsa` key generated in `quickstart/rancher/azure`.

#### Result

@@ -70,6 +70,6 @@ Use Rancher to create a deployment. For more information, see [Creating Deployme

## Destroying the Environment

-1. From the `quickstart/azure` folder, execute `terraform destroy --auto-approve`.
+1. From the `quickstart/rancher/azure` folder, execute `terraform destroy --auto-approve`.

2. Wait for confirmation that all resources have been destroyed.
+4
-4
@@ -20,7 +20,7 @@ The following steps will quickly deploy a Rancher server on DigitalOcean in a si

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

-2. Go into the DigitalOcean folder containing the terraform files by executing `cd quickstart/do`.
+2. Go into the DigitalOcean folder containing the Terraform files by executing `cd quickstart/rancher/do`.

3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.

@@ -29,7 +29,7 @@ The following steps will quickly deploy a Rancher server on DigitalOcean in a si
- `rancher_server_admin_password` - Admin password for created Rancher server

5. **Optional:** Modify optional variables within `terraform.tfvars`.
-See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [DO Quickstart Readme](https://github.com/rancher/quickstart/tree/master/do) for more information.
+See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [DO Quickstart Readme](https://github.com/rancher/quickstart/tree/master/rancher/do) for more information.
Suggestions include:
- `do_region` - DigitalOcean region, choose the closest instead of the default (`nyc1`)
- `prefix` - Prefix for all created resources

@@ -50,7 +50,7 @@ Suggestions include:
```

8. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`).
-9. ssh to the Rancher Server using the `id_rsa` key generated in `quickstart/do`.
+9. ssh to the Rancher Server using the `id_rsa` key generated in `quickstart/rancher/do`.

#### Result

@@ -62,6 +62,6 @@ Use Rancher to create a deployment. For more information, see [Creating Deployme

## Destroying the Environment

-1. From the `quickstart/do` folder, execute `terraform destroy --auto-approve`.
+1. From the `quickstart/rancher/do` folder, execute `terraform destroy --auto-approve`.

2. Wait for confirmation that all resources have been destroyed.
+4
-4
@@ -21,7 +21,7 @@ The following steps will quickly deploy a Rancher server on GCP in a single-node

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

-2. Go into the GCP folder containing the terraform files by executing `cd quickstart/gcp`.
+2. Go into the GCP folder containing the Terraform files by executing `cd quickstart/rancher/gcp`.

3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.

@@ -30,7 +30,7 @@ The following steps will quickly deploy a Rancher server on GCP in a single-node
- `rancher_server_admin_password` - Admin password for created Rancher server

5. **Optional:** Modify optional variables within `terraform.tfvars`.
-See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [GCP Quickstart Readme](https://github.com/rancher/quickstart/tree/master/gcp) for more information.
+See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [GCP Quickstart Readme](https://github.com/rancher/quickstart/tree/master/rancher/gcp) for more information.
Suggestions include:
- `gcp_region` - Google GCP region, choose the closest instead of the default (`us-east4`)
- `gcp_zone` - Google GCP zone, choose the closest instead of the default (`us-east4-a`)

@@ -52,7 +52,7 @@ Suggestions include:
```

8. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`).
-9. ssh to the Rancher Server using the `id_rsa` key generated in `quickstart/gcp`.
+9. ssh to the Rancher Server using the `id_rsa` key generated in `quickstart/rancher/gcp`.

#### Result

@@ -64,6 +64,6 @@ Use Rancher to create a deployment. For more information, see [Creating Deployme

## Destroying the Environment

-1. From the `quickstart/gcp` folder, execute `terraform destroy --auto-approve`.
+1. From the `quickstart/rancher/gcp` folder, execute `terraform destroy --auto-approve`.

2. Wait for confirmation that all resources have been destroyed.
+2
-2
@@ -23,7 +23,7 @@ The following steps quickly deploy a Rancher Server with a single node cluster a

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

-2. Go into the folder containing the Vagrantfile by executing `cd quickstart/vagrant`.
+2. Go into the folder containing the Vagrantfile by executing `cd quickstart/rancher/vagrant`.

3. **Optional:** Edit `config.yaml` to:

@@ -42,6 +42,6 @@ Use Rancher to create a deployment. For more information, see [Creating Deployme

## Destroying the Environment

-1. From the `quickstart/vagrant` folder execute `vagrant destroy -f`.
+1. From the `quickstart/rancher/vagrant` folder execute `vagrant destroy -f`.

2. Wait for the confirmation that all resources have been destroyed.
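The Vagrant variant above follows the same shape as the Terraform quickstarts. A sketch of the full flow: `vagrant up` is the standard provisioning command implied between the `cd` step and the teardown, but it is not shown in this excerpt, and running it requires Vagrant plus a local hypervisor.

```shell
git clone https://github.com/rancher/quickstart
cd quickstart/rancher/vagrant    # new path per the diff

# Optionally edit config.yaml first, then provision:
vagrant up

# Later, to tear down:
vagrant destroy -f
```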
+17
-12
@@ -6,6 +6,11 @@ Rancher is committed to informing the community of security issues in our produc
|
||||
|
||||
| ID | Description | Date | Resolution |
|
||||
|----|-------------|------|------------|
|
||||
| [CVE-2022-43759](https://github.com/rancher/rancher/security/advisories/GHSA-7m72-mh5r-6j3r) | An issue was discovered in Rancher versions from 2.5.0 up to and including 2.5.16 and from 2.6.0 up to and including 2.6.9, where an authorization logic flaw allows privilege escalation via project role template binding (PRTB) and `-promoted` roles. | 24 January 2023 | Rancher [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) and [v2.5.17](https://github.com/rancher/rancher/releases/tag/v2.5.17) |
|
||||
| [CVE-2022-43758](https://github.com/rancher/rancher/security/advisories/GHSA-34p5-jp77-fcrc) | An issue was discovered in Rancher from versions 2.5.0 up to and including 2.5.16, 2.6.0 up to and including 2.6.9 and 2.7.0, where a command injection vulnerability is present in the Rancher Git package. This package uses the underlying Git binary available in the Rancher container image to execute Git operations. Specially crafted commands, when not properly disambiguated, can cause confusion when executed through Git, resulting in command injection in the underlying Rancher host. | 24 January 2023 | Rancher [v2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1), [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) and [v2.5.17](https://github.com/rancher/rancher/releases/tag/v2.5.17) |
|
||||
| [CVE-2022-43757](https://github.com/rancher/rancher/security/advisories/GHSA-cq4p-vp5q-4522) | This issue affects Rancher versions from 2.5.0 up to and including 2.5.16, from 2.6.0 up to and including 2.6.9 and 2.7.0. It was discovered that the security advisory [CVE-2021-36782](https://github.com/advisories/GHSA-g7j7-h4q8-8w2f), previously released by Rancher, missed addressing some sensitive fields, secret tokens, encryption keys, and SSH keys that were still being stored in plaintext directly on Kubernetes objects like `Clusters`. The exposed credentials are visible in Rancher to authenticated `Cluster Owners`, `Cluster Members`, `Project Owners` and `Project Members` of that cluster. | 24 January 2023 | Rancher [v2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1), [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) and [v2.5.17](https://github.com/rancher/rancher/releases/tag/v2.5.17) |
|
||||
| [CVE-2022-21953](https://github.com/rancher/rancher/security/advisories/GHSA-g25r-gvq3-wrq7) | An issue was discovered in Rancher versions up to and including 2.5.16, 2.6.9 and 2.7.0, where an authorization logic flaw allows an authenticated user on any downstream cluster to (1) open a shell pod in the Rancher `local` cluster and (2) have limited kubectl access to it. The expected behavior is that a user does not have such access in the Rancher `local` cluster unless explicitly granted. | 24 January 2023 | Rancher [v2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1), [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) and [v2.5.17](https://github.com/rancher/rancher/releases/tag/v2.5.17) |
|
||||
| [GHSA-c45c-39f6-6gw9](https://github.com/rancher/rancher/security/advisories/GHSA-c45c-39f6-6gw9) | This issue affects Rancher versions from 2.5.0 up to and including 2.5.16, from 2.6.0 up to and including 2.6.9 and 2.7.0. It only affects Rancher setups that have an external authentication provider configured or had one configured in the past. It was discovered that when an external authentication provider is configured in Rancher and then disabled, the Rancher generated tokens associated with users who had access granted through the now disabled auth provider are not revoked. | 24 January 2023 | Rancher [v2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1), [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) and [v2.5.17](https://github.com/rancher/rancher/releases/tag/v2.5.17) |
|
||||
| [CVE-2022-31247](https://github.com/rancher/rancher/security/advisories/GHSA-6x34-89p7-95wg) | An issue was discovered in Rancher versions up to and including 2.5.15 and 2.6.6 where a flaw with authorization logic allows privilege escalation in downstream clusters through cluster role template binding (CRTB) and project role template binding (PRTB). The vulnerability can be exploited by any user who has permissions to create/edit CRTB or PRTB (such as `cluster-owner`, `manage cluster members`, `project-owner`, and `manage project members`) to gain owner permission in another project in the same cluster or in another project on a different downstream cluster. | 18 August 2022 | [Rancher v2.6.7](https://github.com/rancher/rancher/releases/tag/v2.6.7) and [Rancher v2.5.16](https://github.com/rancher/rancher/releases/tag/v2.5.16) |
|
||||
| [CVE-2021-36783](https://github.com/rancher/rancher/security/advisories/GHSA-8w87-58w6-hfv8) | It was discovered that in Rancher versions up to and including 2.5.12 and 2.6.3, there is a failure to properly sanitize credentials in cluster template answers. This failure can lead to plaintext storage and exposure of credentials, passwords, and API tokens. The exposed credentials are visible in Rancher to authenticated `Cluster Owners`, `Cluster Members`, `Project Owners`, and `Project Members` on the endpoints `/v1/management.cattle.io.clusters`, `/v3/clusters`, and `/k8s/clusters/local/apis/management.cattle.io/v3/clusters`. | 18 August 2022 | [Rancher v2.6.7](https://github.com/rancher/rancher/releases/tag/v2.6.7) and [Rancher v2.5.16](https://github.com/rancher/rancher/releases/tag/v2.5.16) |
|
||||
| [CVE-2021-36782](https://github.com/rancher/rancher/security/advisories/GHSA-g7j7-h4q8-8w2f) | An issue was discovered in Rancher versions up to and including 2.5.15 and 2.6.6 where sensitive fields like passwords, API keys, and Rancher's service account token (used to provision clusters) were stored in plaintext directly on Kubernetes objects like `Clusters` (e.g., `cluster.management.cattle.io`). Anyone with read access to those objects in the Kubernetes API could retrieve the plaintext version of those sensitive data. The issue was partially found and reported by Florian Struck (from [Continum AG](https://www.continum.net/)) and [Marco Stuurman](https://github.com/fe-ax) (from [Shock Media B.V.](https://www.shockmedia.nl/)). | 18 August 2022 | [Rancher v2.6.7](https://github.com/rancher/rancher/releases/tag/v2.6.7) and [Rancher v2.5.16](https://github.com/rancher/rancher/releases/tag/v2.5.16) |
|
||||
@@ -17,15 +22,15 @@ Rancher is committed to informing the community of security issues in our produc
|
||||
| [GHSA-hwm2-4ph6-w6m5](https://github.com/rancher/rancher/security/advisories/GHSA-hwm2-4ph6-w6m5) | A vulnerability was discovered in versions of Rancher starting 2.0 up to and including 2.6.3. The `restricted` pod security policy (PSP) provided in Rancher deviated from the upstream `restricted` policy provided in Kubernetes on account of which Rancher's PSP had `runAsUser` set to `runAsAny`, while upstream had `runAsUser` set to `MustRunAsNonRoot`. This allowed containers to run as any user, including a privileged user (`root`), even when Rancher's `restricted` policy was enforced on a project or at the cluster level. | 31 Mar 2022 | [Rancher v2.6.4](https://github.com/rancher/rancher/releases/tag/v2.6.4) |
|
||||
| [CVE-2021-36775](https://github.com/rancher/rancher/security/advisories/GHSA-28g7-896h-695v) | A vulnerability was discovered in Rancher versions up to and including 2.4.17, 2.5.11 and 2.6.2. After removing a `Project Role` associated with a group from the project, the bindings that granted access to cluster-scoped resources for those subjects were not deleted. This was due to an incomplete authorization logic check. A user who was a member of the affected group with authenticated access to Rancher could exploit this vulnerability to access resources they shouldn't have had access to. The exposure level would depend on the original permission level granted to the affected project role. This vulnerability only affected customers using group based authentication in Rancher. | 31 Mar 2022 | [Rancher v2.6.3](https://github.com/rancher/rancher/releases/tag/v2.6.3), [Rancher v2.5.12](https://github.com/rancher/rancher/releases/tag/v2.5.12) and [Rancher v2.4.18](https://github.com/rancher/rancher/releases/tag/v2.4.18) |
|
||||
| [CVE-2021-36776](https://github.com/rancher/rancher/security/advisories/GHSA-gvh9-xgrq-r8hw) | A vulnerability was discovered in Rancher versions starting 2.5.0 up to and including 2.5.9, that allowed an authenticated user to impersonate any user on a cluster through an API proxy, without requiring knowledge of the impersonated user's credentials. This was due to the API proxy not dropping the impersonation header before sending the request to the Kubernetes API. A malicious user with authenticated access to Rancher could use this to impersonate another user with administrator access in Rancher, thereby gaining administrator level access to the cluster. | 31 Mar 2022 | [Rancher v2.6.0](https://github.com/rancher/rancher/releases/tag/v2.6.0) and [Rancher v2.5.10](https://github.com/rancher/rancher/releases/tag/v2.5.10) |
|
||||
| [CVE-2021-25318](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25318) | A vulnerability was discovered in Rancher versions 2.0 through the aforementioned fixed versions, where users were granted access to resources regardless of the resource's API group. For example, Rancher should have allowed users access to `apps.catalog.cattle.io`, but instead incorrectly gave access to `apps.*`. Resources affected in the **Downstream clusters** and **Rancher management cluster** can be found [here](https://github.com/rancher/rancher/security/advisories/GHSA-f9xf-jq4j-vqw4). There is not a direct mitigation besides upgrading to the patched Rancher versions. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9) and [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
|
||||
| [CVE-2021-31999](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31999) | A vulnerability was discovered in Rancher 2.0.0 through the aforementioned patched versions, where a malicious Rancher user could craft an API request directed at the proxy for the Kubernetes API of a managed cluster to gain access to information they do not have access to. This is done by passing the "Impersonate-User" or "Impersonate-Group" header in the Connection header, which is then correctly removed by the proxy. At this point, instead of impersonating the user and their permissions, the request will act as if it was from the Rancher management server and incorrectly return the information. The vulnerability is limited to valid Rancher users with some level of permissions on the cluster. There is not a direct mitigation besides upgrading to the patched Rancher versions. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9) and [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2021-25320](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25320) | A vulnerability was discovered in Rancher 2.2.0 through the aforementioned patched versions, where cloud credentials weren't being properly validated through the Rancher API, specifically through a proxy designed to communicate with cloud providers. Any Rancher user who was logged in and aware of a cloud-credential ID that was valid for a given cloud provider could call that cloud provider's API through the proxy API, and the cloud credential would be attached. The exploit is limited to valid Rancher users. There is not a direct mitigation outside of upgrading to the patched Rancher versions. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9) and [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2021-25313](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25313) | A security vulnerability was discovered in all Rancher 2 versions. When accessing the Rancher API with a browser, the URL was not properly escaped, making it vulnerable to an XSS attack. Specially crafted URLs to these API endpoints could include JavaScript which would be embedded in the page and execute in a browser. There is no direct mitigation. Avoid clicking on untrusted links to your Rancher server. | 2 Mar 2021 | [Rancher v2.5.6](https://github.com/rancher/rancher/releases/tag/v2.5.6), [Rancher v2.4.14](https://github.com/rancher/rancher/releases/tag/v2.4.14), and [Rancher v2.3.11](https://github.com/rancher/rancher/releases/tag/v2.3.11) |
| [CVE-2019-14435](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14435) | This vulnerability allows authenticated users to potentially extract otherwise private data out of IPs reachable from system service containers used by Rancher. This can include, but is not limited to, services such as cloud provider metadata services. Although Rancher allows users to configure whitelisted domains for system service access, this flaw can still be exploited by a carefully crafted HTTP request. The issue was found and reported by Matt Belisle and Alex Stevenson at Workiva. | 5 Aug 2019 | [Rancher v2.2.7](https://github.com/rancher/rancher/releases/tag/v2.2.7) and [Rancher v2.1.12](https://github.com/rancher/rancher/releases/tag/v2.1.12) |
| [CVE-2019-14436](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14436) | The vulnerability allows a project member with access to edit role bindings to assign themselves or others a cluster-level role, granting them administrator access to that cluster. The issue was found and reported by Michal Lipinski at Nokia. | 5 Aug 2019 | [Rancher v2.2.7](https://github.com/rancher/rancher/releases/tag/v2.2.7) and [Rancher v2.1.12](https://github.com/rancher/rancher/releases/tag/v2.1.12) |
| [CVE-2019-13209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-13209) | The vulnerability is known as a [Cross-Site Websocket Hijacking attack](https://www.christian-schneider.net/CrossSiteWebSocketHijacking.html). This attack allows an exploiter to gain access to clusters managed by Rancher with the roles/permissions of a victim. It requires that a victim be logged into a Rancher server and then access a third-party site hosted by the exploiter. Once that is accomplished, the exploiter can execute commands against the Kubernetes API with the permissions and identity of the victim. Reported by Matt Belisle and Alex Stevenson from Workiva. | 15 Jul 2019 | [Rancher v2.2.5](https://github.com/rancher/rancher/releases/tag/v2.2.5), [Rancher v2.1.11](https://github.com/rancher/rancher/releases/tag/v2.1.11) and [Rancher v2.0.16](https://github.com/rancher/rancher/releases/tag/v2.0.16) |
| [CVE-2019-12303](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12303) | Project owners can inject extra fluentd logging configurations that make it possible to read files or execute arbitrary commands inside the fluentd container. Reported by Tyler Welton from Untamed Theory. | 5 Jun 2019 | [Rancher v2.2.4](https://github.com/rancher/rancher/releases/tag/v2.2.4), [Rancher v2.1.10](https://github.com/rancher/rancher/releases/tag/v2.1.10) and [Rancher v2.0.15](https://github.com/rancher/rancher/releases/tag/v2.0.15) |
| [CVE-2019-12274](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12274) | Nodes provisioned with the built-in node drivers using a file path option allow arbitrary files, including sensitive ones, to be read from inside the Rancher server container. | 5 Jun 2019 | [Rancher v2.2.4](https://github.com/rancher/rancher/releases/tag/v2.2.4), [Rancher v2.1.10](https://github.com/rancher/rancher/releases/tag/v2.1.10) and [Rancher v2.0.15](https://github.com/rancher/rancher/releases/tag/v2.0.15) |
| [CVE-2019-11202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11202) | The default admin user that ships with Rancher is re-created when Rancher restarts, even if it was explicitly deleted. | 16 Apr 2019 | [Rancher v2.2.2](https://github.com/rancher/rancher/releases/tag/v2.2.2), [Rancher v2.1.9](https://github.com/rancher/rancher/releases/tag/v2.1.9) and [Rancher v2.0.14](https://github.com/rancher/rancher/releases/tag/v2.0.14) |
| [CVE-2019-6287](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6287) | Project members continue to get access to namespaces from projects that they were removed from if they were added to more than one project. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) |
| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute privileged administrative commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or greater has specific [instructions](../../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rollbacks.md). |
+2
-2
@@ -25,9 +25,9 @@ If you want to replace the certificate, you can delete the `tls-rancher-ingress`
## Using a Private CA Signed Certificate

If you are using a private CA, Rancher requires a copy of the CA certificate which is used by the Rancher Agent to validate the connection to the server.
If you are using a private CA, Rancher requires a copy of the private CA's root certificate or certificate chain, which the Rancher Agent uses to validate the connection to the server.

Copy the CA certificate into a file named `cacerts.pem` and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.
Create a file named `cacerts.pem` that only contains the root CA certificate or certificate chain from your private CA, and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.

```
kubectl -n cattle-system create secret generic tls-ca \
```
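The hunk above truncates the command mid-line. As a hedged, self-contained sketch of the step it documents: the `openssl` lines below mint a throwaway CA certificate purely for illustration (in practice `cacerts.pem` is your real private CA's root certificate or chain), and the `kubectl` line, which requires access to the Rancher cluster, is shown commented.

```shell
# Illustration only: mint a throwaway CA certificate. In practice,
# cacerts.pem is your real private CA root certificate or chain.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example-private-ca" \
  -keyout ca.key -out cacerts.pem

# Create the tls-ca secret from it (commented: needs cluster access).
# The secret key name must be exactly "cacerts.pem":
# kubectl -n cattle-system create secret generic tls-ca \
#   --from-file=cacerts.pem=./cacerts.pem
```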
+1
-1
@@ -44,7 +44,7 @@ The AWS module just creates an EC2 KeyPair, an EC2 SecurityGroup and an EC2 inst
1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

2. Go into the AWS folder containing the terraform files by executing `cd quickstart/rancher/aws`.
2. Go into the AWS folder containing the Terraform files by executing `cd quickstart/rancher/aws`.

3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
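The three steps above reduce to a short shell sequence (shown here for AWS; the other providers in this guide differ only in the folder name, e.g. `azure`, `do`, `gcp`, `hcloud`, `outscale`). This sketch simulates the cloned repository layout locally so it runs without network access; in practice you would run the real `git clone` from step 1 instead.

```shell
# Simulated stand-in for `git clone https://github.com/rancher/quickstart`,
# so this sketch is self-contained and runs offline:
mkdir -p quickstart/rancher/aws
touch quickstart/rancher/aws/terraform.tfvars.example

# Steps 2-3 from the guide:
cd quickstart/rancher/aws
mv terraform.tfvars.example terraform.tfvars
ls terraform.tfvars
```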
+1
-1
@@ -30,7 +30,7 @@ Deploying to Microsoft Azure will incur charges.
1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

2. Go into the Azure folder containing the terraform files by executing `cd quickstart/rancher/azure`.
2. Go into the Azure folder containing the Terraform files by executing `cd quickstart/rancher/azure`.

3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
+1
-1
@@ -27,7 +27,7 @@ Deploying to DigitalOcean will incur charges.
1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

2. Go into the DigitalOcean folder containing the terraform files by executing `cd quickstart/rancher/do`.
2. Go into the DigitalOcean folder containing the Terraform files by executing `cd quickstart/rancher/do`.

3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
+1
-1
@@ -28,7 +28,7 @@ Deploying to Google GCP will incur charges.
1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

2. Go into the GCP folder containing the terraform files by executing `cd quickstart/rancher/gcp`.
2. Go into the GCP folder containing the Terraform files by executing `cd quickstart/rancher/gcp`.

3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
+1
-1
@@ -27,7 +27,7 @@ Deploying to Hetzner Cloud will incur charges.
1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

2. Go into the Hetzner folder containing the terraform files by executing `cd quickstart/rancher/hcloud`.
2. Go into the Hetzner folder containing the Terraform files by executing `cd quickstart/rancher/hcloud`.

3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
+1
-1
@@ -27,7 +27,7 @@ Deploying to Outscale will incur charges.
1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

2. Go into the Outscale folder containing the terraform files by executing `cd quickstart/rancher/outscale`.
2. Go into the Outscale folder containing the Terraform files by executing `cd quickstart/rancher/outscale`.

3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
+1
-1
@@ -19,7 +19,7 @@ For this workload, you'll be deploying the application Rancher Hello-World.
1. Click **Deployment**.
1. Enter a **Name** for your workload.
1. From the **Docker Image** field, enter `rancher/hello-world`. This field is case-sensitive.
1. Click **Add Port** and enter `80` in the **Private Container Port** field. Adding a port enables access to the application inside and outside of the cluster. For more information, see [Services](../../../pages-for-subheaders/workloads-and-pods.md#services).
1. Click **Add Port**, select `Cluster IP` for the `Service Type`, and enter `80` in the **Private Container Port** field. You may leave the `Name` blank or specify any name that you wish. Adding a port enables access to the application inside and outside of the cluster. For more information, see [Services](../../../pages-for-subheaders/workloads-and-pods.md#services).
1. Click **Create**.

**Result:**
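For reference, the UI steps above correspond roughly to the following manifests. This is a hedged sketch: the workload name `hello-world`, the namespace, and the labels are placeholders, not values the docs prescribe; only the image, port, and `ClusterIP` service type come from the steps themselves.

```yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world              # placeholder workload name
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels: { app: hello-world }
  template:
    metadata:
      labels: { app: hello-world }
    spec:
      containers:
      - name: hello-world
        image: rancher/hello-world   # case-sensitive, as noted above
        ports:
        - containerPort: 80          # the Private Container Port
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: default
spec:
  type: ClusterIP                    # the Service Type chosen above
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 80
```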
@@ -23,7 +23,7 @@ Following project creation, you can add users as project members so that they ca
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to add members to a project and click **Explore**.
1. Click **Cluster > Projects/Namespaces**.
1. Go to the project where you want to add members and click **⋮ > Edit Config**.
1. Go to the project where you want to add members. Next to the **Create Namespace** button above the project name, click **☰**. Select **Edit Config**.
1. In the **Members** tab, click **Add**.
1. Search for the user or group that you want to add to the project.
+4
-3
@@ -19,9 +19,10 @@ For Kubernetes v1.21 and up, the NGINX Ingress controller no longer runs in host
If you use this option, ingress routes requests for a hostname to the service or workload that you specify.

1. Enter the **Request Host** that your ingress will handle request forwarding for. For example, `www.mysite.com`.
1. Add a **Target Service**.
1. **Optional:** If you want specify a workload or service when a request is sent to a particular hostname path, add a **Path** for the target. For example, if you want requests for `www.mysite.com/contact-us` to be sent to a different service than `www.mysite.com`, enter `/contact-us` in the **Path** field. Typically, the first rule that you create does not include a path.
1. Enter the **Port** number that each target operates on.
1. Specify a path of type `Prefix`, such as `/`.
2. Add a **Target Service**.
3. **Optional:** If you want to specify a workload or service when a request is sent to a particular hostname path, add a **Path** for the target. For example, if you want requests for `www.mysite.com/contact-us` to be sent to a different service than `www.mysite.com`, enter `/contact-us` in the **Path** field. Typically, the first rule that you create does not include a path.
4. Enter the **Port** number that each target operates on.
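The rules configured in the steps above map to an Ingress resource along these lines. This is a hedged sketch: the hostname, paths, and port echo the examples in the steps, while the Ingress and service names are placeholders.

```yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mysite-ingress             # placeholder name
spec:
  rules:
  - host: www.mysite.com           # the Request Host
    http:
      paths:
      - path: /                    # pathType Prefix with path "/"
        pathType: Prefix
        backend:
          service:
            name: my-service       # the Target Service (placeholder)
            port:
              number: 80           # the port the target operates on
      - path: /contact-us          # optional additional path rule
        pathType: Prefix
        backend:
          service:
            name: contact-service  # placeholder
            port:
              number: 80
```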
### Certificates

:::note
-110
@@ -1,110 +0,0 @@
---
title: Cloning Clusters
---

If you have a cluster in Rancher that you want to use as a template for creating similar clusters, you can use Rancher CLI to clone the cluster's configuration, edit it, and then use it to quickly launch the cloned cluster.

Duplication of registered clusters is not supported.

| Cluster Type | Cloneable? |
|----------------------------------|---------------|
| [Nodes Hosted by Infrastructure Provider](../../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md) | ✓ |
| [Hosted Kubernetes Providers](../../../pages-for-subheaders/set-up-clusters-from-hosted-kubernetes-providers.md) | ✓ |
| [Custom Cluster](../../../pages-for-subheaders/use-existing-nodes.md) | ✓ |
| [Registered Cluster](../../new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md) | |

:::caution

During the process of duplicating a cluster, you will edit a config file full of cluster settings. However, we recommend editing only values explicitly listed in this document, as cluster duplication is designed for simple cluster copying, **_not_** wide scale configuration changes. Editing other values may invalidate the config file, which will lead to cluster deployment failure.

:::

## Prerequisites

Download and install [Rancher CLI](../../../pages-for-subheaders/cli-with-rancher.md). Remember to [create an API bearer token](../../../reference-guides/user-settings/api-keys.md) if necessary.

## 1. Export Cluster Config

Begin by using Rancher CLI to export the configuration for the cluster that you want to clone.

1. Open Terminal and change your directory to the location of the Rancher CLI binary, `rancher`.

1. Enter the following command to list the clusters managed by Rancher.

   ```
   ./rancher cluster ls
   ```

1. Find the cluster that you want to clone, and copy either its resource `ID` or `NAME` to your clipboard. From this point on, we'll refer to the resource `ID` or `NAME` as `<RESOURCE_ID>`, which is used as a placeholder in the next step.

1. Enter the following command to export the configuration for your cluster.

   ```
   ./rancher clusters export <RESOURCE_ID>
   ```

   **Step Result:** The YAML for a cloned cluster prints to Terminal.

1. Copy the YAML to your clipboard and paste it in a new file. Save the file as `cluster-template.yml` (or any other name, as long as it has a `.yml` extension).

## 2. Modify Cluster Config

Use your favorite text editor to modify the cluster configuration in `cluster-template.yml` for your cloned cluster.

:::note

Cluster configuration directives must be nested under the `rancher_kubernetes_engine_config` directive in `cluster.yml`. For more information, refer to the section on [the config file structure in Rancher v2.3.0+.](../../../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#config-file-structure-in-rancher)

:::

1. Open `cluster-template.yml` (or whatever you named your config) in your favorite text editor.

   :::caution

   Only edit the cluster config values explicitly called out below. Many of the values listed in this file are used to provision your cloned cluster, and editing their values may break the provisioning process.

   :::

1. As depicted in the example below, at the `<CLUSTER_NAME>` placeholder, replace your original cluster's name with a unique name (`<CLUSTER_NAME>`). If your cloned cluster has a duplicate name, the cluster will not provision successfully.

   ```yml
   Version: v3
   clusters:
     <CLUSTER_NAME>: # ENTER UNIQUE NAME
       dockerRootDir: /var/lib/docker
       enableNetworkPolicy: false
       rancherKubernetesEngineConfig:
         addonJobTimeout: 30
         authentication:
           strategy: x509
         authorization: {}
         bastionHost: {}
         cloudProvider: {}
         ignoreDockerVersion: true
   ```

1. For each `nodePools` section, replace the original nodepool name with a unique name at the `<NODEPOOL_NAME>` placeholder. If your cloned cluster has a duplicate nodepool name, the cluster will not provision successfully.

   ```yml
   nodePools:
     <NODEPOOL_NAME>:
       clusterId: do
       controlPlane: true
       etcd: true
       hostnamePrefix: mark-do
       nodeTemplateId: do
       quantity: 1
       worker: true
   ```

1. When you're done, save and close the configuration.

## 3. Launch Cloned Cluster

Move `cluster-template.yml` into the same directory as the Rancher CLI binary. Then run this command:

```
./rancher up --file cluster-template.yml
```

**Result:** Your cloned cluster begins provisioning. Enter `./rancher cluster ls` to confirm.
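The export-edit-launch flow above can be scripted end to end. This is a hedged sketch: the exported YAML is stubbed out locally (in reality it comes from `./rancher clusters export`), the cluster names are placeholders, the rename of step 2 is done non-interactively with `sed`, and the actual `./rancher` invocations are shown commented since they need a live Rancher server.

```shell
# Stub of an exported config; in reality this comes from:
#   ./rancher clusters export <RESOURCE_ID> > cluster-template.yml
cat > cluster-template.yml <<'EOF'
Version: v3
clusters:
  original-cluster: # ENTER UNIQUE NAME
    dockerRootDir: /var/lib/docker
EOF

# Step 2: give the clone a unique cluster name.
sed -i 's/^  original-cluster:/  original-cluster-clone:/' cluster-template.yml
grep 'original-cluster-clone' cluster-template.yml

# Step 3 would then launch it (commented: needs a live Rancher server):
# ./rancher up --file cluster-template.yml
```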
+18
-12
@@ -6,6 +6,12 @@ Rancher is committed to informing the community of security issues in our produc
| ID | Description | Date | Resolution |
|----|-------------|------|------------|
| [CVE-2022-43759](https://github.com/rancher/rancher/security/advisories/GHSA-7m72-mh5r-6j3r) | An issue was discovered in Rancher versions from 2.5.0 up to and including 2.5.16 and from 2.6.0 up to and including 2.6.9, where an authorization logic flaw allows privilege escalation via project role template binding (PRTB) and `-promoted` roles. | 24 January 2023 | Rancher [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) and [v2.5.17]( https://github.com/rancher/rancher/releases/tag/v2.5.17) |
| [CVE-2022-43758](https://github.com/rancher/rancher/security/advisories/GHSA-34p5-jp77-fcrc) | An issue was discovered in Rancher from versions 2.5.0 up to and including 2.5.16, 2.6.0 up to and including 2.6.9 and 2.7.0, where a command injection vulnerability is present in the Rancher Git package. This package uses the underlying Git binary available in the Rancher container image to execute Git operations. Specially crafted commands, when not properly disambiguated, can cause confusion when executed through Git, resulting in command injection in the underlying Rancher host. | 24 January 2023 | Rancher [v2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1), [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) and [v2.5.17](https://github.com/rancher/rancher/releases/tag/v2.5.17) |
| [CVE-2022-43757](https://github.com/rancher/rancher/security/advisories/GHSA-cq4p-vp5q-4522) | This issue affects Rancher versions from 2.5.0 up to and including 2.5.16, from 2.6.0 up to and including 2.6.9 and 2.7.0. It was discovered that the security advisory [CVE-2021-36782](https://github.com/advisories/GHSA-g7j7-h4q8-8w2f), previously released by Rancher, missed addressing some sensitive fields, secret tokens, encryption keys, and SSH keys that were still being stored in plaintext directly on Kubernetes objects like `Clusters`. The exposed credentials are visible in Rancher to authenticated `Cluster Owners`, `Cluster Members`, `Project Owners` and `Project Members` of that cluster. | 24 January 2023 | Rancher [v2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1), [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) and [v2.5.17](https://github.com/rancher/rancher/releases/tag/v2.5.17) |
| [CVE-2022-43755](https://github.com/rancher/rancher/security/advisories/GHSA-8c69-r38j-rpfj) | An issue was discovered in Rancher versions up to and including 2.6.9 and 2.7.0, where the `cattle-token` secret, used by the `cattle-cluster-agent`, is predictable. Even after the token is regenerated, it will have the same value. This can pose a serious problem if the token is compromised and needs to be recreated for security purposes. The `cattle-token` is used by Rancher's `cattle-cluster-agent` to connect to the Kubernetes API of Rancher provisioned downstream clusters. | 24 January 2023 | Rancher [v2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1) and [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) |
| [CVE-2022-21953](https://github.com/rancher/rancher/security/advisories/GHSA-g25r-gvq3-wrq7) | An issue was discovered in Rancher versions up to and including 2.5.16, 2.6.9 and 2.7.0, where an authorization logic flaw allows an authenticated user on any downstream cluster to (1) open a shell pod in the Rancher `local` cluster and (2) have limited kubectl access to it. The expected behavior is that a user does not have such access in the Rancher `local` cluster unless explicitly granted. | 24 January 2023 | Rancher [v2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1), [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) and [v2.5.17](https://github.com/rancher/rancher/releases/tag/v2.5.17) |
| [GHSA-c45c-39f6-6gw9](https://github.com/rancher/rancher/security/advisories/GHSA-c45c-39f6-6gw9) | This issue affects Rancher versions from 2.5.0 up to and including 2.5.16, from 2.6.0 up to and including 2.6.9 and 2.7.0. It only affects Rancher setups that have an external authentication provider configured or had one configured in the past. It was discovered that when an external authentication provider is configured in Rancher and then disabled, the Rancher generated tokens associated with users who had access granted through the now disabled auth provider are not revoked. | 24 January 2023 | Rancher [v2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1), [v2.6.10](https://github.com/rancher/rancher/releases/tag/v2.6.10) and [v2.5.17](https://github.com/rancher/rancher/releases/tag/v2.5.17) |
| [CVE-2022-31247](https://github.com/rancher/rancher/security/advisories/GHSA-6x34-89p7-95wg) | An issue was discovered in Rancher versions up to and including 2.5.15 and 2.6.6 where a flaw with authorization logic allows privilege escalation in downstream clusters through cluster role template binding (CRTB) and project role template binding (PRTB). The vulnerability can be exploited by any user who has permissions to create/edit CRTB or PRTB (such as `cluster-owner`, `manage cluster members`, `project-owner`, and `manage project members`) to gain owner permission in another project in the same cluster or in another project on a different downstream cluster. | 18 August 2022 | [Rancher v2.6.7](https://github.com/rancher/rancher/releases/tag/v2.6.7) and [Rancher v2.5.16](https://github.com/rancher/rancher/releases/tag/v2.5.16) |
| [CVE-2021-36783](https://github.com/rancher/rancher/security/advisories/GHSA-8w87-58w6-hfv8) | It was discovered that in Rancher versions up to and including 2.5.12 and 2.6.3, there is a failure to properly sanitize credentials in cluster template answers. This failure can lead to plaintext storage and exposure of credentials, passwords, and API tokens. The exposed credentials are visible in Rancher to authenticated `Cluster Owners`, `Cluster Members`, `Project Owners`, and `Project Members` on the endpoints `/v1/management.cattle.io.clusters`, `/v3/clusters`, and `/k8s/clusters/local/apis/management.cattle.io/v3/clusters`. | 18 August 2022 | [Rancher v2.6.7](https://github.com/rancher/rancher/releases/tag/v2.6.7) and [Rancher v2.5.16](https://github.com/rancher/rancher/releases/tag/v2.5.16) |
| [CVE-2021-36782](https://github.com/rancher/rancher/security/advisories/GHSA-g7j7-h4q8-8w2f) | An issue was discovered in Rancher versions up to and including 2.5.15 and 2.6.6 where sensitive fields like passwords, API keys, and Rancher's service account token (used to provision clusters) were stored in plaintext directly on Kubernetes objects like `Clusters` (e.g., `cluster.management.cattle.io`). Anyone with read access to those objects in the Kubernetes API could retrieve the plaintext version of that sensitive data. The issue was partially found and reported by Florian Struck (from [Continum AG](https://www.continum.net/)) and [Marco Stuurman](https://github.com/fe-ax) (from [Shock Media B.V.](https://www.shockmedia.nl/)). | 18 August 2022 | [Rancher v2.6.7](https://github.com/rancher/rancher/releases/tag/v2.6.7) and [Rancher v2.5.16](https://github.com/rancher/rancher/releases/tag/v2.5.16) |
| [GHSA-hwm2-4ph6-w6m5](https://github.com/rancher/rancher/security/advisories/GHSA-hwm2-4ph6-w6m5) | A vulnerability was discovered in versions of Rancher starting 2.0 up to and including 2.6.3. The `restricted` pod security policy (PSP) provided in Rancher deviated from the upstream `restricted` policy provided in Kubernetes: Rancher's PSP had `runAsUser` set to `RunAsAny`, while upstream had `runAsUser` set to `MustRunAsNonRoot`. This allowed containers to run as any user, including a privileged user (`root`), even when Rancher's `restricted` policy was enforced on a project or at the cluster level. | 31 Mar 2022 | [Rancher v2.6.4](https://github.com/rancher/rancher/releases/tag/v2.6.4) |
| [CVE-2021-36775](https://github.com/rancher/rancher/security/advisories/GHSA-28g7-896h-695v) | A vulnerability was discovered in Rancher versions up to and including 2.4.17, 2.5.11 and 2.6.2. After removing a `Project Role` associated with a group from the project, the bindings that granted access to cluster-scoped resources for those subjects were not deleted. This was due to an incomplete authorization logic check. A user who was a member of the affected group with authenticated access to Rancher could exploit this vulnerability to access resources they shouldn't have had access to. The exposure level would depend on the original permission level granted to the affected project role. This vulnerability only affected customers using group based authentication in Rancher. | 31 Mar 2022 | [Rancher v2.6.3](https://github.com/rancher/rancher/releases/tag/v2.6.3), [Rancher v2.5.12](https://github.com/rancher/rancher/releases/tag/v2.5.12) and [Rancher v2.4.18](https://github.com/rancher/rancher/releases/tag/v2.4.18) |
| [CVE-2021-36776](https://github.com/rancher/rancher/security/advisories/GHSA-gvh9-xgrq-r8hw) | A vulnerability was discovered in Rancher versions starting 2.5.0 up to and including 2.5.9, that allowed an authenticated user to impersonate any user on a cluster through an API proxy, without requiring knowledge of the impersonated user's credentials. This was due to the API proxy not dropping the impersonation header before sending the request to the Kubernetes API. A malicious user with authenticated access to Rancher could use this to impersonate another user with administrator access in Rancher, thereby gaining administrator level access to the cluster. | 31 Mar 2022 | [Rancher v2.6.0](https://github.com/rancher/rancher/releases/tag/v2.6.0) and [Rancher v2.5.10](https://github.com/rancher/rancher/releases/tag/v2.5.10) |
| [CVE-2021-25318](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25318) | A vulnerability was discovered in Rancher versions 2.0 through the aforementioned fixed versions, where users were granted access to resources regardless of the resource's API group. For example, Rancher should have allowed users access to `apps.catalog.cattle.io`, but instead incorrectly gave access to `apps.*`. Resources affected in the **Downstream clusters** and **Rancher management cluster** can be found [here](https://github.com/rancher/rancher/security/advisories/GHSA-f9xf-jq4j-vqw4). There is not a direct mitigation besides upgrading to the patched Rancher versions. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9) and [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2021-31999](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31999) | A vulnerability was discovered in Rancher 2.0.0 through the aforementioned patched versions, where a malicious Rancher user could craft an API request directed at the proxy for the Kubernetes API of a managed cluster to gain access to information they do not have access to. This is done by passing the "Impersonate-User" or "Impersonate-Group" header in the Connection header, which is then correctly removed by the proxy. At this point, instead of impersonating the user and their permissions, the request will act as if it was from the Rancher management server and incorrectly return the information. The vulnerability is limited to valid Rancher users with some level of permissions on the cluster. There is not a direct mitigation besides upgrading to the patched Rancher versions. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9) and [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2021-25320](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25320) | A vulnerability was discovered in Rancher 2.2.0 through the aforementioned patched versions, where cloud credentials weren't being properly validated through the Rancher API, specifically through a proxy designed to communicate with cloud providers. Any Rancher user who was logged in and aware of a cloud-credential ID valid for a given cloud provider could call that cloud provider's API through the proxy API, and the cloud credential would be attached. The exploit is limited to valid Rancher users. There is not a direct mitigation outside of upgrading to the patched Rancher versions. | 14 Jul 2021 | [Rancher v2.5.9](https://github.com/rancher/rancher/releases/tag/v2.5.9) and [Rancher v2.4.16](https://github.com/rancher/rancher/releases/tag/v2.4.16) |
| [CVE-2021-25313](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-25313) | A security vulnerability was discovered on all Rancher 2 versions. When accessing the Rancher API with a browser, the URL was not properly escaped, making it vulnerable to an XSS attack. Specially crafted URLs to these API endpoints could include JavaScript which would be embedded in the page and execute in a browser. There is no direct mitigation. Avoid clicking on untrusted links to your Rancher server. | 2 Mar 2021 | [Rancher v2.5.6](https://github.com/rancher/rancher/releases/tag/v2.5.6), [Rancher v2.4.14](https://github.com/rancher/rancher/releases/tag/v2.4.14), and [Rancher v2.3.11](https://github.com/rancher/rancher/releases/tag/v2.3.11) |
| [CVE-2019-14435](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14435) | This vulnerability allows authenticated users to potentially extract otherwise private data from IPs reachable by system service containers used by Rancher. This can include, but is not limited to, services such as cloud provider metadata services. Although Rancher allows users to configure whitelisted domains for system service access, this flaw can still be exploited by a carefully crafted HTTP request. The issue was found and reported by Matt Belisle and Alex Stevenson at Workiva. | 5 Aug 2019 | [Rancher v2.2.7](https://github.com/rancher/rancher/releases/tag/v2.2.7) and [Rancher v2.1.12](https://github.com/rancher/rancher/releases/tag/v2.1.12) |
| [CVE-2019-14436](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14436) | The vulnerability allows a project member with access to edit role bindings to assign themselves or others a cluster-level role granting administrator access to that cluster. The issue was found and reported by Michal Lipinski at Nokia. | 5 Aug 2019 | [Rancher v2.2.7](https://github.com/rancher/rancher/releases/tag/v2.2.7) and [Rancher v2.1.12](https://github.com/rancher/rancher/releases/tag/v2.1.12) |
| [CVE-2019-13209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-13209) | The vulnerability is known as a [Cross-Site Websocket Hijacking attack](https://www.christian-schneider.net/CrossSiteWebSocketHijacking.html). This attack allows an exploiter to gain access to clusters managed by Rancher with the roles/permissions of a victim. It requires that a victim be logged into a Rancher server and then access a third-party site hosted by the exploiter. Once that is accomplished, the exploiter is able to execute commands against the Kubernetes API with the permissions and identity of the victim. Reported by Matt Belisle and Alex Stevenson from Workiva. | 15 Jul 2019 | [Rancher v2.2.5](https://github.com/rancher/rancher/releases/tag/v2.2.5), [Rancher v2.1.11](https://github.com/rancher/rancher/releases/tag/v2.1.11) and [Rancher v2.0.16](https://github.com/rancher/rancher/releases/tag/v2.0.16) |
| [CVE-2019-12303](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12303) | Project owners can inject extra fluentd logging configurations that make it possible to read files or execute arbitrary commands inside the fluentd container. Reported by Tyler Welton from Untamed Theory. | 5 Jun 2019 | [Rancher v2.2.4](https://github.com/rancher/rancher/releases/tag/v2.2.4), [Rancher v2.1.10](https://github.com/rancher/rancher/releases/tag/v2.1.10) and [Rancher v2.0.15](https://github.com/rancher/rancher/releases/tag/v2.0.15) |
| [CVE-2019-12274](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12274) | Nodes provisioned with built-in node drivers that use a file path option allow the machine to read arbitrary files, including sensitive ones, from inside the Rancher server container. | 5 Jun 2019 | [Rancher v2.2.4](https://github.com/rancher/rancher/releases/tag/v2.2.4), [Rancher v2.1.10](https://github.com/rancher/rancher/releases/tag/v2.1.10) and [Rancher v2.0.15](https://github.com/rancher/rancher/releases/tag/v2.0.15) |
| [CVE-2019-11202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11202) | The default admin user that ships with Rancher is re-created upon restart of Rancher, despite being explicitly deleted. | 16 Apr 2019 | [Rancher v2.2.2](https://github.com/rancher/rancher/releases/tag/v2.2.2), [Rancher v2.1.9](https://github.com/rancher/rancher/releases/tag/v2.1.9) and [Rancher v2.0.14](https://github.com/rancher/rancher/releases/tag/v2.0.14) |
| [CVE-2019-6287](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6287) | Project members continue to get access to namespaces from projects that they were removed from if they were added to more than one project. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) |
| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute administrative privileged commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or later requires following specific [instructions](../../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rollbacks.md). |
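The PSP deviation described in GHSA-hwm2-4ph6-w6m5 above comes down to a single rule. As a minimal sketch of the relevant fragment (assuming the `policy/v1beta1` PodSecurityPolicy API, which has since been removed from Kubernetes; field values are illustrative, not Rancher's exact manifest):

```yaml
# Fragment of an upstream-style "restricted" PodSecurityPolicy.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot  # upstream: pods running as UID 0 are rejected
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - configMap
    - emptyDir
    - secret
# Rancher's pre-v2.6.4 "restricted" policy instead used the permissive rule:
#   runAsUser:
#     rule: RunAsAny        # any UID allowed, including root
```

Checking this field on a live policy, for example with `kubectl get psp restricted -o jsonpath='{.spec.runAsUser.rule}'`, is one way to confirm which variant a cluster actually enforces.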
| [Using App Catalogs](../pages-for-subheaders/helm-charts-in-rancher.md) | ✓ | ✓ | ✓ | ✓ |
| Configuring Tools ([Alerts, Notifiers, Monitoring](../pages-for-subheaders/monitoring-and-alerting.md), [Logging](../pages-for-subheaders/logging.md), [Istio](../pages-for-subheaders/istio.md)) | ✓ | ✓ | ✓ | ✓ |
| [Running Security Scans](../pages-for-subheaders/cis-scan-guides.md) | ✓ | ✓ | ✓ | ✓ |
| [Use existing configuration to create additional clusters](../how-to-guides/new-user-guides/manage-clusters/clone-cluster-configuration.md)| ✓ | ✓ | ✓ | |
| [Ability to rotate certificates](../how-to-guides/new-user-guides/manage-clusters/rotate-certificates.md) | ✓ | ✓ | | |
| Ability to [back up](../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher-launched-kubernetes-clusters.md) and [restore](../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-rancher-launched-kubernetes-clusters-from-backup.md) Rancher-launched clusters | ✓ | ✓ | | ✓<sup>4</sup> |
| [Cleaning Kubernetes components when clusters are no longer reachable from Rancher](../how-to-guides/new-user-guides/manage-clusters/clean-cluster-nodes.md) | ✓ | | | |