Merge branch 'rancher:main' into windows_clusters

This commit is contained in:
Sunil Singh
2024-10-14 10:30:42 -07:00
committed by GitHub
2113 changed files with 148737 additions and 7242 deletions
@@ -1,9 +0,0 @@
---
title: RKE Cluster Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration"/>
</head>
This page has moved [here.](../../../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md)
@@ -96,7 +96,7 @@ Kubernetes workers should open TCP port `6783` (control port), UDP port `6783` a
For more information, see the following pages:
- [Weave Net Official Site](https://www.weave.works/)
- [Weave Net Official Site](https://github.com/weaveworks/weave/blob/master/site/overview.md)
### RKE2 Kubernetes clusters
@@ -6,30 +6,33 @@ title: Deprecated Features in Rancher
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/deprecated-features"/>
</head>
### What is Rancher's deprecation policy?
## What is Rancher's deprecation policy?
We have published our official deprecation policy in the support [terms of service](https://rancher.com/support-maintenance-terms).
### Where can I find out which features have been deprecated in Rancher?
## Where can I find out which features have been deprecated in Rancher?
Rancher will publish deprecated features as part of the [release notes](https://github.com/rancher/rancher/releases) for Rancher found on GitHub. Please consult the following patch releases for deprecated features:
| Patch Version | Release Date |
|---------------|---------------|
| [2.7.12](https://github.com/rancher/rancher/releases/tag/v2.7.12) | Mar 28, 2024 |
| [2.7.11](https://github.com/rancher/rancher/releases/tag/v2.7.11) | Mar 1, 2024 |
| [2.7.10](https://github.com/rancher/rancher/releases/tag/v2.7.10) | Feb 8, 2024 |
| [2.7.9](https://github.com/rancher/rancher/releases/tag/v2.7.9) | Oct 26, 2023 |
| [2.7.8](https://github.com/rancher/rancher/releases/tag/v2.7.8) | Oct 5, 2023 |
| [2.7.7](https://github.com/rancher/rancher/releases/tag/v2.7.7) | Sep 28, 2023 |
| [2.7.6](https://github.com/rancher/rancher/releases/tag/v2.7.6) | Aug 30, 2023 |
| [2.7.5](https://github.com/rancher/rancher/releases/tag/v2.7.5) | Jun 29, 2023 |
| [2.7.4](https://github.com/rancher/rancher/releases/tag/v2.7.4) | May 31, 2023 |
| [2.7.3](https://github.com/rancher/rancher/releases/tag/v2.7.3) | Apr 24, 2023 |
| [2.7.2](https://github.com/rancher/rancher/releases/tag/v2.7.2) | Apr 11, 2023 |
| [2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1) | Jan 24, 2023 |
| [2.7.0](https://github.com/rancher/rancher/releases/tag/v2.7.0) | Nov 16, 2022 |
| [2.7.15](https://github.com/rancher/rancher/releases/tag/v2.7.15) | Jul 31, 2024 |
| [2.7.14](https://github.com/rancher/rancher/releases/tag/v2.7.14) | Jun 17, 2024 |
| [2.7.13](https://github.com/rancher/rancher/releases/tag/v2.7.13) | May 16, 2024 |
| [2.7.12](https://github.com/rancher/rancher/releases/tag/v2.7.12) | Mar 28, 2024 |
| [2.7.11](https://github.com/rancher/rancher/releases/tag/v2.7.11) | Mar 1, 2024 |
| [2.7.10](https://github.com/rancher/rancher/releases/tag/v2.7.10) | Feb 8, 2024 |
| [2.7.9](https://github.com/rancher/rancher/releases/tag/v2.7.9) | Oct 26, 2023 |
| [2.7.8](https://github.com/rancher/rancher/releases/tag/v2.7.8) | Oct 5, 2023 |
| [2.7.7](https://github.com/rancher/rancher/releases/tag/v2.7.7) | Sep 28, 2023 |
| [2.7.6](https://github.com/rancher/rancher/releases/tag/v2.7.6) | Aug 30, 2023 |
| [2.7.5](https://github.com/rancher/rancher/releases/tag/v2.7.5) | Jun 29, 2023 |
| [2.7.4](https://github.com/rancher/rancher/releases/tag/v2.7.4) | May 31, 2023 |
| [2.7.3](https://github.com/rancher/rancher/releases/tag/v2.7.3) | Apr 24, 2023 |
| [2.7.2](https://github.com/rancher/rancher/releases/tag/v2.7.2) | Apr 11, 2023 |
| [2.7.1](https://github.com/rancher/rancher/releases/tag/v2.7.1) | Jan 24, 2023 |
| [2.7.0](https://github.com/rancher/rancher/releases/tag/v2.7.0) | Nov 16, 2022 |
### What can I expect when a feature is marked for deprecation?
## What can I expect when a feature is marked for deprecation?
In the release where functionality is marked as "Deprecated", it will still be available and supported, allowing upgrades to follow the usual procedure. Once upgraded, users and admins should start planning to move away from the deprecated functionality before upgrading to the release where it is marked as removed. The recommendation for new deployments is not to use the deprecated feature.
@@ -18,19 +18,19 @@ enable_cri_dockerd: true
For users looking to use another container runtime, Rancher has the edge-focused K3s and datacenter-focused RKE2 Kubernetes distributions that use containerd as the default runtime. Imported RKE2 and K3s Kubernetes clusters can then be upgraded and managed through Rancher even after the removal of in-tree Dockershim in Kubernetes 1.24.
### FAQ
## FAQ
<br/>
Q. Do I have to upgrade Rancher to get Rancher's support of the upstream Dockershim?
Q: Do I have to upgrade Rancher to get Rancher's support of the upstream Dockershim?
The upstream support of Dockershim begins for RKE in Kubernetes 1.21. You will need to be on Rancher 2.6 or above to have support for RKE with Kubernetes 1.21. See our [support matrix](https://rancher.com/support-maintenance-terms/all-supported-versions/rancher-v2.6.0/) for details.
<br/>
Q. I am currently on RKE with Kubernetes 1.20. Do I need to upgrade to RKE with Kubernetes 1.21 sooner to avoid being out of support for Dockershim?
Q: I am currently on RKE with Kubernetes 1.20. Do I need to upgrade to RKE with Kubernetes 1.21 sooner to avoid being out of support for Dockershim?
A. The version of Dockershim in RKE with Kubernetes 1.20 will continue to work and is not scheduled for removal upstream until Kubernetes 1.24. It will only emit a warning of its future deprecation, which Rancher has mitigated in RKE with Kubernetes 1.21. You can plan your upgrade to Kubernetes 1.21 as you would normally, but should consider enabling the external Dockershim by Kubernetes 1.22. The external Dockershim will need to be enabled before upgrading to Kubernetes 1.24, at which point the existing implementation will be removed.
A: The version of Dockershim in RKE with Kubernetes 1.20 will continue to work and is not scheduled for removal upstream until Kubernetes 1.24. It will only emit a warning of its future deprecation, which Rancher has mitigated in RKE with Kubernetes 1.21. You can plan your upgrade to Kubernetes 1.21 as you would normally, but should consider enabling the external Dockershim by Kubernetes 1.22. The external Dockershim will need to be enabled before upgrading to Kubernetes 1.24, at which point the existing implementation will be removed.
For more information on the deprecation and its timeline, see the [Kubernetes Dockershim Deprecation FAQ](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed).
@@ -10,10 +10,6 @@ This FAQ is a work in progress designed to answer the questions most frequently
See the [Technical FAQ](technical-items.md) for frequently asked technical questions.
## Does Rancher v2.x support Docker Swarm and Mesos as environment types?
Swarm and Mesos are no longer selectable options when you create a new environment in Rancher v2.x. However, both Swarm and Mesos will continue to be available as Catalog applications you can deploy. It was a tough decision to make, but in the end it came down to adoption. For example, out of more than 15,000 clusters, only about 200 were running Swarm.
## Is it possible to manage Azure Kubernetes Services with Rancher v2.x?
Yes. See our [Cluster Administration](../how-to-guides/new-user-guides/manage-clusters/manage-clusters.md) guide for what Rancher features are available on AKS, as well as our [documentation on AKS](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-aks.md).
@@ -8,11 +8,11 @@ title: Installing and Configuring kubectl
`kubectl` is a CLI utility for running commands against Kubernetes clusters. It's required for many maintenance and administrative tasks in Rancher 2.x.
### Installation
## Installation
See [kubectl Installation](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for installation on your operating system.
### Configuration
## Configuration
When you create a Kubernetes cluster with RKE, RKE creates a `kube_config_cluster.yml` in the local directory that contains credentials to connect to your new cluster with tools like `kubectl` or `helm`.
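As a quick illustration (the file name comes from RKE's default output; the commands below are generic), you can point `kubectl` and `helm` at that file explicitly or export it for the session:

```
# Use the RKE-generated kubeconfig for a single command
kubectl --kubeconfig ./kube_config_cluster.yml get nodes

# Or export it for the rest of the shell session
export KUBECONFIG=$(pwd)/kube_config_cluster.yml
kubectl get pods --all-namespaces
helm list --all-namespaces
```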
@@ -7,15 +7,15 @@ title: Security FAQ
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/security"/>
</head>
### Is there a Hardening Guide?
## Is there a Hardening Guide?
The Hardening Guide is located in the main [Security](../reference-guides/rancher-security/rancher-security.md) section.
### Have hardened Rancher Kubernetes clusters been evaluated by the CIS Kubernetes Benchmark? Where can I find the results?
## Have hardened Rancher Kubernetes clusters been evaluated by the CIS Kubernetes Benchmark? Where can I find the results?
We have run the CIS Kubernetes benchmark against a hardened Rancher Kubernetes cluster. The results of that assessment can be found in the main [Security](../reference-guides/rancher-security/rancher-security.md) section.
### How does Rancher verify communication with downstream clusters, and what are some associated security concerns?
## How does Rancher verify communication with downstream clusters, and what are some associated security concerns?
Communication between the Rancher server and downstream clusters is performed through agents. Rancher uses either a registered certificate authority (CA) bundle or the local trust store to verify communication between Rancher agents and the Rancher server. Using a CA bundle for verification is stricter, as only certificates based on that bundle are trusted. If TLS verification for an explicit CA bundle fails, Rancher may fall back to using the local trust store for verifying future communication. Any CA within the local trust store can then be used to generate a valid certificate.
@@ -6,9 +6,10 @@ title: Technical FAQ
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/technical-items"/>
</head>
### How can I reset the administrator password?
## How can I reset the administrator password?
Docker install:
Docker Install:
```
$ docker exec -ti <container_id> reset-password
New password for default administrator (user-xxxxx):
@@ -16,6 +17,7 @@ New password for default administrator (user-xxxxx):
```
Kubernetes install (Helm):
```
$ KUBECONFIG=./kube_config_cluster.yml
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher --no-headers | head -1 | awk '{ print $1 }') -c rancher -- reset-password
@@ -23,10 +25,10 @@ New password for default administrator (user-xxxxx):
<new_password>
```
## I deleted/deactivated the last admin, how can I fix it?
Docker install:
### I deleted/deactivated the last admin, how can I fix it?
Docker Install:
```
$ docker exec -ti <container_id> ensure-default-admin
New default administrator (user-xxxxx)
@@ -35,38 +37,39 @@ New password for default administrator (user-xxxxx):
```
Kubernetes install (Helm):
```
$ KUBECONFIG=./kube_config_cluster.yml
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- ensure-default-admin
New password for default administrator (user-xxxxx):
<new_password>
```
### How can I enable debug logging?
## How can I enable debug logging?
See [Troubleshooting: Logging](../troubleshooting/other-troubleshooting-tips/logging.md)
### My ClusterIP does not respond to ping
## My ClusterIP does not respond to ping
ClusterIP is a virtual IP, which will not respond to ping. The best way to test whether the ClusterIP is configured correctly is to use `curl` to access the IP and port and see if it responds.
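For example, a quick check might look like this (the service name, namespace, and port are placeholders for your own workload):

```
# Look up the ClusterIP of a hypothetical service
CLUSTER_IP=$(kubectl get service my-service -n default -o jsonpath='{.spec.clusterIP}')

# Any HTTP response (even an error code) shows the ClusterIP and port are reachable
curl -v http://$CLUSTER_IP:80/
```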
### Where can I manage Node Templates?
## Where can I manage Node Templates?
Node Templates can be accessed by opening your account menu (top right) and selecting `Node Templates`.
### Why is my Layer-4 Load Balancer in `Pending` state?
## Why is my Layer-4 Load Balancer in `Pending` state?
The Layer-4 Load Balancer is created as `type: LoadBalancer`. In Kubernetes, this requires a cloud provider or controller that can satisfy these requests; otherwise, the load balancer will remain in the `Pending` state forever. More information can be found in [Cloud Providers](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md) or [Create External Load Balancer](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/).
### Where is the state of Rancher stored?
## Where is the state of Rancher stored?
- Docker Install: in the embedded etcd of the `rancher/rancher` container, located at `/var/lib/rancher`.
- Kubernetes install: in the etcd of the RKE cluster created to run Rancher.
### How are the supported Docker versions determined?
## How are the supported Docker versions determined?
We follow the validated Docker versions for upstream Kubernetes releases. The validated versions can be found under [External Dependencies](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.10.md#external-dependencies) in the Kubernetes release CHANGELOG.md.
### How can I access nodes created by Rancher?
## How can I access nodes created by Rancher?
SSH keys to access the nodes created by Rancher can be downloaded via the **Nodes** view. Choose the node that you want to access, click the vertical ⋮ button at the end of the row, and choose **Download Keys**, as shown in the picture below.
@@ -78,14 +81,14 @@ Unzip the downloaded zip file, and use the file `id_rsa` to connect to your host.
$ ssh -i id_rsa user@ip_of_node
```
### How can I automate task X in Rancher?
## How can I automate task X in Rancher?
The UI consists of static files and works based on responses from the API. That means every action or task that you can execute in the UI can be automated via the API (see the example after this list). There are two ways to do this:
* Visit `https://your_rancher_ip/v3` and browse the API options.
* Capture the API calls made when using the UI. (The most common tool for this is [Chrome Developer Tools](https://developers.google.com/web/tools/chrome-devtools/#network), but you can use any tool you like.)
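For illustration, a captured UI action can then be replayed from a script with `curl` and an API token. The server URL, token, and endpoint below are placeholders, not values from this document:

```
# List clusters through the Rancher v3 API using a (hypothetical) bearer token
curl -sk -H "Authorization: Bearer token-xxxxx:yyyyyyyy" \
  "https://your_rancher_ip/v3/clusters"
```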
### The IP address of a node changed, how can I recover?
## The IP address of a node changed, how can I recover?
A node is required to have a static IP configured (or a reserved IP via DHCP). If the IP of a node has changed, you will have to remove it from the cluster and re-add it. After it is removed, Rancher will update the cluster to the correct state. Once the cluster is no longer in `Provisioning` state, the node has been removed from the cluster.
@@ -93,11 +96,11 @@ When the IP address of the node changed, Rancher lost connection to the node, so
When the node has been removed from the cluster and cleaned, you can re-add it to the cluster.
### How can I add more arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster?
## How can I add more arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster?
You can add more arguments/binds/environment variables via the [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#rke-cluster-config-file-reference) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls](https://rancher.com/docs/rke/latest/en/example-yamls/).
### How do I check if my certificate chain is valid?
## How do I check if my certificate chain is valid?
Use the `openssl verify` command to validate your certificate chain:
@@ -138,7 +141,7 @@ subject= /C=GB/ST=England/O=Alice Ltd/CN=rancher.yourdomain.com
issuer= /C=GB/ST=England/O=Alice Ltd/CN=Alice Intermediate CA
```
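As a minimal sketch of the chain check itself (assuming the server certificate is saved as `cert.pem` and the CA chain as `chain.pem`):

```
# Verify that cert.pem chains up to the CAs in chain.pem
openssl verify -CAfile chain.pem cert.pem
# Prints "cert.pem: OK" when the chain is valid
```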
### How do I check `Common Name` and `Subject Alternative Names` in my server certificate?
## How do I check `Common Name` and `Subject Alternative Names` in my server certificate?
Although technically an entry in `Subject Alternative Names` is required, having the hostname both in `Common Name` and as an entry in `Subject Alternative Names` gives you maximum compatibility with older browsers and applications.
@@ -156,7 +159,7 @@ openssl x509 -noout -in cert.pem -text | grep DNS
DNS:rancher.my.org
```
### Why does it take 5+ minutes for a pod to be rescheduled when a node has failed?
## Why does it take 5+ minutes for a pod to be rescheduled when a node has failed?
This is due to a combination of the following default Kubernetes settings:
@@ -175,6 +178,6 @@ In Kubernetes v1.13, the `TaintBasedEvictions` feature is enabled by default. Se
* `default-not-ready-toleration-seconds`: Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
* `default-unreachable-toleration-seconds`: Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
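If a specific workload needs faster failover, these defaults can be overridden per pod with explicit tolerations. The sketch below is illustrative (the pod name, image, and 30-second values are assumptions, not Rancher defaults):

```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: fast-failover-example
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  # Evict this pod 30 seconds after its node becomes not-ready or unreachable,
  # instead of waiting for the cluster-wide default of 300 seconds.
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 30
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 30
EOF
```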
### Can I use keyboard shortcuts in the UI?
## Can I use keyboard shortcuts in the UI?
Yes, most parts of the UI can be reached using keyboard shortcuts. For an overview of the available shortcuts, press `?` anywhere in the UI.
@@ -6,11 +6,11 @@ title: Telemetry FAQ
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/telemetry"/>
</head>
### What is Telemetry?
## What is Telemetry?
Telemetry collects aggregate information about the size of Rancher installations, versions of components used, and which features are used. This information is used by Rancher Labs to help make the product better and is not shared with third parties.
### What information is collected?
## What information is collected?
No specific identifying information like usernames, passwords, or the names or addresses of user resources will ever be collected.
@@ -24,12 +24,12 @@ The primary things collected include:
- The image name & version of Rancher that is running.
- A unique randomly-generated identifier for this installation.
### Can I see the information that is being sent?
## Can I see the information that is being sent?
If Telemetry is enabled, you can go to `https://<your rancher server>/v1-telemetry` in your installation to see the current data.
If Telemetry is not enabled, the process that collects the data is not running, so there is nothing being collected to look at.
### How do I turn it on or off?
## How do I turn it on or off?
After initial setup, an administrator can go to the `Settings` page in the `Global` section of the UI and click Edit to change the `telemetry-opt` setting to either `in` or `out`.
@@ -12,7 +12,7 @@ These instructions assume you have already followed the instructions for a Kuber
:::
### Rancher Helm Upgrade Options
## Rancher Helm Upgrade Options
To upgrade with Helm, apply the same options that you used when installing Rancher. Refer to the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.
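As a rough sketch of what such an upgrade command can look like (the chart repository name, hostname, and registry below are placeholders; the exact flags should mirror whatever you passed at install time):

```
helm upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.yourdomain.com \
  --set rancherImage=registry.yourdomain.com:5000/rancher/rancher \
  --set systemDefaultRegistry=registry.yourdomain.com:5000 \
  --set useBundledSystemChart=true
```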
@@ -148,7 +148,7 @@ To see options on how to customize the cert-manager install (including for cases
:::
```
# If you have installed the CRDs manually instead of with the `--set installCRDs=true` option added to your Helm install command, you should upgrade your CRD resources before upgrading the Helm chart:
# If you have installed the CRDs manually, instead of setting `installCRDs` or `crds.enabled` to `true` in your Helm install command, you should upgrade your CRD resources before upgrading the Helm chart:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/<VERSION>/cert-manager.crds.yaml
# Add the Jetstack Helm repository
@@ -161,7 +161,7 @@ helm repo update
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set installCRDs=true
--set crds.enabled=true
```
Once you've installed cert-manager, you can verify it is deployed correctly by checking the cert-manager namespace for running pods:
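For instance, with standard `kubectl`:

```
kubectl get pods --namespace cert-manager
# The cert-manager, cainjector, and webhook pods should all be in the Running state.
```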
@@ -12,7 +12,6 @@ For the instructions to upgrade Rancher installed with Docker, refer to [this pa
To upgrade the components in your Kubernetes cluster, or the definition of the [Kubernetes services](https://rancher.com/docs/rke/latest/en/config-options/services/) or [add-ons](https://rancher.com/docs/rke/latest/en/config-options/add-ons/), refer to the [upgrade documentation for RKE](https://rancher.com/docs/rke/latest/en/upgrades/), the Rancher Kubernetes Engine.
## Prerequisites
### Access to kubeconfig
@@ -49,7 +48,6 @@ For [air-gapped installs only,](../other-installation-methods/air-gapped-helm-cl
Follow the steps to upgrade Rancher server:
### 1. Back up Your Kubernetes Cluster that is Running Rancher Server
Use the [backup application](../../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher.md) to back up Rancher.
@@ -119,7 +117,6 @@ If you are installing Rancher in an air-gapped environment, skip the rest of thi
:::
Get the values that were passed with `--set` from the currently installed Rancher Helm chart.
```
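# A sketch of the command in question, assuming the Rancher release is named
# "rancher" and installed in the cattle-system namespace; it prints the values
# that were previously passed with --set:
helm get values rancher -n cattle-system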
@@ -19,6 +19,7 @@ Some feature flags require a restart of the Rancher container. Features that req
The following is a list of feature flags available in Rancher. If you've upgraded from a previous Rancher version, you may see additional flags in the Rancher UI, such as `proxy` or `dashboard` (both [discontinued](/versioned_docs/version-2.5/reference-guides/installation-references/feature-flags.md)):
- `continuous-delivery`: Allows Fleet GitOps to be disabled separately from Fleet. See [Continuous Delivery](../../../how-to-guides/advanced-user-guides/enable-experimental-features/continuous-delivery.md) for more information.
- `external-rules`: This flag is disabled by default. Only admin users can enable or disable the flag, and note that `escalate` permissions on `RoleTemplates` are required to create external `RoleTemplates` with `ExternalRules`. Restricted admin users can only enable the flag. If enabled, external `RoleTemplates` can be created only if the backing `ClusterRole` exists in the local cluster or the `ExternalRules` field is set. For context, the backing `ClusterRole` holds cluster rules and privileges, and shares the same `metadata.name` used in the `RoleTemplate` in your respective cluster referenced by the `ClusterRoleTemplateBinding/ProjectRoleTemplateBinding`. Previous external `RoleTemplates` that don't have a backing `ClusterRole` won't be granted or modifiable unless a backing `ClusterRole` is created or the `ExternalRules` field is set. If disabled, external `RoleTemplates` with `.context=project` or `.context=""` can be created even if the backing `ClusterRole` does not exist.
- `fleet`: The Rancher provisioning framework in v2.6 and later requires Fleet. The flag will be automatically enabled when you upgrade, even if you disabled this flag in an earlier version of Rancher. See [Continuous Delivery with Fleet](../../../integrations-in-rancher/fleet-gitops-at-scale/fleet-gitops-at-scale.md) for more information.
- `harvester`: Manages access to the Virtualization Management page, where users can navigate directly to Harvester clusters and access the Harvester UI. See [Harvester Integration](../../../integrations-in-rancher/harvester.md) for more information.
- `istio-virtual-service-ui`: Enables a [visual interface](../../../how-to-guides/advanced-user-guides/enable-experimental-features/istio-traffic-management-features.md) to create, read, update, and delete Istio virtual services and destination rules, which are Istio traffic management features.
@@ -22,7 +22,7 @@ Starting with version 1.24, the above defaults to true.
For users looking to use another container runtime, Rancher has the edge-focused K3s and datacenter-focused RKE2 Kubernetes distributions that use containerd as the default runtime. Imported RKE2 and K3s Kubernetes clusters can then be upgraded and managed through Rancher going forward.
### FAQ
## FAQ
<br/>
@@ -46,6 +46,6 @@ A: You can use a runtime like containerd with Kubernetes that does not require D
Q: If I am already using RKE1 and want to switch to RKE2, what are my migration options?
A: Today, you can stand up a new cluster and migrate workloads to a new RKE2 cluster that uses containerd. Rancher is exploring the possibility of an in-place upgrade path.
A: Today, you can stand up a new cluster and migrate workloads to a new RKE2 cluster that uses containerd. For details, see the [RKE to RKE2 Replatforming Guide](https://links.imagerelay.com/cdn/3404/ql/5606a3da2365422ab2250d348aa07112/rke_to_rke2_replatforming_guide.pdf).
<br/>
@@ -143,8 +143,6 @@ docker run -d --restart=unless-stopped \
</details>
:::note
If you don't intend to send telemetry data, opt out of [telemetry](../../../../faq/telemetry.md) during the initial login.
@@ -25,7 +25,7 @@ We recommend setting up the following infrastructure for a high-availability ins
- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
- **A private image registry** to distribute container images to your machines.
### 1. Set up Linux Nodes
## 1. Set up Linux Nodes
These hosts will be disconnected from the internet, but must be able to connect to your private registry.
@@ -33,7 +33,7 @@ Make sure that your nodes fulfill the general installation requirements for [OS,
For an example of one way to set up Linux nodes, refer to this [tutorial](../../../../how-to-guides/new-user-guides/infrastructure-setup/nodes-in-amazon-ec2.md) for setting up nodes as instances in Amazon EC2.
### 2. Set up External Datastore
## 2. Set up External Datastore
The ability to run Kubernetes using a datastore other than etcd sets K3s apart from other Kubernetes distributions. This feature provides flexibility to Kubernetes operators. The available options allow you to select a datastore that best fits your use case.
@@ -49,7 +49,7 @@ For an example of one way to set up the database, refer to this [tutorial](../..
For the complete list of options that are available for configuring a K3s cluster datastore, refer to the [K3s documentation.](https://rancher.com/docs/k3s/latest/en/installation/datastore/)
### 3. Set up the Load Balancer
## 3. Set up the Load Balancer
You will also need to set up a load balancer to direct traffic to the Rancher replica on both nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.
@@ -72,7 +72,7 @@ Do not use this load balancer (i.e, the `local` cluster Ingress) to load balance
:::
### 4. Set up the DNS Record
## 4. Set up the DNS Record
Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer.
@@ -82,7 +82,7 @@ You will need to specify this hostname in a later step when you install Rancher,
For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)
### 5. Set up a Private Image Registry
## 5. Set up a Private Image Registry
Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing container images to your machines.
@@ -106,13 +106,13 @@ To install the Rancher management server on a high-availability RKE cluster, we
These nodes must be in the same region/data center. You may place these servers in separate availability zones.
### Why three nodes?
## Why Three Nodes?
In an RKE cluster, Rancher server data is stored on etcd. This etcd database runs on all three nodes.
The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader because they have the majority of the total number of etcd nodes.
### 1. Set up Linux Nodes
## 1. Set up Linux Nodes
These hosts will be disconnected from the internet, but must be able to connect to your private registry.
@@ -120,7 +120,7 @@ Make sure that your nodes fulfill the general installation requirements for [OS,
For an example of one way to set up Linux nodes, refer to this [tutorial](../../../../how-to-guides/new-user-guides/infrastructure-setup/nodes-in-amazon-ec2.md) for setting up nodes as instances in Amazon EC2.
### 2. Set up the Load Balancer
## 2. Set up the Load Balancer
You will also need to set up a load balancer to direct traffic to the Rancher replica on both nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.
@@ -143,7 +143,7 @@ Do not use this load balancer (i.e, the `local` cluster Ingress) to load balance
:::
### 3. Set up the DNS Record
## 3. Set up the DNS Record
Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer.
@@ -153,7 +153,7 @@ You will need to specify this hostname in a later step when you install Rancher,
For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)
### 4. Set up a Private Image Registry
## 4. Set up a Private Image Registry
Rancher supports air gap installs using a secure private registry. You must have your own private registry or other means of distributing container images to your machines.
@@ -176,7 +176,7 @@ If you need to create a private registry, refer to the documentation pages for y
:::
### 1. Set up a Linux Node
## 1. Set up a Linux Node
This host will be disconnected from the Internet, but needs to be able to connect to your private registry.
@@ -184,7 +184,7 @@ Make sure that your node fulfills the general installation requirements for [OS,
For an example of one way to set up Linux nodes, refer to this [tutorial](../../../../how-to-guides/new-user-guides/infrastructure-setup/nodes-in-amazon-ec2.md) for setting up nodes as instances in Amazon EC2.
### 2. Set up a Private Docker Registry
## 2. Set up a Private Docker Registry
Rancher supports air gap installs using a private registry on your bastion server. You must have your own private registry or other means of distributing container images to your machines.
@@ -193,4 +193,4 @@ If you need help with creating a private registry, please refer to the [official
</TabItem>
</Tabs>
### [Next: Collect and Publish Images to your Private Registry](publish-images.md)
## [Next: Collect and Publish Images to your Private Registry](publish-images.md)
@@ -30,7 +30,8 @@ In this guide, we are assuming you have created your nodes in your air gapped en
3. [Install K3s](#3-install-k3s)
4. [Save and Start Using the kubeconfig File](#4-save-and-start-using-the-kubeconfig-file)
### 1. Prepare Images Directory
## 1. Prepare Images Directory
Obtain the images tar file for your architecture from the [releases](https://github.com/k3s-io/k3s/releases) page for the version of K3s you will be running.
Place the tar file in the `images` directory before starting K3s on each node, for example:
@@ -40,7 +41,8 @@ sudo mkdir -p /var/lib/rancher/k3s/agent/images/
sudo cp ./k3s-airgap-images-$ARCH.tar /var/lib/rancher/k3s/agent/images/
```
### 2. Create Registry YAML
## 2. Create Registry YAML
Create the `registries.yaml` file at `/etc/rancher/k3s/registries.yaml`. This will tell K3s the necessary details to connect to your private registry.
The `registries.yaml` file should look like this before plugging in the necessary information:
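For illustration, a minimal sketch of such a file follows. The registry hostname, port, credentials, and CA path are placeholders you must replace, and only the mirrors and configs you actually need should be kept:

```
cat <<'EOF' | sudo tee /etc/rancher/k3s/registries.yaml > /dev/null
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
configs:
  "registry.example.com:5000":
    auth:
      username: yourusername
      password: yourpassword
    tls:
      ca_file: /etc/ssl/certs/registry-ca.pem
EOF
```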
@@ -66,7 +68,7 @@ Note, at this time only secure registries are supported with K3s (SSL with custo
For more information on private registries configuration file for K3s, refer to the [K3s documentation.](https://rancher.com/docs/k3s/latest/en/installation/private-registry/)
### 3. Install K3s
## 3. Install K3s
Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [Rancher Support Matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/).
@@ -98,7 +100,7 @@ K3s additionally provides a `--resolv-conf` flag for kubelets, which may help wi
:::
### 4. Save and Start Using the kubeconfig File
## 4. Save and Start Using the kubeconfig File
When you installed K3s on each Rancher server node, a `kubeconfig` file was created on the node at `/etc/rancher/k3s/k3s.yaml`. This file contains credentials for full access to the cluster, and you should save this file in a secure location.
@@ -138,7 +140,7 @@ kubectl --kubeconfig ~/.kube/config/k3s.yaml get pods --all-namespaces
For more information about the `kubeconfig` file, refer to the [K3s documentation](https://rancher.com/docs/k3s/latest/en/cluster-access/) or the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) about organizing cluster access using `kubeconfig` files.
### Note on Upgrading
## Note on Upgrading
Upgrading an air-gap environment can be accomplished in the following manner:
@@ -151,14 +153,15 @@ Upgrading an air-gap environment can be accomplished in the following manner:
In this guide, we are assuming you have created your nodes in your air-gapped environment and have a secure Docker private registry on your bastion server.
### Installation Outline
## Installation Outline
1. [Create RKE2 configuration](#1-create-rke2-configuration)
2. [Create Registry YAML](#2-create-registry-yaml)
3. [Install RKE2](#3-install-rke2)
4. [Save and Start Using the kubeconfig File](#4-save-and-start-using-the-kubeconfig-file)
### 1. Create RKE2 configuration
## 1. Create RKE2 configuration
Create the `config.yaml` file at `/etc/rancher/rke2/config.yaml`. This will contain all the configuration options necessary to create a highly available RKE2 cluster.
On the first server, the minimum config is:
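As a sketch of what that minimum configuration might contain (the shared token and TLS SAN are placeholders):

```
cat <<'EOF' | sudo tee /etc/rancher/rke2/config.yaml > /dev/null
token: my-shared-secret
tls-san:
  - rancher.yourdomain.com
EOF
```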
@@ -186,7 +189,8 @@ RKE2 additionally provides a `resolv-conf` option for kubelets, which may help w
:::
### 2. Create Registry YAML
## 2. Create Registry YAML
Create the `registries.yaml` file at `/etc/rancher/rke2/registries.yaml`. This will tell RKE2 the necessary details to connect to your private registry.
The `registries.yaml` file should look like this before plugging in the necessary information:
@@ -210,7 +214,7 @@ configs:
For more information on private registries configuration file for RKE2, refer to the [RKE2 documentation.](https://docs.rke2.io/install/containerd_registry_configuration)
### 3. Install RKE2
## 3. Install RKE2
Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)
@@ -239,7 +243,7 @@ systemctl start rke2-server.service
For more information, refer to the [RKE2 documentation](https://docs.rke2.io/install/airgap).
### 4. Save and Start Using the kubeconfig File
## 4. Save and Start Using the kubeconfig File
When you installed RKE2 on each Rancher server node, a `kubeconfig` file was created on the node at `/etc/rancher/rke2/rke2.yaml`. This file contains credentials for full access to the cluster, and you should save this file in a secure location.
@@ -279,7 +283,7 @@ kubectl --kubeconfig ~/.kube/config/rke2.yaml get pods --all-namespaces
For more information about the `kubeconfig` file, refer to the [RKE2 documentation](https://docs.rke2.io/cluster_access) or the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) about organizing cluster access using `kubeconfig` files.
### Note on Upgrading
## Note on Upgrading
Upgrading an air-gap environment can be accomplished in the following manner:
@@ -301,7 +305,7 @@ Certified version(s) of RKE based on the Rancher version can be found in the [Ra
:::
### 2. Create an RKE Config File
## 2. Create an RKE Config File
From a system that can access ports 22/TCP and 6443/TCP on the Linux host node(s) that you set up in a previous step, use the sample below to create a new file named `rancher-cluster.yml`.
@@ -352,7 +356,7 @@ private_registries:
is_default: true
```
### 3. Run RKE
## 3. Run RKE
After configuring `rancher-cluster.yml`, bring up your Kubernetes cluster:
@@ -360,7 +364,7 @@ After configuring `rancher-cluster.yml`, bring up your Kubernetes cluster:
rke up --config ./rancher-cluster.yml
```
### 4. Save Your Files
## 4. Save Your Files
:::note Important:
@@ -383,8 +387,8 @@ The "rancher-cluster" parts of the two latter file names are dependent on how yo
:::
### Issues or errors?
## Issues or Errors?
See the [Troubleshooting](../../install-upgrade-on-a-kubernetes-cluster/troubleshooting.md) page.
### [Next: Install Rancher](install-rancher-ha.md)
## [Next: Install Rancher](install-rancher-ha.md)
@@ -192,7 +192,7 @@ Placeholder | Description
**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, for example: `--set rancherImageTag=v2.5.8`
#### Option B: Certificates From Files using Kubernetes Secrets
#### Option B: Certificates From Files Using Kubernetes Secrets
##### 1. Create secrets
@@ -27,7 +27,7 @@ First configure the HTTP proxy settings on the K3s systemd service, so that K3s'
```
cat <<'EOF' | sudo tee /etc/default/k3s > /dev/null
HTTP_PROXY=http://${proxy_host}
HTTPS_PROXY=http://${proxy_host}"
HTTPS_PROXY=http://${proxy_host}
NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
EOF
```
@@ -71,7 +71,7 @@ Then you have to configure the HTTP proxy settings on the RKE2 systemd service,
```
cat <<'EOF' | sudo tee /etc/default/rke2-server > /dev/null
HTTP_PROXY=http://${proxy_host}
HTTPS_PROXY=http://${proxy_host}"
HTTPS_PROXY=http://${proxy_host}
NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
EOF
```
@@ -109,7 +109,7 @@ Rancher Server is distributed as a Docker image, which have tags attached to the
| -------------------------- | ------ |
| `rancher/rancher:latest` | Our latest development release. These builds are validated through our CI automation framework. These releases are not recommended for production environments. |
| `rancher/rancher:stable` | Our newest stable release. This tag is recommended for production. |
| `rancher/rancher:<v2.X.X>` | You can install specific versions of Rancher by using the tag from a previous release. See what's available at DockerHub. |
| `rancher/rancher:<v2.X.X>` | You can install specific versions of Rancher by using the tag from a previous release. See what's available at Docker Hub. |
:::note
@@ -180,7 +180,7 @@ Repeat the below steps for each downstream cluster:
### 5. Force Update Fleet clusters to reconnect the fleet-agent to Rancher
Select 'Force Update' for the clusters within the [Continuous Delivery](../../../integrations-in-rancher/fleet/overview.md#accessing-fleet-in-the-rancher-ui) view of the Rancher UI to allow the fleet-agent in downstream clusters to successfully connect to Rancher.
Select 'Force Update' for the clusters within the [Continuous Delivery](../../../integrations-in-rancher/fleet-gitops-at-scale/fleet-gitops-at-scale.md#accessing-fleet-in-the-rancher-ui) view of the Rancher UI to allow the fleet-agent in downstream clusters to successfully connect to Rancher.
#### Why is this step required?
@@ -260,7 +260,7 @@ As a private CA is no longer being used, the `CATTLE_CA_CHECKSUM` environment va
### 5. Force Update Fleet clusters to reconnect the fleet-agent to Rancher
Select 'Force Update' for the clusters within the [Continuous Delivery](../../../integrations-in-rancher/fleet/overview.md#accessing-fleet-in-the-rancher-ui) view of the Rancher UI to allow the fleet-agent in downstream clusters to successfully connect to Rancher.
Select 'Force Update' for the clusters within the [Continuous Delivery](../../../integrations-in-rancher/fleet-gitops-at-scale/fleet-gitops-at-scale.md#accessing-fleet-in-the-rancher-ui) view of the Rancher UI to allow the fleet-agent in downstream clusters to successfully connect to Rancher.
#### Why is this step required?
@@ -102,8 +102,6 @@ There is a [known issue](https://github.com/rancher/rancher/issues/25478) in whi
### Maintaining Availability for Applications During Upgrades
_Available as of RKE v1.1.0_
In [this section of the RKE documentation,](https://rancher.com/docs/rke/latest/en/upgrades/maintaining-availability/) you'll learn the requirements to prevent downtime for your applications when upgrading the cluster.
### Configuring the Upgrade Strategy in the cluster.yml
@@ -36,7 +36,7 @@ Administrators might configure the RKE metadata settings to do the following:
- Change the metadata URL that Rancher uses to sync the metadata, which is useful for air gap setups if you need to sync Rancher locally instead of with GitHub
- Prevent Rancher from auto-syncing the metadata, which is one way to prevent new and unsupported Kubernetes versions from being available in Rancher
### Refresh Kubernetes Metadata
## Refresh Kubernetes Metadata
The option to refresh the Kubernetes metadata is available for administrators by default, or for any user who has the **Manage Cluster Drivers** [global role.](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md)
@@ -74,7 +74,7 @@ If you don't have an air gap setup, you don't need to specify the URL where Ranc
However, if you have an [air gap setup,](#air-gap-setups) you will need to mirror the Kubernetes metadata repository in a location available to Rancher. Then you need to change the URL to point to the new location of the JSON file.
### Air Gap Setups
## Air Gap Setups
Rancher relies on a periodic refresh of the `rke-metadata-config` to download new Kubernetes version metadata if it is supported with the current version of the Rancher server. For a table of compatible Kubernetes and Rancher versions, refer to the [service terms section.](https://rancher.com/support-maintenance-terms/all-supported-versions/rancher-v2.2.8/)
@@ -0,0 +1,17 @@
---
title: Glossary
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/glossary"/>
</head>
This page covers Rancher-specific terminology and symbols which might be unfamiliar, or which differ between Rancher versions.
```mdx-code-block
import Glossary, {toc as GlossaryTOC} from "/shared-files/_glossary.md"
<Glossary />
export const toc = GlossaryTOC;
```
@@ -80,11 +80,11 @@ If you use a certificate signed by a recognized CA, installing your certificate
1. Enter the following command.
```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
rancher/rancher:latest --no-cacerts
```
```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
rancher/rancher:latest --no-cacerts
```
</details>
@@ -36,12 +36,12 @@ The usage below defines rules about what the audit log should record and what da
The following table displays what parts of API transactions are logged for each [`AUDIT_LEVEL`](#api-audit-log-options) setting.
| `AUDIT_LEVEL` Setting | Request Metadata | Request Body | Response Metadata | Response Body |
| --------------------- | ---------------- | ------------ | ----------------- | ------------- |
| `0` | | | | |
| `1` | ✓ | | | |
| `2` | ✓ | ✓ | | |
| `3` | ✓ | ✓ | ✓ | ✓ |
| `AUDIT_LEVEL` Setting | Metadata | Request Body | Response Body |
| --------------------- | -------- | ------------ | ------------- |
| `0` | | | |
| `1` | ✓ | | |
| `2` | ✓ | ✓ | |
| `3` | ✓ | ✓ | ✓ |
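For context, a sketch of how the audit level is typically set on a Docker-based install (the level, log path, and volume mount below are illustrative):

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e AUDIT_LEVEL=2 \
  -e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
  -v /var/log/rancher/auditlog:/var/log/auditlog \
  --privileged \
  rancher/rancher:latest
```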
## Viewing API Audit Logs
@@ -6,7 +6,7 @@ title: Continuous Delivery
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features/continuous-delivery"/>
</head>
[Continuous Delivery with Fleet](../../../integrations-in-rancher/fleet-gitops-at-scale/fleet-gitops-at-scale.md) comes preinstalled in Rancher and can't be fully disabled. However, the Fleet feature for GitOps continuous delivery may be disabled using the `continuous-delivery` feature flag.
[Continuous Delivery with Fleet](../../../integrations-in-rancher/fleet.md) comes preinstalled in Rancher and can't be fully disabled. However, the Fleet feature for GitOps continuous delivery may be disabled using the `continuous-delivery` feature flag.
To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.](enable-experimental-features.md)
@@ -0,0 +1,62 @@
---
title: Enabling User Retention
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-user-retention"/>
</head>
In Rancher v2.7.14 and later, you can enable user retention to automatically disable or delete inactive user accounts after a configurable time period.
The user retention feature is off by default. It is considered experimental at this time.
## Enabling User Retention with kubectl
To enable user retention, you must set `user-retention-cron`. You must also set at least one of `disable-inactive-user-after` or `delete-inactive-user-after`. You can use `kubectl edit setting <name-of-setting>` to open your editor of choice and set these values.
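For example, a hypothetical configuration that runs the retention process every hour and disables accounts after 30 days of inactivity would set the following values:

```
# Run the retention process every hour
kubectl edit setting user-retention-cron           # set value: 0 * * * *

# Disable accounts that have been inactive for 30 days (720h)
kubectl edit setting disable-inactive-user-after   # set value: 720h
```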
## Configuring Rancher to Delete Users, Disable Users, or Combine Operations
Rancher uses two global user retention settings to determine if and when users are disabled or deleted after a certain period of inactivity. Disabled accounts must be re-enabled before users can log in again. If an account is deleted without being disabled, users may be able to log in through external authentication and the deleted account will be recreated.
The global settings, `disable-inactive-user-after` and `delete-inactive-user-after`, do not block one another from running.
For example, you can set both operations to run. If you give `disable-inactive-user-after` a shorter duration than `delete-inactive-user-after`, the user retention process disables inactive accounts before deleting them.
You can also edit some user retention settings on a specific user's `UserAttribute`. Setting these values overrides the global settings. See [User-specific User Retention Overrides](#user-specific-user-retention-overrides) for more details.
### Required User Retention Settings
The following are global settings:
- `user-retention-cron`: Describes how often the user retention process runs. The value is a cron expression (for example, `0 * * * *` for every hour).
- `disable-inactive-user-after`: The amount of time that a user account can be inactive before the process disables an account. Disabling an account forces the user to request that an administrator re-enable the account before they can log in to use it. Values are expressed in [time.Duration units](https://pkg.go.dev/time#ParseDuration) (for example, `720h` for 720 hours or 30 days). The value must be greater than `auth-user-session-ttl-minutes`, which is `16h` by default. If the value is not set, set to the empty string, or is equal to 0, the process does not disable any inactive accounts.
- `delete-inactive-user-after`: The amount of time that a user account can be inactive before the process deletes the account. Values are expressed in time.Duration units (for example, `720h` for 720 hours or 30 days). The value must be greater than `auth-user-session-ttl-minutes`, which is `16h` by default. The value should be greater than `336h` (14 days), otherwise it is rejected by the Rancher webhook. If you need the value to be lower than 14 days, you can [bypass the webhook](../../reference-guides/rancher-webhook.md#bypassing-the-webhook). If the value is not set, set to the empty string, or is equal to 0, the process does not delete any inactive accounts.
### Optional User Retention Settings
The following are global settings:
- `user-retention-dry-run`: If set to `true`, the user retention process runs without actually deleting or disabling any user accounts. This can help test user retention behavior before allowing the process to disable or delete user accounts in a production environment.
- `user-last-login-default`: If a user does not have `UserAttribute.LastLogin` set on their account, this setting is used instead. It provides a predetermined last login time for the account. The value is expressed as an [RFC 3339 date-time](https://datatracker.ietf.org/doc/html/rfc3339#section-5.6) truncated to the last second; for example, `2023-03-01T00:00:00Z`. If the value is set to the empty string or is equal to 0, this setting is not used.
#### User-specific User Retention Overrides
The following are user-specific overrides to the global settings for special cases. These settings are applied by editing the `UserAttribute` associated with a given account:
```
kubectl edit userattribute <user-name>
```
- `disableAfter`: The user-specific override for `disable-inactive-user-after`. The value is expressed in [time.Duration units](https://pkg.go.dev/time#ParseDuration) and truncated to the second. If the value is set to `0s` then the account won't be subject to disabling.
- `deleteAfter`: The user-specific override for `delete-inactive-user-after`. The value is expressed in [time.Duration units](https://pkg.go.dev/time#ParseDuration) and truncated to the second. If the value is set to `0s` then the account won't be subject to deletion.
## Viewing User Retention Settings in the Rancher UI
You can see which user retention settings are applied to which users.
1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, select **Users**.
The **Disable After** and **Delete After** columns for each user account indicate how long the account can be inactive before it is disabled or deleted from Rancher. There is also a **Last Login** column roughly indicating when the account was last active.
The same information is available if you click a user's name in the **Users** table and select the **Detail** tab.
@@ -6,19 +6,21 @@ title: Generate and View Traffic from Istio
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/istio-setup-guide/generate-and-view-traffic"/>
</head>
This section describes how to view the traffic that is being managed by Istio.
## The Kiali Traffic Graph
The Istio overview page provides a link to the Kiali dashboard. From the Kiali dashboard, you are able to view graphs for each namespace. The Kiali graph provides a powerful way to visualize the topology of your Istio service mesh. It shows you which services communicate with each other.
The Istio overview page provides a link to the Kiali dashboard. From the Kiali dashboard, you can view graphs for each namespace. The Kiali graph provides a powerful way to visualize the topology of your Istio service mesh. It shows you which services communicate with each other.
:::note Prerequisites:
## Prerequisites
To enable traffic to show up in the graph, ensure you have prometheus installed in the cluster. Rancher-istio installs Kiali configured by default to work with the rancher-monitoring chart. You can use rancher-monitoring or install your own monitoring solution. Optional: you can change configuration on how data scraping occurs by setting the [Selectors & Scrape Configs](../../../integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md) options.
To enable traffic to show up in the graph, ensure that you have Prometheus installed in the cluster. `Rancher-istio` installs Kiali, and configures it by default to work with the `rancher-monitoring` chart. You can use `rancher-monitoring` or install your own monitoring solution.
:::
Additionally, for Istio installations version `103.1.0+up1.19.6` and later, Kiali uses a token value for its authentication strategy. If you are trying to generate or retrieve the token (e.g. for login), note that the name of the Kiali service account in Rancher is `kiali`. For more information, refer to the [Kiali token authentication FAQ](https://kiali.io/docs/faq/authentication/).
To see the traffic graph,
Optional: You can configure which namespaces data scraping occurs in by setting the Helm chart options described in [Selectors & Scrape Configs](../../../integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md).
## Traffic Visualization
To see the traffic graph, follow the steps below:
1. In the cluster where Istio is installed, click **Istio** in the left navigation bar.
1. Click the **Kiali** link.
@@ -77,3 +77,80 @@ key.pfx=`base64-content`
```
Then **Cert File Path** would be set to `/etc/alertmanager/secrets/cert.pem`.
## Rancher Performance Dashboard
When monitoring is installed on the upstream (local) cluster, you are given basic health metrics about the Rancher pods, such as CPU and memory data. To get advanced metrics for your local Rancher server, you must additionally enable the Rancher Performance Dashboard for Grafana.
This dashboard provides access to the following advanced metrics:
- Handler Average Execution Times Over Last 5 Minutes
- Rancher API Average Request Times Over Last 5 Minutes
- Subscribe Average Request Times Over Last 5 Minutes
- Lasso Controller Work Queue Depth (Top 20)
- Number of Rancher Requests (Top 20)
- Number of Failed Rancher API Requests (Top 20)
- K8s Proxy Store Average Request Times Over Last 5 Minutes (Top 20)
- K8s Proxy Client Average Request Times Over Last 5 Minutes (Top 20)
- Cached Objects by GroupVersionKind (Top 20)
- Lasso Handler Executions (Top 20)
- Handler Executions Over Last 2 Minutes (Top 20)
- Total Handler Executions with Error (Top 20)
- Data Transmitted by Remote Dialer Sessions (Top 20)
- Errors for Remote Dialer Sessions (Top 20)
- Remote Dialer Connections Removed (Top 20)
- Remote Dialer Connections Added by Client (Top 20)
:::note
Profiling data (such as advanced memory or CPU analysis) is not included, as profiling is a context-dependent technique intended for debugging rather than routine observation.
:::
### Enabling the Rancher Performance Dashboard
To enable the Rancher Performance Dashboard:
<Tabs groupId="UIorCLI">
<TabItem value="Helm">
Use the following options with the Helm CLI:
```bash
--set extraEnv\[0\].name="CATTLE_PROMETHEUS_METRICS" --set-string extraEnv\[0\].value=true
```
You can also include the following snippet in your Rancher Helm chart's values.yaml file:
```yaml
extraEnv:
- name: "CATTLE_PROMETHEUS_METRICS"
value: "true"
```
</TabItem>
<TabItem value="UI">
1. Click **☰ > Cluster Management**.
1. Go to the row of the `local` cluster and click **Explore**.
1. Click **Workloads > Deployments**.
1. Use the dropdown menu at the top to filter for **All Namespaces**.
1. Under the `cattle-system` namespace, go to the `rancher` row and click **⋮ > Edit Config**.
1. Under **Environment Variables**, click **Add Variable**.
1. For **Type**, select `Key/Value Pair`.
1. For **Variable Name**, enter `CATTLE_PROMETHEUS_METRICS`.
1. For **Value**, enter `true`.
1. Click **Save** to apply the change.
</TabItem>
</Tabs>
### Accessing the Rancher Performance Dashboard
1. Click **☰ > Cluster Management**.
1. Go to the row of the `local` cluster and click **Explore**.
1. Click **Monitoring**.
1. Select the **Grafana** dashboard.
1. From the sidebar, click **Search dashboards**.
1. Enter `Rancher Performance Debugging` and select it.
@@ -6,7 +6,17 @@ title: Opening Ports with firewalld
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/open-ports-with-firewalld"/>
</head>
> We recommend disabling firewalld. For Kubernetes 1.19.x and higher, firewalld must be turned off.
:::danger
Enabling firewalld can cause serious network communication problems.
For proper network function, firewalld must be disabled on systems running RKE2. [Firewalld conflicts with Canal](https://docs.rke2.io/known_issues#firewalld-conflicts-with-default-networking), RKE2's default networking stack.
Firewalld must also be disabled on systems running Kubernetes 1.19 and later.
If you enable firewalld on systems running Kubernetes 1.18 or earlier, understand that this may cause networking issues. CNIs in Kubernetes dynamically update iptables and networking rules independently of any external firewalls, such as firewalld. This can cause unexpected behavior when the CNI and the external firewall conflict.
:::
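If you need to turn firewalld off, a minimal sketch for a systemd-based host looks like the following; confirm the service name and your distribution's recommended procedure before applying it:

```bash
# Stop firewalld immediately and prevent it from starting on boot.
sudo systemctl stop firewalld
sudo systemctl disable firewalld
```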
Some distributions of Linux [derived from RHEL,](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Rebuilds) including Oracle Linux, may have default firewall rules that block communication with Helm.
@@ -8,9 +8,9 @@ title: Tuning etcd for Large Installations
When Rancher is used to manage [a large infrastructure](../../getting-started/installation-and-upgrade/installation-requirements/installation-requirements.md), it is recommended to increase the etcd keyspace from the default 2 GB. The maximum setting is 8 GB, and the host should have enough RAM to keep the entire dataset in memory. When increasing this value, you should also increase the size of the host. The keyspace size can also be adjusted in smaller installations if you anticipate a high rate of change of pods during the garbage collection interval.
The etcd data set is automatically cleaned up on a five minute interval by Kubernetes. There are situations, e.g. deployment thrashing, where enough events could be written to etcd and deleted before garbage collection occurs and cleans things up causing the keyspace to fill up. If you see `mvcc: database space exceeded` errors, in the etcd logs or Kubernetes API server logs, you should consider increasing the keyspace size. This can be accomplished by setting the [quota-backend-bytes](https://etcd.io/docs/v3.4.0/op-guide/maintenance/#space-quota) setting on the etcd servers.
The etcd data set is automatically cleaned up on a five-minute interval by Kubernetes. There are situations, e.g. deployment thrashing, where enough events are written to etcd and deleted before garbage collection cleans things up, causing the keyspace to fill up. If you see `mvcc: database space exceeded` errors in the etcd logs or Kubernetes API server logs, you should consider increasing the keyspace size. This can be accomplished by setting the [quota-backend-bytes](https://etcd.io/docs/v3.5/op-guide/maintenance/#space-quota) setting on the etcd servers.
### Example: This snippet of the RKE cluster.yml file increases the keyspace size to 5GB
## Example: This Snippet of the RKE Cluster.yml file Increases the Keyspace Size to 5GB
```yaml
# RKE cluster.yml
@@ -21,9 +21,9 @@ services:
quota-backend-bytes: 5368709120
```
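For reference, a fuller sketch of the relevant `cluster.yml` section; `services.etcd.extra_args` is the standard RKE location for passing flags to etcd, and 5368709120 bytes corresponds to 5 GB:

```yaml
# Sketch of an RKE cluster.yml fragment raising the etcd keyspace to 5 GB.
services:
  etcd:
    extra_args:
      quota-backend-bytes: 5368709120
```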
## Scaling etcd disk performance
## Scaling etcd Disk Performance
You can follow the recommendations from [the etcd docs](https://etcd.io/docs/v3.4.0/tuning/#disk) on how to tune the disk priority on the host.
You can follow the recommendations from [the etcd docs](https://etcd.io/docs/v3.5/tuning/#disk) on how to tune the disk priority on the host.
Additionally, to reduce IO contention on the disks for etcd, you can use a dedicated device for the data and wal directory. Based on etcd best practices, mirroring RAID configurations are unnecessary because etcd replicates data between the nodes in the cluster. You can use striping RAID configurations to increase available IOPS.
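As an illustration of separating the WAL onto its own device, the sketch below assumes an RKE cluster where a dedicated disk is mounted at `/mnt/etcd-wal` on each etcd host; the host mount path, the container path, and the bind entry are hypothetical and should be adapted to your layout:

```yaml
# Sketch: give etcd a dedicated WAL directory backed by its own device.
services:
  etcd:
    extra_args:
      wal-dir: /var/lib/etcd-wal           # path inside the etcd container
    extra_binds:
      - "/mnt/etcd-wal:/var/lib/etcd-wal"  # host device mount -> container path
```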
@@ -16,11 +16,11 @@ Want to provide a user with access to _all_ projects within a cluster? See [Addi
:::
### Adding Members to a New Project
## Adding Members to a New Project
You can add members to a project as you create it (recommended if possible). For details on creating a new project, refer to the [cluster administration section.](../../how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces.md)
### Adding Members to an Existing Project
## Adding Members to an Existing Project
Following project creation, you can add users as project members so that they can access its resources.
@@ -56,4 +56,6 @@ If you want to use a node driver that Rancher doesn't support out-of-the-box, yo
### Developing Your Own Node Drivers
Node drivers are implemented with [Docker Machine](https://docs.docker.com/machine/).
Node drivers are implemented with [Rancher Machine](https://github.com/rancher/machine), a fork of [Docker Machine](https://github.com/docker/machine). Docker Machine is no longer under active development.
Refer to the original [Docker Machine documentation](https://github.com/docker/docs/blob/vnext-engine/machine/overview.md) for details on how to develop your own node drivers.
@@ -60,4 +60,4 @@ To convert an existing cluster to use an RKE template,
- A new RKE template is created.
- The cluster is converted to use the new template.
- New clusters can be [created from the new template.](apply-templates.md#creating-a-cluster-from-an-rke-template)
- New clusters can be [created from the new template.](#creating-a-cluster-from-an-rke-template)
@@ -62,6 +62,12 @@ After you configure Rancher to allow sign on using an external authentication se
| Allow members of Clusters, Projects, plus Authorized Users and Organizations | Any user in the authorization service and any group added as a **Cluster Member** or **Project Member** can log in to Rancher. Additionally, any user in the authentication service or group you add to the **Authorized Users and Organizations** list may log in to Rancher. |
| Restrict access to only Authorized Users and Organizations | Only users in the authentication service or groups added to the Authorized Users and Organizations can log in to Rancher. |
:::warning
Only trusted admin-level users should have access to the local cluster, which manages all of the other clusters in a Rancher instance. Rancher is directly installed on the local cluster, and Rancher's management features allow admins on the local cluster to provision, modify, connect to, and view details about downstream clusters. Since the local cluster is key to a Rancher instance's architecture, inappropriate access carries security risks.
:::
To set the Rancher access level for users in the authorization service, follow these steps:
1. In the upper left corner, click **☰ > Users & Authentication**.
@@ -51,7 +51,6 @@ You can integrate Okta with Rancher, so that authenticated users can access Ranc
:::
1. After you complete the **Configure Okta Account** form, click **Enable**.
Rancher redirects you to the IdP login page. Enter credentials that authenticate with Okta IdP to validate your Rancher Okta configuration.
@@ -30,6 +30,14 @@ Within Rancher, each person authenticates as a _user_, which is a login that gra
For more information how authorization works and how to customize roles, see [Roles Based Access Control (RBAC)](manage-role-based-access-control-rbac/manage-role-based-access-control-rbac.md).
## User Retention
In Rancher v2.7.14 and later, you can enable user retention. This feature automatically removes inactive users after a configurable period of time.
The user retention feature is disabled by default.
For more information, see [Enabling User Retention](../../advanced-user-guides/enable-user-retention.md).
## Pod Security Policies
_Pod Security Policies_ (or PSPs) are objects that control security-sensitive aspects of pod specification, e.g. root privileges. If a pod does not meet the conditions specified in the PSP, Kubernetes will not allow it to start, and Rancher will display an error message.
@@ -23,7 +23,7 @@ This option replaces "Rancher" with the value you provide in most places. Files
### Support Links
Use a url address to send new "File an Issue" reports instead of sending users to the Github issues page. Optionally show Rancher community support links.
Use a URL to send new "File an Issue" reports instead of sending users to the GitHub issues page. Optionally show Rancher community support links.
### Logo
@@ -40,7 +40,7 @@ Backups are created as .tar.gz files. These files can be pushed to S3 or Minio,
:::note
There is a known issue in Fleet that occurs after performing a restoration using the backup-restore-operator: Secrets used for clientSecretName and helmSecretName are not included in Fleet gitrepos. Refer [here](../../../integrations-in-rancher/fleet/overview.md#troubleshooting) for a workaround.
There is a known issue in Fleet that occurs after performing a restoration using the backup-restore-operator: Secrets used for clientSecretName and helmSecretName are not included in Fleet gitrepos. Refer [here](../../../integrations-in-rancher/fleet-gitops-at-scale/fleet-gitops-at-scale.md#troubleshooting) for a workaround.
:::
@@ -62,21 +62,6 @@ Install the [`rancher-backup chart`](https://github.com/rancher/backup-restore-o
### 2. Restore from backup using a Restore custom resource
:::note Important:
Kubernetes v1.22, available as an experimental feature of v2.6.3, does not support restoring from backup files containing CRDs with the apiVersion `apiextensions.k8s.io/v1beta1`. In v1.22, the default `resourceSet` in the rancher-backup app is updated to collect only CRDs that use `apiextensions.k8s.io/v1`. There are currently two ways to work around this issue:
1. Update the default `resourceSet` to collect the CRDs with the apiVersion v1.
1. Update the default `resourceSet` and the client to use the new APIs internally, with `apiextensions.k8s.io/v1` as the replacement.
:::note
When making or restoring backups for v1.22, the Rancher version and the local cluster's Kubernetes version should be the same. The Kubernetes version should be considered when restoring a backup since the supported apiVersion in the cluster and in the backup file could be different.
:::
:::
1. When using S3 object storage as the backup source for a restore that requires credentials, create a `Secret` object in this cluster to add the S3 credentials. The secret data must have two keys, `accessKey` and `secretKey`, that contain the S3 credentials.
The secret can be created in any namespace; this example uses the default namespace.
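For example, the secret could be created with `kubectl`; the name `s3-creds` is just a placeholder:

```bash
# Sketch: create a secret with the two keys the restore expects.
# kubectl base64-encodes the literal values automatically.
kubectl create secret generic s3-creds \
  --namespace default \
  --from-literal=accessKey=<your-access-key> \
  --from-literal=secretKey=<your-secret-key>
```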
@@ -79,15 +79,20 @@ If you are using [local snapshots](./back-up-rancher-launched-kubernetes-cluster
1. In the **Clusters** page, go to the cluster where you want to remove nodes.
1. In the **Machines** tab, click **⋮ > Delete** on each node you want to delete. Initially, the nodes hang in a `deleting` state, but once all etcd nodes are deleting, they are removed together. This is because Rancher sees all etcd nodes deleting and "short circuits" the etcd safe-removal logic.
1. After all etcd nodes are removed, add a new etcd node that you are planning to restore from.
1. After all etcd nodes are removed, add the new etcd node that you are planning to restore from. Assign the new node the role of `all` (etcd, controlplane, and worker).
- For custom clusters, go to the **Registration** tab then copy and run the registration command on your node. If the node has previously been used in a cluster, [clean the node](../manage-clusters/clean-cluster-nodes.md#cleaning-up-nodes) first.
- If the node was previously in a cluster, [clean the node](../manage-clusters/clean-cluster-nodes.md#cleaning-up-nodes) first.
- For custom clusters, go to the **Registration** tab and check the box for `etcd, controlplane, and worker`. Then copy and run the registration command on your node.
- For node driver clusters, a new node is provisioned automatically.
At this point, Rancher will indicate that restoration from etcd snapshot is required.
1. Restore from an etcd snapshot.
:::note
As the etcd node is a clean node, you may need to manually create the `/var/lib/rancher/<k3s/rke2>/server/db/snapshots/` path.
:::
- For S3 snapshots, restore using the UI.
1. Click the **Snapshots** tab to view the list of saved snapshots.
1. Go to the snapshot you want to restore and click **⋮ > Restore**.
@@ -95,7 +100,15 @@ If you are using [local snapshots](./back-up-rancher-launched-kubernetes-cluster
1. Click **Restore**.
- For local snapshots, restore using the UI is **not** available.
1. In the upper right corner, click **⋮ > Edit YAML**.
1. Define `spec.cluster.rkeConfig.etcdSnapshotRestore.name` as the filename of the snapshot on disk in `/var/lib/rancher/<k3s/rke2>/server/db/snapshots/`.
1. The example YAML below can be added under your `rkeConfig` to configure the etcd restore:
```yaml
...
rkeConfig:
etcdSnapshotRestore:
name: <string> # This field is required. Refers to the filename of the associated etcdsnapshot object.
...
```
1. After restoration is successful, you can scale your etcd nodes back up to the desired redundancy.
@@ -167,6 +167,23 @@ spec:
Only Helm 3 compatible charts are supported.
### Refresh Chart Repositories
The **Refresh** button can be used to sync changes from selected Helm chart repositories on the **Repositories** page.
To refresh a chart repository:
1. Click **☰ > Cluster Management**.
1. Find the name of the cluster whose repositories you want to access. Click **Explore** at the end of the cluster's row.
1. In the left navigation menu on the **Cluster Dashboard**, click **Apps > Repositories**.
1. Use the toggle next to the **State** field to select all repositories, or toggle specified chart repositories to sync changes.
1. Click **Refresh**.
1. The **⋮** at the end of each chart repository row also includes a **Refresh** option, which can be clicked to refresh the respective repository.
Upon refresh, non-air-gapped Rancher installations reflect chart repository changes immediately, and the **State** field for updated repositories moves from `In Progress` to `Active` once the action completes.
Air-gapped installations where Rancher is configured to use the packaged copy of Helm system charts ([`useBundledSystemChart=true`](../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha.md#helm-chart-options-for-air-gap-installations)) only refer to the bundled [system-chart](https://github.com/rancher/system-charts) repository and cannot be refreshed or synced.
## Deploy and Upgrade Charts
To install and deploy a chart:
@@ -19,7 +19,7 @@ These nodes must be in the same region. You may place these servers in separate
To install the Rancher management server on a high-availability RKE2 cluster, we recommend setting up the following infrastructure:
- **Three Linux nodes,** typically virtual machines, in the infrastructure provider of your choice.
- **A load balancer** to direct traffic to the two nodes.
- **A load balancer** to direct traffic to the nodes.
- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
### 1. Set up Linux Nodes
@@ -59,4 +59,4 @@ Depending on your environment, this may be an A record pointing to the load bala
You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one.
For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)
For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)
@@ -49,5 +49,5 @@ number of nodes for each Kubernetes role, refer to the section on [recommended a
### Networking
* Minimize network latency. Rancher recommends minimizing latency between the etcd nodes. The default setting for `heartbeat-interval` is `500`, and the default setting for `election-timeout` is `5000`. These [settings for etcd tuning](https://coreos.com/etcd/docs/latest/tuning.html) allow etcd to run in most networks (except really high latency networks).
* Minimize network latency. Rancher recommends minimizing latency between the etcd nodes. The default setting for `heartbeat-interval` is `500`, and the default setting for `election-timeout` is `5000`. These [settings for etcd tuning](https://etcd.io/docs/v3.5/tuning/) allow etcd to run in most networks (except really high latency networks).
* Cluster nodes should be located within a single region. Most cloud providers provide multiple availability zones within a region, which can be used to create higher availability for your cluster. Using multiple availability zones is fine for nodes with any role. If you are using [Kubernetes Cloud Provider](../set-up-cloud-providers/set-up-cloud-providers.md) resources, consult the documentation for any restrictions (i.e. zone storage restrictions).
@@ -57,7 +57,7 @@ The number of nodes that you can lose at once while maintaining cluster availabi
References:
* [Official etcd documentation on optimal etcd cluster size](https://etcd.io/docs/v3.4.0/faq/#what-is-failure-tolerance)
* [Official etcd documentation on optimal etcd cluster size](https://etcd.io/docs/v3.5/faq/#what-is-failure-tolerance)
* [Official Kubernetes documentation on operating etcd clusters for Kubernetes](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/)
### Number of Worker Nodes
@@ -108,7 +108,7 @@ Regarding CPU and memory, it is recommended that the different planes of Kuberne
For hardware recommendations for large Kubernetes clusters, refer to the official Kubernetes documentation on [building large clusters.](https://kubernetes.io/docs/setup/best-practices/cluster-large/)
For hardware recommendations for etcd clusters in production, refer to the official [etcd documentation.](https://etcd.io/docs/v3.4.0/op-guide/hardware/)
For hardware recommendations for etcd clusters in production, refer to the official [etcd documentation.](https://etcd.io/docs/v3.5/op-guide/hardware/)
## Networking Requirements
@@ -184,9 +184,7 @@ To prevent issues when upgrading, the [Kubernetes upgrade best practices](https:
## Authorized Cluster Endpoint Support for RKE2 and K3s Clusters
_Available as of v2.6.3_
Authorized Cluster Endpoint (ACE) support has been added for registered RKE2 and K3s clusters. This support includes manual steps you will perform on the downstream cluster to enable the ACE. For additional information on the authorized cluster endpoint, click [here](../manage-clusters/access-clusters/authorized-cluster-endpoint.md).
Rancher supports Authorized Cluster Endpoints (ACE) for registered RKE2 and K3s clusters. This support includes manual steps you will perform on the downstream cluster to enable the ACE. For additional information on the authorized cluster endpoint, click [here](../manage-clusters/access-clusters/authorized-cluster-endpoint.md).
:::note Notes:
@@ -332,7 +332,7 @@ Refer to the offical AWS upstream documentation for the [cloud controller manage
<Tabs>
<TabItem value="RKE2">
Official upstream docs for [Helm chart installation](https://github.com/kubernetes/cloud-provider-aws/tree/master/charts/aws-cloud-controller-manager) can be found on Github.
Official upstream docs for [Helm chart installation](https://github.com/kubernetes/cloud-provider-aws/tree/master/charts/aws-cloud-controller-manager) can be found on GitHub.
1. Add the Helm repository:
@@ -465,7 +465,7 @@ kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager
<TabItem value="RKE">
Official upstream docs for [Helm chart installation](https://github.com/kubernetes/cloud-provider-aws/tree/master/charts/aws-cloud-controller-manager) can be found on Github.
Official upstream docs for [Helm chart installation](https://github.com/kubernetes/cloud-provider-aws/tree/master/charts/aws-cloud-controller-manager) can be found on GitHub.
1. Add the Helm repository:
@@ -21,65 +21,48 @@ To interact with Azure APIs, an AKS cluster requires an Azure Active Directory (
Before creating the service principal, you need to obtain the following information from the [Microsoft Azure Portal](https://portal.azure.com):
- Subscription ID
- Client ID
- Client ID (also known as app ID)
- Client secret
The below sections describe how to set up these prerequisites using either the Azure command line tool or the Azure portal.
### Setting Up the Service Principal with the Azure Command Line Tool
You can create the service principal by running this command:
You must assign roles to the service principal so that it has communication privileges with the AKS API. It also needs access to create and list virtual networks.
In the following example, the command creates the service principal and gives it the Contributor role. The Contributor role can manage anything on AKS but cannot give access to others. Note that the `--scopes` argument must be given the full path to at least one Azure resource:
```
az ad sp create-for-rbac --skip-assignment
az ad sp create-for-rbac --role Contributor --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>
```
The result should show information about the new service principal:
```
{
"appId": "xxxx--xxx",
"displayName": "<SERVICE-PRINCIPAL-NAME>",
"name": "http://<SERVICE-PRINCIPAL-NAME>",
"password": "<SECRET>",
"tenant": "<TENANT NAME>"
"displayName": "<service-principal-name>",
"name": "http://<service-principal-name>",
"password": "<secret>",
"tenant": "<tenant-name>"
}
```
You also need to add roles to the service principal so that it has privileges for communication with the AKS API. It also needs access to create and list virtual networks.
Below is an example command for assigning the Contributor role to a service principal. Contributors can manage anything on AKS but cannot give access to others:
The following creates a [Resource Group](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-cli) to contain your Azure resources:
```
az role assignment create \
--assignee $appId \
--scope /subscriptions/$<SUBSCRIPTION-ID>/resourceGroups/$<GROUP> \
--role Contributor
```
You can also create the service principal and give it Contributor privileges by combining the two commands into one. In this command, the scope needs to provide a full path to an Azure resource:
```
az ad sp create-for-rbac \
--scope /subscriptions/$<SUBSCRIPTION-ID>/resourceGroups/$<GROUP> \
--role Contributor
```
Create the Resource Group by running this command:
```
az group create --location AZURE_LOCATION_NAME --resource-group AZURE_RESOURCE_GROUP_NAME
az group create --location <azure-location-name> --resource-group <resource-group-name>
```
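Putting these pieces together, the sketch below looks up your subscription ID with the Azure CLI and then scopes the service principal to the resource group created above; the resource group name is a placeholder:

```bash
# Sketch: fetch the subscription ID, then create a Contributor-scoped
# service principal limited to a single resource group.
SUBSCRIPTION_ID=$(az account show --query id -o tsv)
az ad sp create-for-rbac \
  --role Contributor \
  --scopes /subscriptions/$SUBSCRIPTION_ID/resourceGroups/<resource-group-name>
```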
### Setting Up the Service Principal from the Azure Portal
You can also follow these instructions to set up a service principal and give it role-based access from the Azure Portal.
Follow these instructions to set up a service principal and give it role-based access from the Azure Portal.
1. Go to the Microsoft Azure Portal [home page](https://portal.azure.com).
1. Click **Azure Active Directory**.
1. Click **App registrations**.
1. Click **New registration**.
1. Enter a name. This will be the name of your service principal.
1. Enter a name for your service principal.
1. Optional: Choose which accounts can use the service principal.
1. Click **Register**.
1. You should now see the name of your service principal under **Azure Active Directory > App registrations**.
@@ -101,7 +84,7 @@ To give role-based access to your service principal,
**Result:** Your service principal now has access to AKS.
## 1. Create the AKS Cloud Credentials
## Create the AKS Cloud Credentials
1. In the Rancher UI, click **☰ > Cluster Management**.
1. Click **Cloud Credentials**.
@@ -110,7 +93,7 @@ To give role-based access to your service principal,
1. Fill out the form. For help with filling out the form, see the [configuration reference.](../../../../reference-guides/cluster-configuration/rancher-server-configuration/aks-cluster-configuration.md#cloud-credentials)
1. Click **Create**.
## 2. Create the AKS Cluster
## Create the AKS Cluster
Use Rancher to set up and configure your Kubernetes cluster.
@@ -124,7 +107,8 @@ Use Rancher to set up and configure your Kubernetes cluster.
You can access your cluster after its state is updated to **Active**.
## Role-based Access Control
## Configure Role-based Access Control
When provisioning an AKS cluster in the Rancher UI, RBAC is not configurable because it is required to be enabled.
RBAC is required for AKS clusters that are registered or imported into Rancher.
@@ -135,8 +119,8 @@ Assign the Rancher AKSv2 role to the service principal with the Azure Command Li
```
az role assignment create \
--assignee CLIENT_ID \
--scope "/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP_NAME" \
--assignee <client-id> \
--scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>" \
--role "Rancher AKSv2"
```
@@ -46,7 +46,7 @@ If you need to create a private registry, refer to the documentation pages for y
:::
1. Select a namespace for the registry.
1. Select the website that hosts your private registry. Then enter credentials that authenticate with the registry. For example, if you use DockerHub, provide your DockerHub username and password.
1. Select the website that hosts your private registry. Then enter credentials that authenticate with the registry. For example, if you use Docker Hub, provide your Docker Hub username and password.
1. Click **Save**.
**Result:**
@@ -89,7 +89,7 @@ Before v2.6, secrets were required to be in a project scope. Projects are no lon
:::
1. Select a namespace for the registry.
1. Select the website that hosts your private registry. Then enter credentials that authenticate with the registry. For example, if you use DockerHub, provide your DockerHub username and password.
1. Select the website that hosts your private registry. Then enter credentials that authenticate with the registry. For example, if you use Docker Hub, provide your Docker Hub username and password.
1. Click **Save**.
**Result:**
@@ -15,9 +15,9 @@ Rancher can provision nodes in vSphere and install Kubernetes on them. When crea
A vSphere cluster may consist of multiple groups of VMs with distinct properties, such as the amount of memory or the number of vCPUs. This grouping allows for fine-grained control over the sizing of nodes for each Kubernetes role.
## VMware vSphere Enhancements in Rancher v2.3
## VMware vSphere Enhancements
The vSphere node templates have been updated, allowing you to bring cloud operations on-premises with the following enhancements:
The vSphere node templates allow you to bring cloud operations on-premises with the following enhancements:
### Self-healing Node Pools
@@ -39,12 +39,6 @@ For the fields to be populated, your setup needs to fulfill the [prerequisites.]
You can provision VMs with any operating system that supports `cloud-init`. Only YAML format is supported for the [cloud config.](https://cloudinit.readthedocs.io/en/latest/topics/examples.html)
### Video Walkthrough of v2.3.3 Node Template Features
In this YouTube video, we demonstrate how to set up a node template with the new features designed to help you bring cloud operations to on-premises clusters.
<YouTube id="dPIwg6x1AlU"/>
## Creating a VMware vSphere Cluster
In [this section,](provision-kubernetes-clusters-in-vsphere.md) you'll learn how to use Rancher to install an [RKE](https://rancher.com/docs/rke/latest/en/) Kubernetes cluster in vSphere.
@@ -23,7 +23,7 @@ You will need a separate kubeconfig file for each cluster that you have access t
After you download the kubeconfig file, you will be able to use the kubeconfig file and its Kubernetes [contexts](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration) to access your downstream cluster.
If admins have [kubeconfig token generation turned off](../../../../reference-guides/about-the-api/api-tokens.md#disable-tokens-in-generated-kubeconfigs), the kubeconfig file requires [rancher cli](./authorized-cluster-endpoint.md) to be present in your PATH.
If admins have [kubeconfig token generation turned off](../../../../reference-guides/about-the-api/api-tokens.md#disable-tokens-in-generated-kubeconfigs), the kubeconfig file requires the [Rancher CLI](../../../../reference-guides/cli-with-rancher/rancher-cli.md) to be present in your PATH.
### Two Authentication Methods for RKE Clusters
@@ -122,7 +122,7 @@ Install [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
## Cleaning up Nodes
<Tabs>
<Tabs groupId="k8s-distro" queryString>
<TabItem value="RKE1">
Before you run the following commands, first remove the node through the Rancher UI.
@@ -19,7 +19,7 @@ To provision new storage for your workloads, follow these steps:
1. [Add a storage class and configure it to use your storage.](#1-add-a-storage-class-and-configure-it-to-use-your-storage)
2. [Use the Storage Class for Pods Deployed with a StatefulSet.](#2-use-the-storage-class-for-pods-deployed-with-a-statefulset)
### Prerequisites
## Prerequisites
- To set up persistent storage, the `Manage Volumes` [role](../../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-role-reference) is required.
- If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider.
@@ -42,7 +42,7 @@ hostPath | `host-path`
To use a storage provisioner that is not on the above list, you will need to use a [feature flag to enable unsupported storage drivers.](../../../../advanced-user-guides/enable-experimental-features/unsupported-storage-drivers.md)
### 1. Add a storage class and configure it to use your storage
## 1. Add a storage class and configure it to use your storage
These steps describe how to set up a storage class at the cluster level.
@@ -59,7 +59,7 @@ These steps describe how to set up a storage class at the cluster level.
For full information about the storage class parameters, refer to the official [Kubernetes documentation.](https://kubernetes.io/docs/concepts/storage/storage-classes/#parameters)
### 2. Use the Storage Class for Pods Deployed with a StatefulSet
## 2. Use the Storage Class for Pods Deployed with a StatefulSet
StatefulSets manage the deployment and scaling of Pods while maintaining a sticky identity for each Pod. In this StatefulSet, we will configure a VolumeClaimTemplate. Each Pod managed by the StatefulSet will be deployed with a PersistentVolumeClaim based on this VolumeClaimTemplate. The PersistentVolumeClaim will refer to the StorageClass that we created. Therefore, when each Pod managed by the StatefulSet is deployed, it will be bound to dynamically provisioned storage using the StorageClass defined in its PersistentVolumeClaim.
@@ -70,7 +70,7 @@ StatefulSets manage the deployment and scaling of Pods while maintaining a stick
1. Click **StatefulSet**.
1. In the **Volume Claim Templates** tab, click **Add Claim Template**.
1. Enter a name for the persistent volume.
1. In the **StorageClass* field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet.
1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet.
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Launch**.
@@ -84,7 +84,7 @@ To attach the PVC to an existing workload,
1. Go to the workload that will use storage provisioned with the StorageClass that you created and click **⋮ > Edit Config**.
1. In the **Volume Claim Templates** section, click **Add Claim Template**.
1. Enter a persistent volume name.
1. In the **StorageClass* field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet.
1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet.
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Save**.
@@ -20,12 +20,12 @@ To set up storage, follow these steps:
2. [Add a PersistentVolume that refers to the persistent storage.](#2-add-a-persistentvolume-that-refers-to-the-persistent-storage)
3. [Use the Storage Class for Pods Deployed with a StatefulSet.](#3-use-the-storage-class-for-pods-deployed-with-a-statefulset)
### Prerequisites
## Prerequisites
- To create a persistent volume as a Kubernetes resource, you must have the `Manage Volumes` [role.](../../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-role-reference)
- If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider.
### 1. Set up persistent storage
## 1. Set up persistent storage
Creating a persistent volume in Rancher will not create a storage volume. It only creates a Kubernetes resource that maps to an existing volume. Therefore, before you can create a persistent volume as a Kubernetes resource, you must have storage provisioned.
@@ -33,7 +33,7 @@ The steps to set up a persistent storage device will differ based on your infras
If you have a pool of block storage, and you don't want to use a cloud provider, Longhorn could help you provide persistent storage to your Kubernetes cluster. For more information, see [this page.](../../../../../integrations-in-rancher/longhorn.md)
### 2. Add a PersistentVolume that refers to the persistent storage
## 2. Add a PersistentVolume that refers to the persistent storage
These steps describe how to set up a PersistentVolume at the cluster level in Kubernetes.
@@ -51,8 +51,7 @@ These steps describe how to set up a PersistentVolume at the cluster level in Ku
**Result:** Your new persistent volume is created.
### 3. Use the Storage Class for Pods Deployed with a StatefulSet
## 3. Use the Storage Class for Pods Deployed with a StatefulSet
StatefulSets manage the deployment and scaling of Pods while maintaining a sticky identity for each Pod. In this StatefulSet, we will configure a VolumeClaimTemplate. Each Pod managed by the StatefulSet will be deployed with a PersistentVolumeClaim based on this VolumeClaimTemplate. The PersistentVolumeClaim will refer to the PersistentVolume that we created. Therefore, when each Pod managed by the StatefulSet is deployed, it will be bound to a PersistentVolume as defined in its PersistentVolumeClaim.
@@ -86,4 +85,4 @@ The following steps describe how to assign persistent storage to an existing wor
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Launch**.
**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC.
**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC.
@@ -304,7 +304,7 @@ cloud-provider|-|Cloud provider type|
|max-node-provision-time|"15m"|Maximum time CA waits for node to be provisioned|
|nodes|-|sets min,max size and other configuration data for a node group in a format accepted by cloud provider. Can be used multiple times. Format: `<min>:<max>:<other...>`|
|node-group-auto-discovery|-|One or more definition(s) of node group auto-discovery. A definition is expressed `<name of discoverer>:[<key>[=<value>]]`|
|estimator|-|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]|
|estimator|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]|
|expander|"random"|Type of node group expander to be used in scale up. Available values: `["random","most-pods","least-waste","price","priority"]`|
|ignore-daemonsets-utilization|false|Should CA ignore DaemonSet pods when calculating resource utilization for scaling down|
|ignore-mirror-pods-utilization|false|Should CA ignore Mirror pods when calculating resource utilization for scaling down|
@@ -173,12 +173,12 @@ To add members:
### 4. Optional: Add Resource Quotas
Resource quotas limit the resources that a project (and its namespaces) can consume. For more information, see [Resource Quotas](projects-and-namespaces.md).
Resource quotas limit the resources that a project (and its namespaces) can consume. For more information, see [Resource Quotas](../../advanced-user-guides/manage-projects/manage-project-resource-quotas/manage-project-resource-quotas.md).
To add a resource quota,
1. In the **Resource Quotas** tab, click **Add Resource**.
1. Select a **Resource Type**. For more information, see [Resource Quotas.](projects-and-namespaces.md).
1. Select a **Resource Type**. For more information, see [Resource Quotas.](../../advanced-user-guides/manage-projects/manage-project-resource-quotas/manage-project-resource-quotas.md).
1. Enter values for the **Project Limit** and the **Namespace Default Limit**.
1. **Optional:** Specify **Container Default Resource Limit**, which will be applied to every container started in the project. This parameter is recommended if you have CPU or memory limits set by the resource quota. It can be overridden at the individual namespace or container level. For more information, see [Container Default Resource Limit](../../advanced-user-guides/manage-projects/manage-project-resource-quotas/manage-project-resource-quotas.md).
1. Click **Create**.
@@ -25,11 +25,11 @@ To manage permissions in a vanilla Kubernetes cluster, cluster admins configure
:::note
If you create a namespace with `kubectl`, it may be unusable because `kubectl` doesn't require your new namespace to be scoped within a project that you have access to. If your permissions are restricted to the project level, it is better to [create a namespace through Rancher](manage-namespaces.md) to ensure that you will have permission to access the namespace.
If you create a namespace with `kubectl`, it may be unusable because `kubectl` doesn't require your new namespace to be scoped within a project that you have access to. If your permissions are restricted to the project level, it is better to [create a namespace through Rancher](#creating-namespaces) to ensure that you will have permission to access the namespace.
:::
### Creating Namespaces
## Creating Namespaces
Create a new namespace to isolate apps and resources in a project.
@@ -50,7 +50,7 @@ When working with project resources that you can assign to a namespace (i.e., [w
**Result:** Your namespace is added to the project. You can begin assigning cluster resources to the namespace.
### Moving Namespaces to Another Project
## Moving Namespaces to Another Project
Cluster admins and members may occasionally need to move a namespace to another project, such as when you want a different team to start using the application.
@@ -71,7 +71,7 @@ Cluster admins and members may occasionally need to move a namespace to another
**Result:** Your namespace is moved to a different project (or is unattached from all projects). If any project resources are attached to the namespace, the namespace releases them and then attaches resources from the new project.
### Editing Namespace Resource Quotas
## Editing Namespace Resource Quotas
You can always override the namespace default limit to provide a specific namespace with access to more (or less) project resources.
@@ -14,7 +14,7 @@ To configure the custom resources, go to the **Cluster Dashboard** To configure
1. On the **Clusters** page, go to the cluster where you want to configure CIS scans and click **Explore**.
1. In the left navigation bar, click **CIS Benchmark**.
### Scans
## Scans
A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed.
@@ -31,7 +31,7 @@ spec:
scanProfileName: rke-profile-hardened
```
### Profiles
## Profiles
A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark.
@@ -66,7 +66,7 @@ spec:
- "1.1.21"
```
### Benchmark Versions
## Benchmark Versions
A benchmark version is the name of the benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark.
@@ -17,7 +17,7 @@ When a cluster scan is run, you need to select a Profile which points to a speci
Follow all the steps below to add a custom Benchmark Version and run a scan using it.
### 1. Prepare the Custom Benchmark Version ConfigMap
## 1. Prepare the Custom Benchmark Version ConfigMap
To create a custom benchmark version, first you need to create a ConfigMap containing the benchmark version's config files and upload it to your Kubernetes cluster where you want to run the scan.
@@ -42,7 +42,7 @@ To prepare a custom benchmark version ConfigMap, suppose we want to add a custom
kubectl create configmap -n <namespace> foo --from-file=<path to directory foo>
```
### 2. Add a Custom Benchmark Version to a Cluster
## 2. Add a Custom Benchmark Version to a Cluster
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**.
@@ -54,7 +54,7 @@ To prepare a custom benchmark version ConfigMap, suppose we want to add a custom
1. Add the minimum and maximum Kubernetes version limits applicable, if any.
1. Click **Create**.
### 3. Create a New Profile for the Custom Benchmark Version
## 3. Create a New Profile for the Custom Benchmark Version
To run a scan using your custom benchmark version, you need to add a new Profile pointing to this benchmark version.
@@ -66,7 +66,7 @@ To run a scan using your custom benchmark version, you need to add a new Profile
1. Choose the Benchmark Version from the dropdown.
1. Click **Create**.
### 4. Run a Scan Using the Custom Benchmark Version
## 4. Run a Scan Using the Custom Benchmark Version
Once the Profile pointing to your custom benchmark version `foo` has been created, you can create a new Scan to run the custom test configs in the Benchmark Version.
@@ -19,19 +19,22 @@ In order to deploy and run the adapter successfully, you need to ensure its vers
| Rancher Version | Adapter Version |
|-----------------|:---------------:|
| v2.7.0 | v2.0.0 |
| v2.7.1 | v2.0.0 |
| v2.7.2 | v2.0.1 |
| v2.7.3 | v2.0.1 |
| v2.7.4 | v2.0.1 |
| v2.7.5 | v2.0.2 |
| v2.7.6 | v2.0.2 |
| v2.7.7 | v2.0.2 |
| v2.7.8 | v2.0.2 |
| v2.7.9 | v2.0.2 |
| v2.7.10 | v2.0.2 |
| v2.7.11 | v2.0.4 |
| v2.7.15 | v2.0.4 |
| v2.7.14 | v2.0.4 |
| v2.7.13 | v2.0.4 |
| v2.7.12 | v2.0.4 |
| v2.7.11 | v2.0.4 |
| v2.7.10 | v2.0.2 |
| v2.7.9 | v2.0.2 |
| v2.7.8 | v2.0.2 |
| v2.7.7 | v2.0.2 |
| v2.7.6 | v2.0.2 |
| v2.7.5 | v2.0.2 |
| v2.7.4 | v2.0.1 |
| v2.7.3 | v2.0.1 |
| v2.7.2 | v2.0.1 |
| v2.7.1 | v2.0.0 |
| v2.7.0 | v2.0.0 |
### 1. Gain Access to the Local Cluster
@@ -1,5 +1,5 @@
---
title: Supportconfig bundle
title: Supportconfig Bundle
---
<head>
@@ -12,7 +12,7 @@ These bundles can be created through Rancher or through direct access to the clu
> **Note:** Only admin users can generate/download supportconfig bundles, regardless of method.
### Accessing through Rancher
## Accessing Through Rancher
First, click on the hamburger menu. Then click the `Get Support` button.
@@ -24,7 +24,7 @@ In the next page, click on the `Generate Support Config` button.
![Get Support](/img/generate-support-config.png)
### Accessing without rancher
## Accessing Without Rancher
First, generate a kubeconfig for the cluster that Rancher is installed on.
@@ -6,7 +6,7 @@ title: Cluster API (CAPI) with Rancher Turtles
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/integrations-in-rancher/cluster-api"/>
</head>
[Rancher Turtles](https://turtles.docs.rancher.com/) is a [Rancher extension](../rancher-extensions.md) that manages the lifecycle of provisioned Kubernetes clusters, by providing integration between your Cluster API (CAPI) and Rancher. With Rancher Turtles, you can:
[Rancher Turtles](https://turtles.docs.rancher.com/) is a [Kubernetes Operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/#operators-in-kubernetes) that manages the lifecycle of provisioned Kubernetes clusters, by providing integration between your Cluster API (CAPI) and Rancher. With Rancher Turtles, you can:
- Import CAPI clusters into Rancher, by installing the Rancher Cluster Agent in CAPI provisioned clusters.
- Configure the [CAPI Operator](https://turtles.docs.rancher.com/reference-guides/rancher-turtles-chart/values#cluster-api-operator-values).
@@ -185,7 +185,7 @@ For detailed information on the values supported by the chart and their usage, r
:::note
Remember that if you opt for this installation option, you must manage the CAPI Operator installation yourself. You can follow the [CAPI Operator guide](https://turtles.docs.rancher.com/tasks/capi-operator/intro) in the Rancher Turtles documentation for assistance.
Remember that if you opt for this installation option, you must manage the CAPI Operator installation yourself. You can follow the [CAPI Operator guide](https://turtles.docs.rancher.com/contributing/install_capi_operator) in the Rancher Turtles documentation for assistance.
:::
@@ -63,6 +63,8 @@ The Helm chart in the git repository must include its dependencies in the charts
- **Temporary Workaround**: By default, user-defined secrets are not backed up in Fleet. It is necessary to recreate secrets if performing a disaster recovery restore or migration of Rancher into a fresh cluster. To modify the resourceSet to include extra resources you want to back up, refer to the docs [here](https://github.com/rancher/backup-restore-operator#user-flow).
- **Debug logging**: To enable debug logging of Fleet components, create a new **fleet** entry in the existing **rancher-config** ConfigMap in the **cattle-system** namespace with the value `{"debug": 1, "debugLevel": 1}`. The Fleet application restarts after you save the ConfigMap.
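As a sketch, the debug entry described above could be added with a single `kubectl patch`; verify that the `rancher-config` ConfigMap exists in your installation before applying it:

```bash
# Add (or overwrite) the "fleet" key in the rancher-config ConfigMap.
kubectl -n cattle-system patch configmap rancher-config \
  --type merge \
  -p '{"data":{"fleet":"{\"debug\": 1, \"debugLevel\": 1}"}}'
```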
## Documentation
The Fleet documentation is at https://fleet.rancher.io/.
See the [official Fleet documentation](https://fleet.rancher.io/) to learn more.
@@ -30,7 +30,20 @@ When adding Fleet agent environment variables for the proxy, replace <PROXY_IP>
## Setting Environment Variables in the Rancher UI
To add the environment variable to an existing cluster,
To add the environment variable to an existing cluster:
<Tabs groupId="k8s-distro">
<TabItem value="RKE2/K3s" default>
1. Click **☰ > Cluster Management**.
1. Go to the cluster where you want to add environment variables and click **⋮ > Edit Config**.
1. Click **Agent Environment Vars** under **Cluster configuration**.
1. Click **Add**.
1. Enter the [required environment variables](#required-environment-variables)
1. Click **Save**.
</TabItem>
<TabItem value="RKE">
1. Click **☰ > Cluster Management**.
1. Go to the cluster where you want to add environment variables and click **⋮ > Edit Config**.
@@ -39,6 +52,9 @@ To add the environment variable to an existing cluster,
1. Enter the [required environment variables](#required-environment-variables)
1. Click **Save**.
</TabItem>
</Tabs>
**Result:** The Fleet agent works behind a proxy.
## Setting Environment Variables on Private Nodes
@@ -45,7 +45,7 @@ To configure the resources allocated to an Istio component,
1. In the left navigation bar, click **Apps**.
1. Click **Installed Apps**.
1. Go to the `istio-system` namespace. In one of the Istio workloads, such as `rancher-istio`, click **⋮ > Edit/Upgrade**.
1. Click **Upgrade** to edit the base components via changes to the values.yaml or add an [overlay file](configuration-options/configuration-options.md#overlay-file). For more information about editing the overlay file, see [this section.](cpu-and-memory-allocations.md#editing-the-overlay-file)
1. Click **Upgrade** to edit the base components via changes to the values.yaml or add an [overlay file](configuration-options/configuration-options.md#overlay-file). For more information about editing the overlay file, see [this section.](#editing-the-overlay-file)
1. Change the CPU or memory allocations, the nodes where each component will be scheduled to, or the node tolerations.
1. Click **Upgrade** to roll out the changes.
@@ -43,10 +43,14 @@ It also includes the following:
### Kiali
Kiali is a comprehensive visualization aid used for graphing traffic flow throughout the service mesh. It allows you to see how they are connected, including the traffic rates and latencies between them.
[Kiali](https://kiali.io/) is a comprehensive visualization aid used for graphing traffic flow throughout the service mesh. It allows you to see how services are connected, including the traffic rates and latencies between them.
You can check the health of the service mesh, or drill down to see the incoming and outgoing requests to a single component.
:::note
For Istio installations `103.1.0+up1.19.6` and later, Kiali uses a token value for its authentication strategy. The name of the Kiali service account in Rancher is `kiali`. Use this name if you are writing commands that require you to enter the name of the Kiali service account (for example, if you are trying to generate or retrieve a session token). For more information, refer to the [Kiali token authentication FAQ](https://kiali.io/docs/faq/authentication/).
:::
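For example, on Kubernetes v1.24 or later you could request a session token for that service account with `kubectl create token`; the `istio-system` namespace is an assumption based on the default Rancher Istio installation:

```bash
# Sketch: mint a short-lived token for the kiali service account.
kubectl -n istio-system create token kiali
```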
### Jaeger
Our Istio installer includes a quick-start, all-in-one installation of [Jaeger,](https://www.jaegertracing.io/) a tool used for tracing distributed systems.
@@ -71,6 +75,10 @@ To remove Istio components from a cluster, namespace, or workload, refer to the
> By default, only cluster-admins have access to Kiali. For instructions on how to allow admin, edit, or view roles to access them, see [this section.](rbac-for-istio.md)
:::note
For Istio installations `103.1.0+up1.19.6` and later, Kiali uses a token value for its authentication strategy. The name of the Kiali service account in Rancher is `kiali`. Use this name if you are writing commands that require you to enter the name of the Kiali service account (for example, if you are trying to generate or retrieve a session token). For more information, refer to the [Kiali token authentication FAQ](https://kiali.io/docs/faq/authentication/).
:::
After Istio is set up in a cluster, Grafana, Prometheus, and Kiali are available in the Rancher UI.
To access the Grafana and Prometheus visualizations,
@@ -10,7 +10,7 @@ This section summarizes the architecture of the Rancher logging application.
For more details about how the Logging operator works, see the [official documentation.](https://kube-logging.github.io/docs/#architecture)
### How the Logging Operator Works
## How the Logging Operator Works
The Logging operator automates the deployment and configuration of a Kubernetes logging pipeline. It deploys and configures a Fluent Bit DaemonSet on every node to collect container and application logs from the node file system.
@@ -6,7 +6,7 @@ title: rancher-logging Helm Chart Options
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/integrations-in-rancher/logging/logging-helm-chart-options"/>
</head>
### Enable/Disable Windows Node Logging
## Enable/Disable Windows Node Logging
You can enable or disable Windows node logging by setting `global.cattle.windows.enabled` to either `true` or `false` in the `values.yaml`.
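A minimal `values.yaml` sketch that enables Windows node logging, using the key path named above:

```yaml
global:
  cattle:
    windows:
      enabled: true
```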
@@ -21,7 +21,7 @@ Currently an [issue](https://github.com/rancher/rancher/issues/32325) exists whe
:::
### Working with a Custom Docker Root Directory
## Working with a Custom Docker Root Directory
If using a custom Docker root directory, you can set `global.dockerRootDirectory` in `values.yaml`.
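For example, if Docker's root directory were `/mnt/docker` (an illustrative path), the chart value would be set like this:

```yaml
global:
  dockerRootDirectory: /mnt/docker
```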
@@ -31,11 +31,11 @@ Note that this only affects Linux nodes.
If there are any Windows nodes in the cluster, the change will not be applicable to those nodes.
### Adding NodeSelector Settings and Tolerations for Custom Taints
## Adding NodeSelector Settings and Tolerations for Custom Taints
You can add your own `nodeSelector` settings and add `tolerations` for additional taints by editing the logging Helm chart values. For details, see [this page.](taints-and-tolerations.md)
### Enabling the Logging Application to Work with SELinux
## Enabling the Logging Application to Work with SELinux
:::note Requirements:
@@ -49,7 +49,7 @@ To use Logging v2 with SELinux, we recommend installing the `rancher-selinux` RP
Then, when installing the logging application, configure the chart to be SELinux aware by changing `global.seLinux.enabled` to `true` in the `values.yaml`.
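In `values.yaml`, that is simply:

```yaml
global:
  seLinux:
    enabled: true
```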
### Additional Logging Sources
## Additional Logging Sources
By default, Rancher collects logs for [control plane components](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) and [node components](https://kubernetes.io/docs/concepts/overview/components/#node-components) for all cluster types.
@@ -72,7 +72,7 @@ When enabled, Rancher collects all additional node and control plane logs the pr
If you're already using a cloud provider's own logging solution such as AWS CloudWatch or Google Cloud operations suite (formerly Stackdriver), it is not necessary to enable this option as the native solution will have unrestricted access to all logs.
### Systemd Configuration
## Systemd Configuration
In Rancher logging, `SystemdLogPath` must be configured for K3s and RKE2 Kubernetes distributions.
@@ -87,7 +87,7 @@ K3s and RKE2 Kubernetes distributions log to journald, which is the subsystem of
* If `/var/log/journal` exists, then use `/var/log/journal`.
* If `/var/log/journal` does not exist, then use `/run/log/journal`.
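A quick way to check which journald directory a node actually uses (a sketch; run it on the node itself):

```bash
# journald writes to /var/log/journal when that directory exists,
# otherwise to /run/log/journal.
if [ -d /var/log/journal ]; then
  echo "/var/log/journal"
else
  echo "/run/log/journal"
fi
```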
:::note Notes:
:::note
If any value not described above is returned, Rancher Logging will not be able to collect control plane logs. To address this issue, you will need to perform the following actions on every control plane node:
@@ -20,7 +20,7 @@ Both provide choice for the what node(s) the pod will run on.
- [Adding NodeSelector Settings and Tolerations for Custom Taints](#adding-nodeselector-settings-and-tolerations-for-custom-taints)
### Default Implementation in Rancher's Logging Stack
## Default Implementation in Rancher's Logging Stack
By default, Rancher taints all Linux nodes with `cattle.io/os=linux`, and does not taint Windows nodes.
The logging stack pods have `tolerations` for this taint, which enables them to run on Linux nodes.
@@ -47,7 +47,7 @@ In the above example, we ensure that our pod only runs on Linux nodes, and we ad
You can do the same with Rancher's existing taints, or with your own custom ones.
### Adding NodeSelector Settings and Tolerations for Custom Taints
## Adding NodeSelector Settings and Tolerations for Custom Taints
If you would like to add your own `nodeSelector` settings, or if you would like to add `tolerations` for additional taints, you can pass the following to the chart's values.
@@ -15,7 +15,7 @@ For information on V1 monitoring and alerting, available in Rancher v2.2 up to v
Using the `rancher-monitoring` application, you can quickly deploy leading open-source monitoring and alerting solutions onto your cluster.
### Features
## Features
Prometheus lets you view metrics from your Rancher and Kubernetes objects. Using timestamps, Prometheus lets you query and view these metrics in easy-to-read graphs and visuals, either through the Rancher UI or Grafana, which is an analytics viewing platform deployed along with Prometheus.
@@ -97,7 +97,6 @@ To be able to fully deploy Monitoring V2 for Windows, all of your Windows hosts
For more details on how to upgrade wins on existing Windows hosts, see [Windows cluster support for Monitoring V2.](windows-support.md)
## Known Issues
There is a [known issue](https://github.com/rancher/rancher/issues/28787#issuecomment-693611821) that K3s clusters require more than the allotted default memory. If you enable monitoring on a K3s cluster, set `prometheus.prometheusSpec.resources.memory.limit` to 2500 Mi and `prometheus.prometheusSpec.resources.memory.request` to 1750 Mi.
@@ -112,7 +112,7 @@ Monitoring also creates additional `ClusterRoles` that aren't assigned to users
| Role | Purpose |
| ------------------------------| ---------------------------|
| monitoring-ui-view | _Available as of Monitoring v2 14.5.100+_ This ClusterRole allows users with write access to the project to view metrics graphs for the specified cluster in the Rancher UI. This is done by granting Read-only access to external Monitoring UIs. Users with this role have permission to list the Prometheus, Alertmanager, and Grafana endpoints and make GET requests to Prometheus, Alertmanager, and Grafana UIs through the Rancher proxy. <br/> <br/> This role doesn't grant access to monitoring endpoints. As a result, users with this role won't be able to view cluster monitoring graphs and dashboards in the Rancher UI; however, they are able to access the monitoring Grafana, Prometheus, and Alertmanager UIs if provided those links. |
| monitoring-ui-view | This ClusterRole allows users with write access to the project to view metrics graphs for the specified cluster in the Rancher UI. This is done by granting Read-only access to external Monitoring UIs. Users with this role have permission to list the Prometheus, Alertmanager, and Grafana endpoints and make GET requests to Prometheus, Alertmanager, and Grafana UIs through the Rancher proxy. <br/> <br/> This role doesn't grant access to monitoring endpoints. As a result, users with this role won't be able to view cluster monitoring graphs and dashboards in the Rancher UI; however, they are able to access the monitoring Grafana, Prometheus, and Alertmanager UIs if provided those links. |
:::note
@@ -6,9 +6,7 @@ title: Windows Cluster Support for Monitoring V2
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/integrations-in-rancher/monitoring-and-alerting/windows-support"/>
</head>
_Available as of v2.5.8_
Starting at Monitoring V2 14.5.100 (used by default in Rancher 2.5.8), Monitoring V2 can now be deployed on a Windows cluster and will scrape metrics from Windows nodes using [prometheus-community/windows_exporter](https://github.com/prometheus-community/windows_exporter) (previously named `wmi_exporter`).
Monitoring V2 can be deployed on a Windows cluster to scrape metrics from Windows nodes using [prometheus-community/windows_exporter](https://github.com/prometheus-community/windows_exporter) (previously named `wmi_exporter`).
## Cluster Requirements
@@ -18,7 +18,7 @@ When you set up your high-availability Rancher installation, consider the follow
Don't run other workloads or microservices in the Kubernetes cluster that Rancher is installed on.
### Make sure nodes are configured correctly for Kubernetes
It's important to follow K8s and etcd best practices when deploying your nodes, including disabling swap, double checking you have full network connectivity between all machines in the cluster, using unique hostnames, MAC addresses, and product_uuids for every node, checking that all correct ports are opened, and deploying with ssd backed etcd. More details can be found in the [kubernetes docs](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) and [etcd's performance op guide](https://etcd.io/docs/v3.4/op-guide/performance/).
It's important to follow K8s and etcd best practices when deploying your nodes, including disabling swap, double-checking that you have full network connectivity between all machines in the cluster, using unique hostnames, MAC addresses, and product_uuids for every node, checking that all required ports are open, and deploying with SSD-backed etcd. More details can be found in the [Kubernetes docs](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) and [etcd's performance op guide](https://etcd.io/docs/v3.5/op-guide/performance/).
### When using RKE: Back up the Statefile
RKE keeps a record of the cluster state in a file called `cluster.rkestate`. This file is important for recovering the cluster and for continued maintenance of the cluster through RKE. Because this file contains certificate material, we strongly recommend encrypting it before backing it up. Back up the state file after each run of `rke up`.
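One way to do this, for example with GPG symmetric encryption (the backup destination is illustrative):

```
# Encrypt the statefile, then copy the encrypted copy off the host (example only).
gpg --symmetric --cipher-algo AES256 --output cluster.rkestate.gpg cluster.rkestate
scp cluster.rkestate.gpg backup-host:/backups/rke/cluster.rkestate.$(date +%F).gpg
```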
@@ -49,7 +49,6 @@ This is typical in Rancher, as many operations create new `RoleBinding` objects
You can reduce the number of `RoleBindings` in the upstream cluster in the following ways:
* Limit the use of the [Restricted Admin](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md#restricted-admin) role. Apply other roles wherever possible.
* If you use [external authentication](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/authentication-config.md), use groups to assign roles.
* Only add users to clusters and projects when necessary.
* Remove clusters and projects when they are no longer needed.
* Only use custom roles if necessary.
@@ -59,6 +58,12 @@ You can reduce the number of `RoleBindings` in the upstream cluster in the follo
* Kubernetes permissions are always "additive" (allow-list) rather than "subtractive" (deny-list). Try to minimize configurations that gives access to all but one aspect of a cluster, project, or namespace, as that will result in the creation of a high number of `RoleBinding` objects.
* Experiment to see if creating new projects or clusters manifests in fewer `RoleBindings` for your specific use case.
### Using External Authentication
If you have fifty or more users, you should configure an [external authentication provider](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/authentication-config.md). This is necessary for better performance.
After you configure external authentication, make sure to assign permissions to groups instead of to individual users. This helps reduce the `RoleBinding` object count.
### RoleBinding Count Estimation
Predicting how many `RoleBinding` objects a given configuration will create is complicated. However, the following considerations can offer a rough estimate:
@@ -83,7 +88,7 @@ An [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-archi
### Reducing Event Handler Executions
The bulk of Rancher's logic occurs on event handlers. These event handlers run on an object whenever the object is updated, and when Rancher is started. Additionally, they run every 15 hours when Rancher syncs caches. In scaled setups these scheduled runs come with huge performance costs because every handler is being run on every applicable object. However, the scheduled handler execution can be disabled with the `CATTLE_SYNC_ONLY_CHANGED_OBJECTS` environment variable. If resource allocation spikes are seen every 15 hours, this setting can help.
The bulk of Rancher's logic occurs on event handlers. These event handlers run on an object whenever the object is updated, and when Rancher is started. Additionally, they run every 10 hours when Rancher syncs caches. In scaled setups these scheduled runs come with huge performance costs because every handler is being run on every applicable object. However, the scheduled handler execution can be disabled with the `CATTLE_SYNC_ONLY_CHANGED_OBJECTS` environment variable. If resource allocation spikes are seen every 10 hours, this setting can help.
The value for `CATTLE_SYNC_ONLY_CHANGED_OBJECTS` can be a comma separated list of the following options. The values refer to types of handlers and controllers (the structures that contain and run handlers). Adding the controller types to the variable disables that set of controllers from running their handlers as part of cache resyncing.
@@ -91,7 +96,7 @@ The value for `CATTLE_SYNC_ONLY_CHANGED_OBJECTS` can be a comma separated list o
* `user` refers to user controllers which run for every cluster. Some of these run on the same node as management controllers, while others run in the downstream cluster. This option targets the former.
* `scaled` refers to scaled controllers which run on every Rancher node. You should avoid setting this value, as the scaled handlers are responsible for critical functions and changes may disrupt cluster stability.
In short, if you notice CPU usage peaks every 15 hours, add the `CATTLE_SYNC_ONLY_CHANGED_OBJECTS` environment variable to your Rancher deployment (in the `spec.containers.env` list) with the value `mgmt,user`
In short, if you notice CPU usage peaks every 10 hours, add the `CATTLE_SYNC_ONLY_CHANGED_OBJECTS` environment variable to your Rancher deployment (in the `spec.containers.env` list) with the value `mgmt,user`.
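A minimal sketch of the corresponding fragment of the Rancher Deployment (all unrelated fields omitted):

```yaml
spec:
  template:
    spec:
      containers:
        - name: rancher
          env:
            # Skip scheduled handler re-runs for management and user controllers.
            - name: CATTLE_SYNC_ONLY_CHANGED_OBJECTS
              value: "mgmt,user"
```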
## Optimizations Outside of Rancher
@@ -105,6 +110,14 @@ Although managed Kubernetes services make it easier to deploy and run Kubernetes
Use RKE2 for large scale use cases.
### Keep all Upstream Cluster Nodes co-located
To provide high availability, Kubernetes is designed to run nodes and control components in different zones. However, if nodes and control plane components are located in different zones, network traffic may be slower.
Traffic between Rancher components and the Kubernetes API is especially sensitive to network latency, as is etcd traffic between nodes.
To improve performance, run all upstream cluster nodes in the same location. In particular, make sure that latency between etcd nodes and Rancher is as low as possible.
### Keeping Kubernetes Versions Up to Date
You should keep the local Kubernetes cluster up to date. This will ensure that your cluster has all available performance enhancements and bug fixes.
@@ -113,8 +126,18 @@ You should keep the local Kubernetes cluster up to date. This will ensure that y
Etcd is the backend database for Kubernetes and for Rancher. It plays a very important role in Rancher performance.
The two main bottlenecks to [etcd performance](https://etcd.io/docs/v3.4/op-guide/performance/) are disk and network speed. Etcd should run on dedicated nodes with a fast network setup and with SSDs that have high input/output operations per second (IOPS). For more information regarding etcd performance, see [Slow etcd performance (performance testing and optimization)](https://www.suse.com/support/kb/doc/?id=000020100) and [Tuning etcd for Large Installations](../../../how-to-guides/advanced-user-guides/tune-etcd-for-large-installs.md). Information on disks can also be found in the [Installation Requirements](../../../getting-started/installation-and-upgrade/installation-requirements/installation-requirements.md#disks).
The two main bottlenecks to [etcd performance](https://etcd.io/docs/v3.5/op-guide/performance/) are disk and network speed. Etcd should run on dedicated nodes with a fast network setup and with SSDs that have high input/output operations per second (IOPS). For more information regarding etcd performance, see [Slow etcd performance (performance testing and optimization)](https://www.suse.com/support/kb/doc/?id=000020100) and [Tuning etcd for Large Installations](../../../how-to-guides/advanced-user-guides/tune-etcd-for-large-installs.md). Information on disks can also be found in the [Installation Requirements](../../../getting-started/installation-and-upgrade/installation-requirements/installation-requirements.md#disks).
It's best to run etcd on exactly three nodes, as adding more nodes will reduce operation speed. This may be counter-intuitive to common scaling approaches, but it's due to etcd's [replication mechanisms](https://etcd.io/docs/v3.5/faq/#what-is-maximum-cluster-size).
Etcd performance will also be negatively affected by network latency between nodes as that will slow down network communication. Etcd nodes should be located together with Rancher nodes.
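One common way to gauge whether a disk is fast enough for etcd is a short `fio` write test that issues an fsync after every write; a sketch, assuming `fio` is installed on the etcd node (commonly cited guidance is that the 99th-percentile `fdatasync` latency should stay below roughly 10 ms):

```
# Small write workload with an fsync per write; run against the etcd data disk (example only).
mkdir -p /var/lib/etcd-disk-test
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-disk-test --size=22m --bs=2300 \
    --name=etcd-disk-check
```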
### Browser Requirements
At high scale, Rancher transfers more data from the upstream cluster to UI components running in the browser, and those components also need to perform more processing.
For best performance, ensure that the host running the browser meets these hardware requirements:
- 2020 i5 10th generation Intel (4 cores) or equivalent
- 8 GB RAM
- Total network bandwidth to the upstream cluster: 72 Mb/s (equivalent to a single 802.11n Wi-Fi 4 link stream, ~8 MB/s HTTP download throughput)
- Round-trip time (ping time) from browser to upstream cluster: 150 ms or less
@@ -9,7 +9,7 @@ description: Interact with Rancher using command line interface (CLI) tools from
The Rancher CLI (Command Line Interface) is a unified tool that you can use to interact with Rancher. With this tool, you can operate Rancher using a command line rather than the GUI.
### Download Rancher CLI
## Download Rancher CLI
The binary can be downloaded directly from the UI.
@@ -17,14 +17,14 @@ The binary can be downloaded directly from the UI.
1. At the bottom of the navigation sidebar menu, click **About**.
1. Under the **CLI Downloads section**, there are links to download the binaries for Windows, Mac, and Linux. You can also check the [releases page for our CLI](https://github.com/rancher/cli/releases) for direct downloads of the binary.
### Requirements
## Requirements
After you download the Rancher CLI, you need to make a few configurations. Rancher CLI requires:
- Your Rancher Server URL, which is used to connect to Rancher Server.
- An API Bearer Token, which is used to authenticate with Rancher. For more information about obtaining a Bearer Token, see [Creating an API Key](../user-settings/api-keys.md).
### CLI Authentication
## CLI Authentication
Before you can use Rancher CLI to control your Rancher Server, you must authenticate using an API Bearer Token. Log in using the following command (replace `<BEARER_TOKEN>` and `<SERVER_URL>` with your information):
@@ -34,7 +34,7 @@ $ ./rancher login https://<SERVER_URL> --token <BEARER_TOKEN>
If Rancher Server uses a self-signed certificate, Rancher CLI prompts you to continue with the connection.
### Project Selection
## Project Selection
Before you can perform any commands, you must select a Rancher project to perform those commands against. To select a [project](../../how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces.md) to work on, use the command `./rancher context switch`. When you enter this command, a list of available projects displays. Enter a number to choose your project.
@@ -58,7 +58,7 @@ INFO[0005] Saving config to /Users/markbishop/.ranchcli2.json
Ensure you can run `rancher kubectl get pods` successfully.
### Commands
## Commands
The following commands are available for use in Rancher CLI.
@@ -86,13 +86,12 @@ The following commands are available for use in Rancher CLI.
| `token` | Authenticates and generates new kubeconfig token. |
| `help, [h]` | Shows a list of commands or help for one command. |
### Rancher CLI Help
## Rancher CLI Help
Once logged into Rancher Server using the CLI, enter `./rancher --help` for a list of commands.
All commands accept the `--help` flag, which documents each command's usage.
### Limitations
## Limitations
The Rancher CLI **cannot** be used to install [dashboard apps or Rancher feature charts](../../how-to-guides/new-user-guides/helm-charts-in-rancher/helm-charts-in-rancher.md).
@@ -10,7 +10,7 @@ After you provision a Kubernetes cluster using Rancher, you can still edit optio
For information on editing cluster membership, go to [this page.](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/add-users-to-clusters.md)
### Cluster Configuration References
## Cluster Configuration References
The cluster configuration options depend on the type of Kubernetes cluster:
@@ -21,7 +21,7 @@ The cluster configuration options depend on the type of Kubernetes cluster:
- [GKE Cluster Configuration](rancher-server-configuration/gke-cluster-configuration/gke-cluster-configuration.md)
- [AKS Cluster Configuration](rancher-server-configuration/aks-cluster-configuration.md)
### Cluster Management Capabilities by Cluster Type
## Cluster Management Capabilities by Cluster Type
The options and settings available for an existing cluster change based on the method that you used to provision it.
@@ -8,11 +8,11 @@ title: DigitalOcean Node Template Configuration
Account access information is stored as a cloud credential. Cloud credentials are stored as Kubernetes secrets. Multiple node templates can use the same cloud credential. You can use an existing cloud credential or create a new one.
### Droplet Options
## Droplet Options
The **Droplet Options** provision your cluster's geographical region and specifications.
### Docker Daemon
## Docker Daemon
If you use Docker, the [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-daemon) configuration options include:
@@ -6,13 +6,6 @@ title: AKS Cluster Configuration Reference
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/aks-cluster-configuration"/>
</head>
## Changes in Rancher v2.6
- Support for adding more than one node pool
- Support for private clusters
- Enabled autoscaling node pools
- The AKS permissions are now configured in cloud credentials
## Role-based Access Control
When provisioning an AKS cluster in the Rancher UI, RBAC cannot be disabled. If role-based access control is disabled for the cluster in AKS, the cluster cannot be registered or imported into Rancher.
@@ -6,12 +6,6 @@ title: GKE Cluster Configuration Reference
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration"/>
</head>
## Changes in Rancher v2.6
- Support for additional configuration options:
- Project network isolation
- Network tags
## Cluster Location
| Value | Description |
@@ -8,11 +8,11 @@ title: Private Clusters
In GKE, [private clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept) are clusters whose nodes are isolated from inbound and outbound traffic by assigning them internal IP addresses only. Private clusters in GKE have the option of exposing the control plane endpoint as a publicly accessible address or as a private address. This is different from other Kubernetes providers, which may refer to clusters with private control plane endpoints as "private clusters" but still allow traffic to and from nodes. You may want to create a cluster with private nodes, with or without a public control plane endpoint, depending on your organization's networking and security requirements. A GKE cluster provisioned from Rancher can use isolated nodes by selecting "Private Cluster" in the Cluster Options (under "Show advanced options"). The control plane endpoint can optionally be made private by selecting "Enable Private Endpoint".
### Private Nodes
## Private Nodes
Because the nodes in a private cluster only have internal IP addresses, they will not be able to install the cluster agent and Rancher will not be able to fully manage the cluster. This can be overcome in a few ways.
#### Cloud NAT
### Cloud NAT
:::caution
@@ -20,9 +20,9 @@ Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing).
:::
If restricting outgoing internet access is not a concern for your organization, use Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service to allow nodes in the private network to access the internet, enabling them to download the required images from Dockerhub and contact the Rancher management server. This is the simplest solution.
If restricting outgoing internet access is not a concern for your organization, use Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service to allow nodes in the private network to access the internet, enabling them to download the required images from Docker Hub and contact the Rancher management server. This is the simplest solution.
#### Private registry
### Private Registry
:::caution
@@ -32,11 +32,11 @@ This scenario is not officially supported, but is described for cases in which u
If restricting both incoming and outgoing traffic to nodes is a requirement, follow the air-gapped installation instructions to set up a private container image [registry](../../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) on the VPC where the cluster is going to be, allowing the cluster nodes to access and download the images they need to run the cluster agent. If the control plane endpoint is also private, Rancher will need [direct access](#direct-access) to it.
### Private Control Plane Endpoint
## Private Control Plane Endpoint
If the cluster has a public endpoint exposed, Rancher will be able to reach the cluster, and no additional steps need to be taken. However, if the cluster has no public endpoint, then considerations must be made to ensure Rancher can access the cluster.
#### Cloud NAT
### Cloud NAT
:::caution
@@ -47,7 +47,7 @@ Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing).
As above, if restricting outgoing internet access to the nodes is not a concern, then Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service can be used to allow the nodes to access the internet. While the cluster is provisioning, Rancher will provide a registration command to run on the cluster. Download the [kubeconfig](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl) for the new cluster and run the provided kubectl command on the cluster. You can gain access to the cluster to run this command by creating a temporary node or using an existing node in the VPC, or by logging on to a cluster node or creating an SSH tunnel through one.
#### Direct access
### Direct Access
If the Rancher server is run on the same VPC as the cluster's control plane, it will have direct access to the control plane's private endpoint. The cluster nodes will need to have access to a [private registry](#private-registry) to download images as described above.
@@ -149,13 +149,13 @@ Project network isolation is available if you are using any RKE2 network plugin
##### CoreDNS
By default, [CoreDNS](https://coredns.io/) is installed as the default DNS provider. If CoreDNS is not installed, an alternate DNS provider must be installed yourself. Refer to the [RKE2 documentation](https://docs.rke2.io/networking#coredns) for additional CoreDNS configurations.
[CoreDNS](https://coredns.io/) is installed as the default DNS provider. If CoreDNS is not installed, you must install an alternate DNS provider yourself. Refer to the [RKE2 documentation](https://docs.rke2.io/networking/networking_services#coredns) for additional CoreDNS configurations.
##### NGINX Ingress
If you want to publish your applications in a high-availability configuration, and you're hosting your nodes with a cloud-provider that doesn't have a native load-balancing feature, enable this option to use NGINX Ingress within the cluster. Refer to the [RKE2 documentation](https://docs.rke2.io/networking#nginx-ingress-controller) for additional configuration options.
If you want to publish your applications in a high-availability configuration, and you're hosting your nodes with a cloud-provider that doesn't have a native load-balancing feature, enable this option to use NGINX Ingress within the cluster. Refer to the [RKE2 documentation](https://docs.rke2.io/networking/networking_services#nginx-ingress-controller) for additional configuration options.
##### Metrics Server
@@ -6,15 +6,15 @@ title: Monitoring Configuration Examples
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/monitoring-v2-configuration/examples"/>
</head>
### ServiceMonitor
## ServiceMonitor
See the official prometheus-operator GitHub repo for an example [ServiceMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/master/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) YAML.
### PodMonitor
## PodMonitor
See the [Prometheus Operator documentation](https://prometheus-operator.dev/docs/user-guides/getting-started/#using-podmonitors) for an example PodMonitor and an example Prometheus resource that refers to a PodMonitor.
### PrometheusRule
## PrometheusRule
A PrometheusRule contains the alerting and recording rules that you would usually place in a [Prometheus rule file](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/).
@@ -22,6 +22,6 @@ For a more fine-grained approach, the `ruleSelector` field on a Prometheus resou
See the [Prometheus Operator documentation](https://prometheus-operator.dev/docs/user-guides/alerting/) for an example PrometheusRule.
### Alertmanager Config
## Alertmanager Config
See the Rancher docs page on Receivers for an example [Alertmanager config](./receivers.md#example-alertmanager-configs).
@@ -18,7 +18,7 @@ This section assumes familiarity with how monitoring components work together. F
:::
### ServiceMonitors
## ServiceMonitors
This pseudo-CRD maps to a section of the Prometheus custom resource configuration. It declaratively specifies how groups of Kubernetes services should be monitored.
@@ -28,7 +28,7 @@ Any Services in your cluster that match the labels located within the ServiceMon
For more information about how ServiceMonitors work, refer to the [Prometheus Operator documentation.](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/running-exporters.md)
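For orientation, a minimal ServiceMonitor looks roughly like this; the namespace, labels, and port name are illustrative, not values Rancher requires:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  namespace: default          # illustrative
spec:
  selector:
    matchLabels:
      app: example-app        # Services with this label are scraped
  endpoints:
    - port: metrics           # name of the Service port exposing /metrics
      interval: 30s
```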
### PodMonitors
## PodMonitors
This pseudo-CRD maps to a section of the Prometheus custom resource configuration. It declaratively specifies how groups of pods should be monitored.
@@ -26,18 +26,18 @@ Prometheus Federator is designed to be deployed alongside an existing Prometheus
2. On seeing each ProjectHelmChart CR, the operator will automatically deploy a Project Prometheus stack on the Project Owner's behalf in the **Project Release Namespace (`cattle-project-<id>-monitoring`)** based on a HelmChart CR and a HelmRelease CR automatically created by the ProjectHelmChart controller in the **Operator / System Namespace**.
3. RBAC will automatically be assigned in the Project Release Namespace to allow users to view the Prometheus, Alertmanager, and Grafana UIs of the Project Monitoring Stack deployed; this will be based on RBAC defined on the Project Registration Namespace against the [default Kubernetes user-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). For more information, see the section on [configuring RBAC](rbac.md).
### What is a Project?
## What is a Project?
In Prometheus Federator, a Project is a group of namespaces that can be identified by a `metav1.LabelSelector`. By default, the label used to identify projects is `field.cattle.io/projectId`, the label used to identify namespaces that are contained within a given Rancher Project.
### Configuring the Helm release created by a ProjectHelmChart
## Configuring the Helm release created by a ProjectHelmChart
The `spec.values` of this ProjectHelmChart's resources will correspond to the `values.yaml` override to be supplied to the underlying Helm chart deployed by the operator on the user's behalf; to see the underlying chart's `values.yaml` spec, either:
- View the chart's definition located at [`rancher/prometheus-federator` under `charts/rancher-project-monitoring`](https://github.com/rancher/prometheus-federator/blob/main/charts/rancher-project-monitoring) (where the chart version will be tied to the version of this operator).
- Look for the ConfigMap named `monitoring.cattle.io.v1alpha1` that is automatically created in each Project Registration Namespace, which will contain both the `values.yaml` and `questions.yaml` that was used to configure the chart (which was embedded directly into the `prometheus-federator` binary).
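For orientation, a ProjectHelmChart for Prometheus Federator has roughly the following shape; the namespace and the values shown are illustrative, and the authoritative spec is in the ConfigMap mentioned above:

```yaml
apiVersion: helm.cattle.io/v1alpha1
kind: ProjectHelmChart
metadata:
  name: project-monitoring
  namespace: cattle-project-p-example   # Project Registration Namespace (illustrative ID)
spec:
  helmApiVersion: monitoring.cattle.io/v1alpha1
  values:
    # values.yaml overrides for the underlying rancher-project-monitoring chart
    alertmanager:
      enabled: false
```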
### Namespaces
## Namespaces
As a Project Operator based on [rancher/helm-project-operator](https://github.com/rancher/helm-project-operator), Prometheus Federator has three different classifications of namespaces that the operator looks out for:
@@ -65,7 +65,7 @@ As a Project Operator based on [rancher/helm-project-operator](https://github.co
:::
### Helm Resources (HelmChart, HelmRelease)
## Helm Resources (HelmChart, HelmRelease)
On deploying a ProjectHelmChart, the Prometheus Federator will automatically create and manage two child custom resources that manage the underlying Helm resources in turn:
@@ -87,7 +87,7 @@ HelmRelease CRs emit Kubernetes Events that detect when an underlying Helm relea
Both of these resources are created for all Helm charts in the Operator / System namespaces to avoid escalation of privileges to underprivileged users.
### Advanced Helm Project Operator Configuration
## Advanced Helm Project Operator Configuration
For more information on advanced configurations, refer to [this page](https://github.com/rancher/prometheus-federator/blob/main/charts/prometheus-federator/0.0.1/README.md#advanced-helm-project-operator-configuration).
@@ -103,6 +103,6 @@ For more information on advanced configurations, refer to [this page](https://gi
|`helmProjectOperator.hardenedNamespaces.configuration`| The configuration to be supplied to the default ServiceAccount or auto-generated NetworkPolicy on managing a namespace. |
-->
### Prometheus Federator on the Local Cluster
## Prometheus Federator on the Local Cluster
Prometheus Federator is a resource-intensive application. Installing it on the local cluster is possible, but **not recommended**.
@@ -17,7 +17,7 @@ Logging is helpful because it allows you to:
- Look for trends in your environment
- Save your logs to a safe location outside of your cluster
- Stay informed of events like a container crashing, a pod eviction, or a node dying
- More easily debugg and troubleshoot problems
- More easily debug and troubleshoot problems
Rancher can integrate with Elasticsearch, Splunk, Kafka, syslog, and Fluentd.
@@ -21,7 +21,7 @@ The following descriptions correspond to the numbers in the diagram above:
3. [Node Agents](#3-node-agents)
4. [Authorized Cluster Endpoint](#4-authorized-cluster-endpoint)
### 1. The Authentication Proxy
## 1. The Authentication Proxy
In this diagram, a user named Bob wants to see all pods running on a downstream user cluster called User Cluster 1. From within Rancher, he can run a `kubectl` command to see
the pods. Bob is authenticated through Rancher's authentication proxy.
@@ -32,7 +32,7 @@ Rancher communicates with Kubernetes clusters using a [service account](https://
By default, Rancher generates a [kubeconfig file](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md) that contains credentials for proxying through the Rancher server to connect to the Kubernetes API server on a downstream user cluster. The kubeconfig file (`kube_config_rancher-cluster.yml`) contains full access to the cluster.
### 2. Cluster Controllers and Cluster Agents
## 2. Cluster Controllers and Cluster Agents
Each downstream user cluster has a cluster agent, which opens a tunnel to the corresponding cluster controller within the Rancher server.
@@ -52,13 +52,13 @@ The cluster agent, also called `cattle-cluster-agent`, is a component that runs
- Applies the roles and bindings defined in each cluster's global policies
- Communicates between the cluster and Rancher server (through a tunnel to the cluster controller) about events, stats, node info, and health
### 3. Node Agents
## 3. Node Agents
If the cluster agent (also called `cattle-cluster-agent`) is not available, one of the node agents creates a tunnel to the cluster controller to communicate with Rancher.
The `cattle-node-agent` is deployed using a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) resource to make sure it runs on every node in a Rancher-launched Kubernetes cluster. It is used to interact with the nodes when performing cluster operations. Examples of cluster operations include upgrading the Kubernetes version and creating or restoring etcd snapshots.
### 4. Authorized Cluster Endpoint
## 4. Authorized Cluster Endpoint
An authorized cluster endpoint (ACE) allows users to connect to the Kubernetes API server of a downstream cluster without having to route their requests through the Rancher authentication proxy.
@@ -81,6 +81,12 @@ You will need to use a context defined in this kubeconfig file to access the clu
## Impersonation
:::caution Known Issue
Service account impersonation (`--as`), which lower-privileged user accounts can use to drop privileges, is not implemented. It is being tracked as a [feature request](https://github.com/rancher/rancher/issues/41988).
:::
Users technically exist only on the upstream cluster. Rancher creates [RoleBindings and ClusterRoleBindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) that refer to Rancher users, even though there is [no actual User resource](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#users-in-kubernetes) on the downstream cluster.
When users interact with a downstream cluster through the authentication proxy, there needs to be some entity downstream to serve as the actor for those requests. Rancher creates service accounts to be that entity. Each service account is only granted one permission, which is to **impersonate** the user they belong to. If there was only one service account that could impersonate any user, then it would be possible for a malicious user to corrupt that account and escalate their privileges by impersonating another user. This issue was the basis for a [CVE](https://github.com/rancher/rancher/security/advisories/GHSA-pvxj-25m6-7vqr).
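As a sketch of that mechanism, and not the exact objects Rancher creates, the role bound to each per-user service account is limited to impersonating a single user:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: impersonate-u-abc123        # illustrative name
rules:
  - apiGroups: [""]
    resources: ["users"]
    verbs: ["impersonate"]
    resourceNames: ["u-abc123"]     # the only Rancher user this service account may act as
```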
@@ -25,7 +25,7 @@ Logging is helpful because it allows you to:
- Look for trends in your environment
- Save your logs to a safe location outside of your cluster
- Stay informed of events like a container crashing, a pod eviction, or a node dying
- More easily debugg and troubleshoot problems
- More easily debug and troubleshoot problems
Rancher can integrate with Elasticsearch, Splunk, Kafka, syslog, and Fluentd.
@@ -102,7 +102,7 @@ The `rancher-restricted` template is provided by Rancher to enforce the highly-r
</TabItem>
<TabItem value="v1.24 and Older">
K3s v1.24 and older support [Pod Security Policy (PSP)](https://v1-24.docs.kubernetes.io/docs/concepts/security/pod-security-policy/) for controlling pod security.
K3s v1.24 and older support [Pod Security Policy (PSP)](https://github.com/kubernetes/website/blob/release-1.24/content/en/docs/concepts/security/pod-security-policy.md) for controlling pod security.
You can enable PSPs by passing the following flags in the cluster configuration in Rancher:
@@ -6,7 +6,7 @@ title: Kubernetes Security Best Practices
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/rancher-security/kubernetes-security-best-practices"/>
</head>
### Restricting cloud metadata API access
## Restricting Cloud Metadata API Access
Cloud providers such as AWS, Azure, DigitalOcean or GCP often expose metadata services locally to instances. By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS, DigitalOcean Kubernetes or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets.
@@ -6,7 +6,7 @@ title: Rancher Security Best Practices
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/rancher-security/rancher-security-best-practices"/>
</head>
### Restrict Public Access to /version and /rancherversion Path
## Restrict Public Access to /version and /rancherversion Path
The upstream (local) Rancher instance provides information about the Rancher version it is running and the Go version that was used to build it. That information is accessible via the `/version` path, which is used for tasks such as automating version bumps, or confirming that a deployment was successful. The upstream instance also provides Rancher version information accessible via the `/rancherversion` path.
@@ -14,8 +14,17 @@ Adversaries can misuse this information to identify the running Rancher version
See [OWASP Web Application Security Testing - Enumerate Infrastructure and Application Admin Interfaces](https://owasp.org/www-project-web-security-testing-guide/stable/4-Web_Application_Security_Testing/02-Configuration_and_Deployment_Management_Testing/05-Enumerate_Infrastructure_and_Application_Admin_Interfaces.html) for more information on protecting your server.
### Session Management
## Session Management
Some environments may require additional security controls for session management. For example, you may want to limit users' concurrent active sessions or restrict which geolocations those sessions can be initiated from. Such features are not supported by Rancher out of the box.
If you require such features, combine Layer 7 firewalls with [external authentication providers](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/authentication-config.md#external-vs-local-authentication).
If you require such features, combine Layer 7 firewalls with [external authentication providers](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/authentication-config.md#external-vs-local-authentication).
## Use External Load Balancers to Protect Vulnerable Ports
You should protect the following ports behind an [external load balancer](../../how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/layer-4-and-layer-7-load-balancing.md#layer-4-load-balancer) that has SSL offload enabled:
- **K3s:** Port 6443, used by the Kubernetes API.
- **RKE and RKE2:** Port 6443, used by the Kubernetes API, and port 9345, used for node registration.
The TLS certificates served on these ports list nodes' public IP addresses in their Subject Alternative Names (SANs). An attacker could use that information to gain unauthorized access or monitor activity on the cluster. Protecting these ports helps prevent nodes' public IP addresses from being disclosed to potential attackers.
@@ -28,11 +28,11 @@ Security is at the heart of all Rancher features. From integrating with all the
On this page, we provide security related documentation along with resources to help you secure your Rancher installation and your downstream Kubernetes clusters.
### NeuVector Integration with Rancher
## NeuVector Integration with Rancher
NeuVector is an open-source, container-focused security application that is now integrated into Rancher. NeuVector provides production security, DevOps vulnerability protection, and a container firewall, among other features. Please see the [Rancher docs](../../integrations-in-rancher/neuvector.md) and the [NeuVector docs](https://open-docs.neuvector.com/) for more information.
### Running a CIS Security Scan on a Kubernetes Cluster
## Running a CIS Security Scan on a Kubernetes Cluster
Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the [CIS](https://www.cisecurity.org/cis-benchmarks/) (Center for Internet Security) Kubernetes Benchmark.
@@ -48,13 +48,13 @@ When Rancher runs a CIS security scan on a cluster, it generates a report showin
For details, refer to the section on [security scans](../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md).
### SELinux RPM
## SELinux RPM
[Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. Historically used mainly by government agencies, SELinux is now an industry standard and is enabled by default on CentOS 7 and 8.
We provide two RPMs (Red Hat packages) that enable Rancher products to function properly on SELinux-enforcing hosts: `rancher-selinux` and `rke2-selinux`. For details, see [this page](selinux-rpm/selinux-rpm.md).
### Rancher Hardening Guide
## Rancher Hardening Guide
The Rancher Hardening Guide is based on controls and best practices found in the <a href="https://www.cisecurity.org/benchmark/kubernetes/" target="_blank">CIS Kubernetes Benchmark</a> from the Center for Internet Security.
@@ -64,7 +64,7 @@ The hardening guides provide prescriptive guidance for hardening a production in
Each version of the hardening guide is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher.
### The CIS Benchmark and Self-Assessment
## The CIS Benchmark and Self-Assessment
The benchmark self-assessment is a companion to the Rancher security hardening guide. While the hardening guide shows you how to harden the cluster, the benchmark guide is meant to help you evaluate the level of security of the hardened cluster.
@@ -72,7 +72,7 @@ Because Rancher and RKE install Kubernetes services as Docker containers, many o
Each version of Rancher's self-assessment guide corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark.
### Third-party Penetration Test Reports
## Third-party Penetration Test Reports
Rancher periodically hires third parties to perform security audits and penetration tests of the Rancher software stack. The environments under test follow the Rancher provided hardening guides at the time of the testing. Previous penetration test reports are available below.
@@ -83,14 +83,14 @@ Results:
Please note that new reports are no longer shared or made publicly available.
### Rancher Security Advisories and CVEs
## Rancher Security Advisories and CVEs
Rancher is committed to informing the community of security issues in our products. For the list of CVEs (Common Vulnerabilities and Exposures) for issues we have resolved, refer to [this page.](security-advisories-and-cves.md)
### Kubernetes Security Best Practices
## Kubernetes Security Best Practices
For recommendations on securing your Kubernetes cluster, refer to the [Kubernetes Security Best Practices](kubernetes-security-best-practices.md) guide.
### Rancher Security Best Practices
## Rancher Security Best Practices
For recommendations on securing your Rancher Manager deployments, refer to the [Rancher Security Best Practices](rancher-security-best-practices.md) guide.
@@ -10,6 +10,11 @@ Rancher is committed to informing the community of security issues in our produc
| ID | Description | Date | Resolution |
|----|-------------|------|------------|
| [CVE-2024-22030](https://github.com/rancher/rancher/security/advisories/GHSA-h4h5-9833-v2p4) | A high severity vulnerability was discovered in Rancher's agents that under very specific circumstances allows a malicious actor to take over existing Rancher nodes. The attacker needs to have control of an expired domain or execute a DNS spoofing/hijacking attack against the domain in order to exploit this vulnerability. The targeted domain is the one used as the Rancher URL (the `server-url` of the Rancher cluster). | 19 Sep 2024 | Rancher [v2.9.2](https://github.com/rancher/rancher/releases/tag/v2.9.2), [v2.8.8](https://github.com/rancher/rancher/releases/tag/v2.8.8) and [v2.7.15](https://github.com/rancher/rancher/releases/tag/v2.7.15) |
| [CVE-2024-22032](https://github.com/rancher/rancher/security/advisories/GHSA-q6c7-56cq-g2wm) | An issue was discovered in Rancher versions up to and including 2.7.13 and 2.8.4, where custom secrets encryption configurations are stored in plaintext under the clusters `AppliedSpec`. This also causes clusters to continuously reconcile, as the `AppliedSpec` would never match the desired cluster `Spec`. The stored information contains the encryption configuration for secrets within etcd, and could potentially expose sensitive data if the etcd database was exposed directly. | 17 Jun 2024 | Rancher [v2.8.5](https://github.com/rancher/rancher/releases/tag/v2.8.5) and [v2.7.14](https://github.com/rancher/rancher/releases/tag/v2.7.14) |
| [CVE-2023-32196](https://github.com/rancher/rancher/security/advisories/GHSA-64jq-m7rq-768h) | An issue was discovered in Rancher versions up to and including 2.7.13 and 2.8.4, where the webhook rule resolver ignores rules from a `ClusterRole` for an external `RoleTemplate` set with `.context=project` or `.context=""`. This allows a user to create an external `ClusterRole` with `.context=project` or `.context=""`, depending on the use of the new feature flag `external-rules` and backing `ClusterRole`. | 17 Jun 2024 | Rancher [v2.8.5](https://github.com/rancher/rancher/releases/tag/v2.8.5) and [v2.7.14](https://github.com/rancher/rancher/releases/tag/v2.7.14) |
| [CVE-2023-22650](https://github.com/rancher/rancher/security/advisories/GHSA-9ghh-mmcq-8phc) | An issue was discovered in Rancher versions up to and including 2.7.13 and 2.8.4, where Rancher did not have a user retention process for when external authentication providers are used, that could be configured to run periodically and disable and/or delete inactive users. The new user retention process added in Rancher v2.8.5 and Rancher v2.7.14 is disabled by default. If enabled, a user becomes subject to the retention process if they don't log in for a configurable period of time. It's possible to set overrides for user accounts that are primarily intended for programmatic access (e.g. CI, scripts, etc.) so that they don't become subject to the retention process for a longer period of time or at all. | 17 Jun 2024 | Rancher [v2.8.5](https://github.com/rancher/rancher/releases/tag/v2.8.5) and [v2.7.14](https://github.com/rancher/rancher/releases/tag/v2.7.14) |
| [CVE-2023-32191](https://github.com/rancher/rke/security/advisories/GHSA-6gr4-52w6-vmqx) | An issue was discovered in Rancher versions up to and including 2.7.13 and 2.8.4, in which supported RKE versions store credentials inside a ConfigMap that can be accessible by non-administrative users in Rancher. This vulnerability only affects an RKE-provisioned cluster. | 17 Jun 2024 | Rancher [v2.8.5](https://github.com/rancher/rancher/releases/tag/v2.8.5) and [v2.7.14](https://github.com/rancher/rancher/releases/tag/v2.7.14) |
| [CVE-2023-32193](https://github.com/rancher/norman/security/advisories/GHSA-r8f4-hv23-6qp6) | An issue was discovered in Rancher versions up to and including 2.6.13, 2.7.9 and 2.8.1, where multiple Cross-Site Scripting (XSS) vulnerabilities can be exploited via the Rancher UI (Norman). | 8 Feb 2024 | Rancher [v2.8.2](https://github.com/rancher/rancher/releases/tag/v2.8.2), [v2.7.10](https://github.com/rancher/rancher/releases/tag/v2.7.10) and [v2.6.14](https://github.com/rancher/rancher/releases/tag/v2.6.14) |
| [CVE-2023-32192](https://github.com/rancher/apiserver/security/advisories/GHSA-833m-37f7-jq55) | An issue was discovered in Rancher versions up to and including 2.6.13, 2.7.9 and 2.8.1, where multiple Cross-Site Scripting (XSS) vulnerabilities can be exploited via the Rancher UI (Apiserver). | 8 Feb 2024 | Rancher [v2.8.2](https://github.com/rancher/rancher/releases/tag/v2.8.2), [v2.7.10](https://github.com/rancher/rancher/releases/tag/v2.7.10) and [v2.6.14](https://github.com/rancher/rancher/releases/tag/v2.6.14) |
| [CVE-2023-22649](https://github.com/rancher/rancher/security/advisories/GHSA-xfj7-qf8w-2gcr) | An issue was discovered in Rancher versions up to and including 2.6.13, 2.7.9 and 2.8.1, in which sensitive data may be leaked into Rancher's audit logs. | 8 Feb 2024 | Rancher [v2.8.2](https://github.com/rancher/rancher/releases/tag/v2.8.2), [v2.7.10](https://github.com/rancher/rancher/releases/tag/v2.7.10) and [v2.6.14](https://github.com/rancher/rancher/releases/tag/v2.6.14) |
@@ -27,7 +32,7 @@ Rancher is committed to informing the community of security issues in our produc
| [CVE-2022-31247](https://github.com/rancher/rancher/security/advisories/GHSA-6x34-89p7-95wg) | An issue was discovered in Rancher versions up to and including 2.5.15 and 2.6.6 where a flaw with authorization logic allows privilege escalation in downstream clusters through cluster role template binding (CRTB) and project role template binding (PRTB). The vulnerability can be exploited by any user who has permissions to create/edit CRTB or PRTB (such as `cluster-owner`, `manage cluster members`, `project-owner`, and `manage project members`) to gain owner permission in another project in the same cluster or in another project on a different downstream cluster. | 18 August 2022 | [Rancher v2.6.7](https://github.com/rancher/rancher/releases/tag/v2.6.7) and [Rancher v2.5.16](https://github.com/rancher/rancher/releases/tag/v2.5.16) |
| [CVE-2021-36783](https://github.com/rancher/rancher/security/advisories/GHSA-8w87-58w6-hfv8) | It was discovered that in Rancher versions up to and including 2.5.12 and 2.6.3, there is a failure to properly sanitize credentials in cluster template answers. This failure can lead to plaintext storage and exposure of credentials, passwords, and API tokens. The exposed credentials are visible in Rancher to authenticated `Cluster Owners`, `Cluster Members`, `Project Owners`, and `Project Members` on the endpoints `/v1/management.cattle.io.clusters`, `/v3/clusters`, and `/k8s/clusters/local/apis/management.cattle.io/v3/clusters`. | 18 August 2022 | [Rancher v2.6.7](https://github.com/rancher/rancher/releases/tag/v2.6.7) and [Rancher v2.5.16](https://github.com/rancher/rancher/releases/tag/v2.5.16) |
| [CVE-2021-36782](https://github.com/rancher/rancher/security/advisories/GHSA-g7j7-h4q8-8w2f) | An issue was discovered in Rancher versions up to and including 2.5.15 and 2.6.6 where sensitive fields like passwords, API keys, and Rancher's service account token (used to provision clusters) were stored in plaintext directly on Kubernetes objects like `Clusters` (e.g., `cluster.management.cattle.io`). Anyone with read access to those objects in the Kubernetes API could retrieve the plaintext version of those sensitive data. The issue was partially found and reported by Florian Struck (from [Continum AG](https://www.continum.net/)) and [Marco Stuurman](https://github.com/fe-ax) (from [Shock Media B.V.](https://www.shockmedia.nl/)). | 18 August 2022 | [Rancher v2.6.7](https://github.com/rancher/rancher/releases/tag/v2.6.7) and [Rancher v2.5.16](https://github.com/rancher/rancher/releases/tag/v2.5.16) |
| [CVE-2022-21951](https://github.com/rancher/rancher/security/advisories/GHSA-vrph-m5jj-c46c) | This vulnerability only affects customers using [Weave](../../faq/container-network-interface-providers.md#weave) Container Network Interface (CNI) when configured through [RKE templates](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/about-rke1-templates.md). A vulnerability was discovered in Rancher versions 2.5.0 up to and including 2.5.13, and 2.6.0 up to and including 2.6.4, where a user interface (UI) issue with RKE templates does not include a value for the Weave password when Weave is chosen as the CNI. If a cluster is created based on the mentioned template, and Weave is configured as the CNI, no password will be created for [network encryption](https://www.weave.works/docs/net/latest/tasks/manage/security-untrusted-networks/) in Weave; therefore, network traffic in the cluster will be sent unencrypted. | 24 May 2022 | [Rancher v2.6.5](https://github.com/rancher/rancher/releases/tag/v2.6.5) and [Rancher v2.5.14](https://github.com/rancher/rancher/releases/tag/v2.5.14) |
| [CVE-2022-21951](https://github.com/rancher/rancher/security/advisories/GHSA-vrph-m5jj-c46c) | This vulnerability only affects customers using [Weave](../../faq/container-network-interface-providers.md#weave) Container Network Interface (CNI) when configured through [RKE templates](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-rke1-templates/about-rke1-templates.md). A vulnerability was discovered in Rancher versions 2.5.0 up to and including 2.5.13, and 2.6.0 up to and including 2.6.4, where a user interface (UI) issue with RKE templates does not include a value for the Weave password when Weave is chosen as the CNI. If a cluster is created based on the mentioned template, and Weave is configured as the CNI, no password will be created for [network encryption](https://github.com/weaveworks/weave/blob/master/site/tasks/manage/security-untrusted-networks.md) in Weave; therefore, network traffic in the cluster will be sent unencrypted. | 24 May 2022 | [Rancher v2.6.5](https://github.com/rancher/rancher/releases/tag/v2.6.5) and [Rancher v2.5.14](https://github.com/rancher/rancher/releases/tag/v2.5.14) |
| [CVE-2021-36784](https://github.com/rancher/rancher/security/advisories/GHSA-jwvr-vv7p-gpwq) | A vulnerability was discovered in Rancher versions from 2.5.0 up to and including 2.5.12 and from 2.6.0 up to and including 2.6.3 which allows users who have create or update permissions on [Global Roles](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/manage-role-based-access-control-rbac.md) to escalate their permissions, or those of another user, to admin-level permissions. Global Roles grant users Rancher-wide permissions, such as the ability to create clusters. In the identified versions of Rancher, when users are given permission to edit or create Global Roles, they are not restricted to only granting permissions which they already posses. This vulnerability affects customers who utilize non-admin users that are able to create or edit Global Roles. The most common use case for this scenario is the `restricted-admin` role. | 14 Apr 2022 | [Rancher v2.6.4](https://github.com/rancher/rancher/releases/tag/v2.6.4) and [Rancher v2.5.13](https://github.com/rancher/rancher/releases/tag/v2.5.13) |
| [CVE-2021-4200](https://github.com/rancher/rancher/security/advisories/GHSA-hx8w-ghh8-r4xf) | This vulnerability only affects customers using the `restricted-admin` role in Rancher. A vulnerability was discovered in Rancher versions from 2.5.0 up to and including 2.5.12 and from 2.6.0 up to and including 2.6.3 where the `global-data` role in `cattle-global-data` namespace grants write access to the Catalogs. Since each user with any level of catalog access was bound to the `global-data` role, this grants write access to templates (`CatalogTemplates`) and template versions (`CatalogTemplateVersions`) for any user with any level of catalog access. New users created in Rancher are by default assigned to the `user` role (standard user), which is not designed to grant write catalog access. This vulnerability effectively elevates the privilege of any user to write access for the catalog template and catalog template version resources. | 14 Apr 2022 | [Rancher v2.6.4](https://github.com/rancher/rancher/releases/tag/v2.6.4) and [Rancher v2.5.13](https://github.com/rancher/rancher/releases/tag/v2.5.13) |
| [GHSA-wm2r-rp98-8pmh](https://github.com/rancher/rancher/security/advisories/GHSA-wm2r-rp98-8pmh) | This vulnerability only affects customers using [Continuous Delivery with Fleet](../../integrations-in-rancher/fleet-gitops-at-scale/fleet-gitops-at-scale.md) for continuous delivery with authenticated Git and/or Helm repositories. An issue was discovered in `go-getter` library in versions prior to [`v1.5.11`](https://github.com/hashicorp/go-getter/releases/tag/v1.5.11) that exposes SSH private keys in base64 format due to a failure in redacting such information from error messages. The vulnerable version of this library is used in Rancher through Fleet in versions of Fleet prior to [`v0.3.9`](https://github.com/rancher/fleet/releases/tag/v0.3.9). This issue affects Rancher versions 2.5.0 up to and including 2.5.12 and from 2.6.0 up to and including 2.6.3. The issue was found and reported by Dagan Henderson from Raft Engineering. | 14 Apr 2022 | [Rancher v2.6.4](https://github.com/rancher/rancher/releases/tag/v2.6.4) and [Rancher v2.5.13](https://github.com/rancher/rancher/releases/tag/v2.5.13) |
@@ -20,6 +20,9 @@ Each Rancher version is designed to be compatible with a single version of the w
| Rancher Version | Webhook Version | Availability in Prime | Availability in Community |
|-----------------|-----------------|-----------------------|---------------------------|
| v2.7.15 | v0.3.11 | &check; | N/A |
| v2.7.14 | v0.3.11 | &check; | N/A |
| v2.7.13 | v0.3.8 | &check; | N/A |
| v2.7.12 | v0.3.7 | &check; | N/A |
| v2.7.11 | v0.3.7 | &check; | N/A |
| v2.7.10 | v0.3.6 | &check; | &check; |
@@ -6,7 +6,7 @@ title: Advanced Options for Docker Installs
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/single-node-rancher-in-docker/advanced-options"/>
</head>
### Custom CA Certificate
## Custom CA Certificate
If you want to configure Rancher to use a CA root certificate when validating services, start the Rancher container with the directory that contains the CA root certificate mounted into the container.
@@ -30,7 +30,7 @@ docker run -d --restart=unless-stopped \
rancher/rancher:latest
```
### API Audit Log
## API Audit Log
The API Audit Log records all user and system transactions made through the Rancher server.
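The full start command is elided from the hunk below; a minimal sketch, assuming the audit log should be kept under `/var/log/rancher/auditlog` on the host and using the `AUDIT_LEVEL` variable, might look like:

```
# Minimal sketch: enable API audit logging at level 1 and persist the log on the host.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e AUDIT_LEVEL=1 \
  -v /var/log/rancher/auditlog:/var/log/auditlog \
  --privileged \
  rancher/rancher:latest
```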
@@ -49,7 +49,7 @@ docker run -d --restart=unless-stopped \
rancher/rancher:latest
```
### TLS settings
## TLS settings
To set a different TLS configuration, you can use the `CATTLE_TLS_MIN_VERSION` and `CATTLE_TLS_CIPHERS` environment variables. For example, to configure TLS 1.0 as the minimum accepted TLS version:
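A minimal sketch follows (the complete command is not shown in this diff hunk), assuming the default port mappings and privileged mode used elsewhere on this page:

```
# Minimal sketch: accept TLS 1.0 and newer.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e CATTLE_TLS_MIN_VERSION="1.0" \
  --privileged \
  rancher/rancher:latest
```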
@@ -65,7 +65,7 @@ Privileged access is [required.](../../getting-started/installation-and-upgrade/
See [TLS settings](../../getting-started/installation-and-upgrade/installation-references/tls-settings.md) for more information and options.
### Air Gap
## Air Gap
If you are visiting this page to complete an air gap installation, you must prepend your private registry URL to the server image reference when running the installation command for the option that you choose. Prefix `rancher/rancher:latest` with your private registry URL in the form `<REGISTRY.DOMAIN.COM:PORT>`.
@@ -73,7 +73,7 @@ If you are visiting this page to complete an air gap installation, you must prep
<REGISTRY.DOMAIN.COM:PORT>/rancher/rancher:latest
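Expanding the fragment above into a full start command, a minimal sketch (keeping the registry placeholder and the default port mappings) might look like:

```
# Minimal sketch: pull the Rancher server image from a private registry.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  <REGISTRY.DOMAIN.COM:PORT>/rancher/rancher:latest
```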
### Persistent Data
## Persistent Data
Rancher uses etcd as a datastore. When Rancher is installed with Docker, the embedded etcd is used. The persistent data is stored at the following path in the container: `/var/lib/rancher`.
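The bind-mount example itself is elided from the hunk below; a minimal sketch, assuming `/opt/rancher` as the host directory, might look like:

```
# Minimal sketch: persist Rancher's data directory on the host.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /opt/rancher:/var/lib/rancher \
  --privileged \
  rancher/rancher:latest
```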
@@ -89,7 +89,7 @@ docker run -d --restart=unless-stopped \
Privileged access is [required.](../../getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher)
### Running `rancher/rancher` and `rancher/rancher-agent` on the Same Node
## Running `rancher/rancher` and `rancher/rancher-agent` on the Same Node
If you want a single node both to run Rancher and to be added to a cluster, you must adjust the host ports mapped for the `rancher/rancher` container, as shown in the sketch below.
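A minimal sketch, assuming ports `8080` and `8443` are free on the host (any free ports work):

```
# Minimal sketch: map Rancher's ports 80/443 to alternate host ports
# so the host's 80/443 stay free for the cluster that the node joins.
docker run -d --restart=unless-stopped \
  -p 8080:80 -p 8443:443 \
  --privileged \
  rancher/rancher:latest
```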
@@ -1,9 +0,0 @@
---
title: Security Scans
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides"/>
</head>
The documentation about CIS security scans has moved [here.](../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md)
@@ -12,7 +12,7 @@ Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG
Before running the DNS checks, check the [default DNS provider](../../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#default-dns-provider) for your cluster and make sure that [the overlay network is functioning correctly](networking.md#check-if-overlay-network-is-functioning-correctly), as a broken overlay network can also cause DNS resolution to fail, partly or completely.
### Check if DNS pods are running
## Check if DNS pods are running
```
kubectl -n kube-system get pods -l k8s-app=kube-dns
@@ -30,7 +30,7 @@ NAME READY STATUS RESTARTS AGE
kube-dns-5fd74c7488-h6f7n 3/3 Running 0 4m13s
```
### Check if the DNS service is present with the correct cluster-ip
## Check if the DNS service is present with the correct cluster-ip
```
kubectl -n kube-system get svc -l k8s-app=kube-dns
@@ -41,7 +41,7 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP 4m13s
```
### Check if domain names are resolving
## Check if domain names are resolving
Check if internal cluster names are resolving (in this example, `kubernetes.default`). The IP shown after `Server:` should be the same as the `CLUSTER-IP` of the `kube-dns` service.
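The lookup commands themselves are elided from this hunk; a minimal sketch using a throwaway Pod (the image and Pod name here are illustrative) might look like:

```
# Minimal sketch: resolve an internal cluster name from inside the cluster.
kubectl run -i --restart=Never --rm dnstest-${RANDOM} \
  --image=busybox:1.28 -- nslookup kubernetes.default
```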
@@ -132,15 +132,15 @@ command terminated with exit code 1
Cleanup the alpine DaemonSet by running `kubectl delete ds/dnstest`.
### CoreDNS specific
## CoreDNS specific
#### Check CoreDNS logging
### Check CoreDNS logging
```
kubectl -n kube-system logs -l k8s-app=kube-dns
```
#### Check configuration
### Check configuration
CoreDNS configuration is stored in the configmap `coredns` in the `kube-system` namespace.
@@ -148,7 +148,7 @@ CoreDNS configuration is stored in the configmap `coredns` in the `kube-system`
kubectl -n kube-system get configmap coredns -o go-template={{.data.Corefile}}
```
#### Check upstream nameservers in resolv.conf
### Check upstream nameservers in resolv.conf
By default, the configured nameservers on the host (in `/etc/resolv.conf`) will be used as upstream nameservers for CoreDNS. You can check this file on the host or run the following Pod with `dnsPolicy` set to `Default`, which will inherit the `/etc/resolv.conf` from the host it is running on.
@@ -156,7 +156,7 @@ By default, the configured nameservers on the host (in `/etc/resolv.conf`) will
kubectl run -i --restart=Never --rm test-${RANDOM} --image=ubuntu --overrides='{"kind":"Pod", "apiVersion":"v1", "spec": {"dnsPolicy":"Default"}}' -- sh -c 'cat /etc/resolv.conf'
```
#### Enable query logging
### Enable query logging
You can enable query logging by enabling the [log plugin](https://coredns.io/plugins/log/) in the Corefile configuration in the `coredns` configmap. You can do so by using `kubectl -n kube-system edit configmap coredns`, or by using the command below to replace the configuration in place:
@@ -166,9 +166,9 @@ kubectl get configmap -n kube-system coredns -o json | sed -e 's_loadbalance_log
All queries will now be logged and can be checked using the command in [Check CoreDNS logging](#check-coredns-logging).
### kube-dns specific
## kube-dns specific
#### Check upstream nameservers in kubedns container
### Check upstream nameservers in kubedns container
By default, the configured nameservers on the host (in `/etc/resolv.conf`) will be used as upstream nameservers for kube-dns. Sometimes the host runs a local caching DNS nameserver, which means the address in `/etc/resolv.conf` points to an address in the loopback range (`127.0.0.0/8`) that is unreachable from the container. On Ubuntu 18.04, this is done by `systemd-resolved`. We detect whether `systemd-resolved` is running and automatically use the `/etc/resolv.conf` file with the correct upstream nameservers (located at `/run/systemd/resolve/resolv.conf`).
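To see which upstream nameservers the kubedns container actually ended up with, a minimal sketch (looking up the pod names dynamically; the container name is assumed to be `kubedns`) might look like:

```
# Minimal sketch: print /etc/resolv.conf from each kubedns container.
kubectl -n kube-system get pods -l k8s-app=kube-dns --no-headers \
  -o custom-columns=NAME:.metadata.name | \
  xargs -I{} kubectl -n kube-system exec {} -c kubedns -- cat /etc/resolv.conf
```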
@@ -10,14 +10,15 @@ For Rancher versions that have `rancher-webhook` installed, certain versions cre
In Rancher v2.6.3 and up, rancher-webhook deployments will automatically renew their TLS certificate when it is within 30 or fewer days of its expiration date. If you are using v2.6.2 or below, there are two methods to work around this issue:
##### 1. Users with cluster access, run the following commands:
## 1. Users with Cluster Access, Run the Following Commands:
```
kubectl delete secret -n cattle-system cattle-webhook-tls
kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io --ignore-not-found=true rancher.cattle.io
kubectl delete pod -n cattle-system -l app=rancher-webhook
```
##### 2. Users with no cluster access via `kubectl`:
## 2. Users with No Cluster Access Via `kubectl`:
1. Delete the `cattle-webhook-tls` secret in the `cattle-system` namespace in the local cluster.

Some files were not shown because too many files have changed in this diff.