Merge branch 'v2.12.0' into update-ssp
@@ -4,8 +4,6 @@ on:
  pull_request_target:
    types:
      - closed
-    branches:
-      - main
    paths-ignore:
      - '**/README.md'

@@ -29,8 +29,7 @@ kubectl patch feature ext-kubeconfigs -p '{"spec":{"value":false}}'

## Creating a Kubeconfig

-Admins can delete any Kubeconfig, while regular users can only delete their own. When a Kubeconfig is deleted, the kubeconfig tokens are also deleted.
-E.g. using a service account `system:admin` will lead to the following error:
+Only a **valid and active** Rancher user can create a Kubeconfig. For example, trying to create a Kubeconfig using a `system:admin` service account will lead to an error:

```bash
kubectl create -o jsonpath='{.status.value}' -f -<<EOF
```

@@ -0,0 +1,114 @@
---
title: Tokens
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/api/workflows/tokens"/>
</head>

## Feature Flag

The Tokens Public API is available for Rancher v2.12.0 and later, and is enabled by default. You can disable the Tokens Public API by setting the `ext-tokens` feature flag to `false`, as shown in the example `kubectl` command below:

```sh
kubectl patch feature ext-tokens -p '{"spec":{"value":false}}'
```
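
To confirm the change, reading the flag back should work; a small sketch, assuming the same `spec.value` field that the patch above sets:

```sh
# Prints "false" once the feature flag has been disabled
kubectl get feature ext-tokens -o jsonpath='{.spec.value}'
```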

## Creating a Token

Only a **valid and active** Rancher user can create a Token. Otherwise, an error (`Error from server (Forbidden)...`) is displayed when you attempt to create a Token:

```bash
kubectl create -o jsonpath='{.status.value}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: Token
EOF
Error from server (Forbidden): error when creating "STDIN": tokens.ext.cattle.io is forbidden: user system:admin is not a Rancher user
```

A Token is always created for the user making the request. Attempting to create a Token for a different user, by specifying a different `spec.userID`, is forbidden and will fail.

- The `spec.description` field can be set to an arbitrary human-readable description of the Token's purpose. The default value is empty.
- The `spec.kind` field can be set to the kind of Token. The value `session` indicates a login Token. All other values, including the default empty string, indicate a derived Token.
- The `metadata.name` and `metadata.generateName` fields are ignored, and the name of the new Token is automatically generated with the prefix `token-`.

```bash
kubectl create -o jsonpath='{.status.value}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: Token
spec:
  description: My Token
EOF
```

- If `spec.ttl` is not specified, the Token is created with the expiration time defined in the `auth-token-max-ttl-minutes` setting. The default expiration time is 90 days. If `spec.ttl` is specified, it must be greater than 0 and less than or equal to the value of the `auth-token-max-ttl-minutes` setting expressed in milliseconds.

```bash
kubectl create -o jsonpath='{.status.value}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: Token
spec:
  ttl: 7200000 # 2 hours
EOF
```

## Listing Tokens

Listing previously generated Tokens can help clean up tokens that are no longer needed (e.g., tokens that were issued temporarily). Admins can list all Tokens, while regular users can only see their own.

```sh
kubectl get tokens.ext.cattle.io
NAME          KIND   TTL   AGE
token-chjc9          90d   18s
token-6fzgj          90d   16s
token-8nbrm          90d   14s
```

Use `-o wide` to get more details:

```sh
kubectl get tokens.ext.cattle.io -o wide
NAME          USER         KIND   TTL   AGE   DESCRIPTION
token-chjc9   user-jtghh          90d   24s   example
token-6fzgj   user-jtghh          90d   22s   box
token-8nbrm   user-jtghh          90d   20s   jinx
```

## Viewing a Token

Admins can get any Token, while regular users can only get their own.

```sh
kubectl get tokens.ext.cattle.io token-chjc9
NAME          KIND   TTL   AGE
token-chjc9          90d   18s
```

Use `-o wide` to get more details:

```sh
kubectl get tokens.ext.cattle.io token-chjc9 -o wide
NAME          USER         KIND   TTL   AGE   DESCRIPTION
token-chjc9   user-jtghh          90d   24s   example
```

## Deleting a Token

Admins can delete any Token, while regular users can only delete their own.

```sh
kubectl delete tokens.ext.cattle.io token-chjc9
token.ext.cattle.io "token-chjc9" deleted
```

## Updating a Token

Only the `spec.description`, `spec.ttl`, and `spec.enabled` fields can be updated. All other `spec` fields are immutable. Admins can extend the `spec.ttl` field, while regular users can only reduce the value.

An example `kubectl` command to edit a Token:

```sh
kubectl edit tokens.ext.cattle.io token-zp786
```
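
Non-interactive updates should also be possible with a merge patch against the same mutable fields; a sketch, reusing the Token name from the example above:

```sh
# Disable the token and shorten its TTL to one hour (3600000 ms)
kubectl patch tokens.ext.cattle.io token-zp786 --type=merge \
  -p '{"spec":{"enabled":false,"ttl":3600000}}'
```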
@@ -39,7 +39,6 @@ User Interface | https://github.com/rancher/dashboard/ | This repository is the
(Rancher) Docker Machine | https://github.com/rancher/machine | This repository is the source of the Docker Machine binary used when using Node Drivers. This is a fork of the `docker/machine` repository.
machine-package | https://github.com/rancher/machine-package | This repository is used to build the Rancher Docker Machine binary.
kontainer-engine | https://github.com/rancher/kontainer-engine | This repository is the source of kontainer-engine, the tool to provision hosted Kubernetes clusters.
-RKE repository | https://github.com/rancher/rke | This repository is the source of Rancher Kubernetes Engine, the tool to provision Kubernetes clusters on any machine.
CLI | https://github.com/rancher/cli | This repository is the source code for the Rancher CLI used in Rancher 2.x.
(Rancher) Helm repository | https://github.com/rancher/helm | This repository is the source of the packaged Helm binary. This is a fork of the `helm/helm` repository.
loglevel repository | https://github.com/rancher/loglevel | This repository is the source of the loglevel binary, used to dynamically change log levels.

@@ -109,27 +108,6 @@ Please remove any sensitive data as it will be publicly viewable.
   -l app=rancher \
   --timestamps=true
 ```
-- Docker install using `docker` on each of the nodes in the RKE cluster
-
-```
-docker logs \
-  --timestamps \
-  $(docker ps | grep -E "rancher/rancher@|rancher_rancher" | awk '{ print $1 }')
-```
-- Kubernetes Install with RKE Add-On
-
-:::note
-
-Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml` if the Rancher server is installed on a Kubernetes cluster) or are using the embedded kubectl via the UI.
-
-:::
-
-```
-kubectl -n cattle-system \
-  logs \
-  --timestamps=true \
-  -f $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name=="cattle-server") | .metadata.name')
-```
 - System logging (these might not all exist, depending on operating system)
   - `/var/log/messages`
   - `/var/log/syslog`

@@ -16,10 +16,7 @@ Rancher will publish deprecated features as part of the [release notes](https://

| Patch Version | Release Date |
|---------------|---------------|
-| [2.11.3](https://github.com/rancher/rancher/releases/tag/v2.11.3) | June 25, 2025 |
+| [2.12.0](https://github.com/rancher/rancher/releases/tag/v2.12.0) | July 31, 2025 |
-| [2.11.2](https://github.com/rancher/rancher/releases/tag/v2.11.2) | May 22, 2025 |
-| [2.11.1](https://github.com/rancher/rancher/releases/tag/v2.11.1) | Apr 24, 2025 |
-| [2.11.0](https://github.com/rancher/rancher/releases/tag/v2.11.0) | Mar 31, 2025 |

## What can I expect when a feature is marked for deprecation?

+9 -1
@@ -10,7 +10,15 @@ Once the infrastructure is ready, you can continue with setting up a Kubernetes

The steps to set up RKE, RKE2, or K3s are shown below.

-For convenience, export the IP address and port of your proxy into an environment variable and set up the HTTP_PROXY variables for your current shell on every node:
+For convenience, export the IP address and port of your proxy into an environment variable and set up the `HTTP_PROXY` variables for your current shell on every node:
+
+:::caution
+
+The `NO_PROXY` environment variable is not standardized, and the accepted format of the value can differ between applications. When configuring the `NO_PROXY` variable for Rancher, the value must adhere to the format expected by Golang.
+
+Specifically, the value should be a comma-delimited string which only contains IP addresses, CIDR notation, domain names, or special DNS labels (e.g. `*`). For a full description of the expected value format, refer to the [**upstream Golang documentation**](https://pkg.go.dev/golang.org/x/net/http/httpproxy#Config).
+
+:::

```
export proxy_host="10.0.0.5:8888"
```
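
For illustration, a Golang-compatible `NO_PROXY` value might look like the following sketch; the hosts and ranges are placeholders, not values from this page:

```sh
# Comma-delimited: only IP addresses, CIDRs, domain names, or special DNS labels
export NO_PROXY="127.0.0.1,localhost,10.0.0.0/8,.example.com"
```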
+43
@@ -53,6 +53,10 @@ You can obtain `<RANCHER_CONTAINER_TAG>` and `<RANCHER_CONTAINER_NAME>` by loggi

## Upgrade

+:::danger
+
+Rancher upgrades to version 2.12.0 and later will be blocked if any RKE1-related resources are detected, as the Rancher Kubernetes Engine (RKE/RKE1) reached end of life on **July 31, 2025**. For detailed cleanup and recovery steps, refer to [RKE1 Resource Validation and Upgrade Requirements in Rancher v2.12](#rke1-resource-validation-and-upgrade-requirements-in-rancher-v212).
+
+:::

During upgrade, you create a copy of the data from your current Rancher container and a backup in case something goes wrong. Then you deploy the new version of Rancher in a new container using your existing data.

### 1. Create a copy of the data from your Rancher server container
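
The hunk ends before the commands for this step; in outline, the data copy is based on `docker create --volumes-from`, roughly as in this sketch using the placeholders named in the hunk header:

```sh
# Create a data container from the volumes of the current Rancher container
docker create --volumes-from <RANCHER_CONTAINER_NAME> \
  --name rancher-data rancher/rancher:<RANCHER_CONTAINER_TAG>
```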
@@ -388,6 +392,45 @@ See [Restoring Cluster Networking](https://github.com/rancher/rancher-docs/tree/

Remove the previous Rancher server container. If you only stop the previous Rancher server container (and don't remove it), the container may restart after the next server reboot.

+## RKE1 Resource Validation and Upgrade Requirements in Rancher v2.12
+
+Rancher v2.12.0 and later removes support for the Rancher Kubernetes Engine (RKE/RKE1). During upgrade, Rancher validates the cluster resources and blocks the upgrade if any RKE1-related resources are detected.
+
+This validation affects the following resource types (a detection sketch follows the list):
+
+- Clusters with `rkeConfig` (`clusters.management.cattle.io`)
+- NodeTemplates (`nodetemplates.management.cattle.io`)
+- ClusterTemplates (`clustertemplates.management.cattle.io`)
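
The page does not prescribe commands for finding these resources; a minimal detection sketch, assuming `kubectl` targets the Rancher local cluster, `jq` is available, and the `rkeConfig` field path matches the list above:

```sh
# Management clusters that still carry an RKE1 config
kubectl get clusters.management.cattle.io -o json \
  | jq -r '.items[] | select(.spec.rkeConfig != null) | .metadata.name'

# Remaining RKE1 node and cluster templates
kubectl get nodetemplates.management.cattle.io --all-namespaces
kubectl get clustertemplates.management.cattle.io --all-namespaces
```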
+
+This is particularly relevant for single-node Docker installations, where Rancher is not running during the upgrade. In such cases, controllers are not available to automatically clean up deprecated resources, and the upgrade process will fail early with an error listing the blocking resources.
+
+### 1. Pre-Upgrade (Recommended)
+
+Before upgrading, while Rancher is still running:
+
+- Run the `pre-upgrade-hook` cleanup script to delete all RKE1 clusters and templates (see the sketch after this list). You can find the script in the Rancher GitHub repository: [pre-upgrade-hook.sh](https://github.com/rancher/rancher/blob/v2.12.0/chart/scripts/pre-upgrade-hook.sh).
+- This allows Rancher to clean up associated resources and finalizers.
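
Fetching and running the script might look like this sketch; the raw URL is derived from the repository path above and is an assumption:

```sh
# Download the cleanup script and run it against the local cluster
curl -sLO https://raw.githubusercontent.com/rancher/rancher/v2.12.0/chart/scripts/pre-upgrade-hook.sh
chmod +x pre-upgrade-hook.sh
./pre-upgrade-hook.sh
```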
+
+### 2. Post-Upgrade Failure Due to Residual RKE1 Resources
+
+If the upgrade to Rancher v2.12.0 or later is attempted without prior cleanup of RKE1 resources:
+
+- The upgrade will fail and display an error listing the resource names that are preventing the upgrade.
+- This occurs because Rancher includes validation to detect and block upgrades when unsupported RKE1 resources are still present.
+- To proceed, [roll back](#rolling-back) to the previous Rancher version, delete the identified resources, and then retry after [manual cleanup](#manual-cleanup-after-rollback).
+
+:::note Helm-based Rancher
+Helm-based Rancher installations are not affected by this issue, as Rancher remains available during the upgrade and can perform resource cleanup as needed.
+:::
+
+### Manual Cleanup After Rollback
+
+Perform the following steps after rolling back to a previous Rancher version:
+
+- **Manually delete** the resources listed in the upgrade error message (e.g., RKE1 clusters, NodeTemplates, ClusterTemplates).
+- If deletion is blocked due to **finalizers**, edit the resources and remove the `metadata.finalizers` field (see the sketch after this list).
+- If a **validating webhook** prevents deletion (e.g., for the `system-project`), refer to the [Bypassing the Webhook](../../../../reference-guides/rancher-webhook.md#bypassing-the-webhook) documentation.
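
Clearing finalizers can be done with `kubectl edit`, or non-interactively with a patch; a sketch with a hypothetical resource name and namespace:

```sh
# Strip finalizers so a blocked deletion can proceed (names are placeholders)
kubectl patch nodetemplates.management.cattle.io nt-example -n cattle-global-nt \
  --type=merge -p '{"metadata":{"finalizers":null}}'
```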

## Rolling Back

If your upgrade does not complete successfully, you can roll back the Rancher server and its data to the last healthy state. For more information, see [Docker Rollback](roll-back-docker-installed-rancher.md).

@@ -1,17 +0,0 @@
---
title: CIS Scan Guides
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides"/>
</head>

- [Install rancher-cis-benchmark](install-rancher-cis-benchmark.md)
- [Uninstall rancher-cis-benchmark](uninstall-rancher-cis-benchmark.md)
- [Run a Scan](run-a-scan.md)
- [Run a Scan Periodically on a Schedule](run-a-scan-periodically-on-a-schedule.md)
- [Skip Tests](skip-tests.md)
- [View Reports](view-reports.md)
- [Enable Alerting for rancher-cis-benchmark](enable-alerting-for-rancher-cis-benchmark.md)
- [Configure Alerts for Periodic Scan on a Schedule](configure-alerts-for-periodic-scan-on-a-schedule.md)
- [Create a Custom Benchmark Version to Run](create-a-custom-benchmark-version-to-run.md)
-13
@@ -1,13 +0,0 @@
---
title: Create a Custom Benchmark Version for Running a Cluster Scan
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/create-a-custom-benchmark-version-to-run"/>
</head>

There could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them.

It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application.

For details, see [this page.](../../../integrations-in-rancher/cis-scans/custom-benchmark.md)
-24
@@ -1,24 +0,0 @@
---
title: Enable Alerting for Rancher CIS Benchmark
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark"/>
</head>

Alerts can be configured to be sent out for a scan that runs on a schedule.

:::note Prerequisite:

Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md)

While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-cis-scan-alerts)

:::

While installing or upgrading the `rancher-cis-benchmark` Helm chart, set the following flag to `true` in the `values.yaml`:

```yaml
alerts:
  enabled: true
```
@@ -1,26 +0,0 @@
---
title: Run a Scan
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan"/>
</head>

When a ClusterScan custom resource is created, it launches a new CIS scan on the cluster for the chosen ClusterScanProfile.

:::note

There is currently a limitation of running only one CIS scan at a time for a cluster. If you create multiple ClusterScan custom resources, they will be run one after the other by the operator, and until one scan finishes, the rest of the ClusterScan custom resources will be in the "Pending" state.

:::

To run a scan,

1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
1. Click **CIS Benchmark > Scan**.
1. Click **Create**.
1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
1. Click **Create**.

**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears.
@@ -1,38 +0,0 @@
---
title: Skip Tests
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/skip-tests"/>
</head>

CIS scans can be run using test profiles with user-defined skips.

To skip tests, you will create a custom CIS scan profile. A profile contains the configuration for the CIS scan, which includes the benchmark versions to use and any specific tests to skip in that benchmark.

1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
1. Click **CIS Benchmark > Profile**.
1. From here, you can create a profile in multiple ways. To make a new profile, click **Create** and fill out the form in the UI. To make a new profile based on an existing profile, go to the existing profile and click **⋮ Clone**. If you are filling out the form, add the tests to skip using the test IDs, using the relevant CIS Benchmark as a reference. If you are creating the new test profile as YAML, you will add the IDs of the tests to skip in the `skipTests` directive. You will also give the profile a name:

   ```yaml
   apiVersion: cis.cattle.io/v1
   kind: ClusterScanProfile
   metadata:
     annotations:
       meta.helm.sh/release-name: clusterscan-operator
       meta.helm.sh/release-namespace: cis-operator-system
     labels:
       app.kubernetes.io/managed-by: Helm
     name: "<example-profile>"
   spec:
     benchmarkVersion: cis-1.5
     skipTests:
       - "1.1.20"
       - "1.1.21"
   ```

1. Click **Create**.

**Result:** A new CIS scan profile is created.

When you [run a scan](./run-a-scan.md) that uses this profile, the defined tests will be skipped during the scan. The skipped tests will be marked in the generated report as `Skip`.
-13
@@ -1,13 +0,0 @@
---
title: Uninstall Rancher CIS Benchmark
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/uninstall-rancher-cis-benchmark"/>
</head>

1. From the **Cluster Dashboard**, go to the left navigation bar and click **Apps > Installed Apps**.
1. Go to the `cis-operator-system` namespace and check the boxes next to `rancher-cis-benchmark-crd` and `rancher-cis-benchmark`.
1. Click **Delete** and confirm **Delete**.

**Result:** The `rancher-cis-benchmark` application is uninstalled.
@@ -1,23 +0,0 @@
---
title: View Reports
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/view-reports"/>
</head>

To view the generated CIS scan reports,

1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
1. Click **CIS Benchmark > Scan**.
1. The **Scans** page will show the generated reports. To see a detailed report, go to a scan report and click the name.

You can download the report from the Scans list or from the scan detail page.

To get the verbose version of the CIS scan results, run the following command on the cluster that was scanned. Note that the scan must be completed before this can be done.

```console
export REPORT="scan-report-name"
kubectl get clusterscanreport $REPORT -o json | jq ".spec.reportJSON | fromjson" | jq -r ".actual_value_map_data" | base64 -d | gunzip | jq .
```
+16
@@ -0,0 +1,16 @@
---
title: Compliance Scan Guides
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides"/>
</head>

- [Install rancher-compliance](install-rancher-compliance.md)
- [Uninstall rancher-compliance](uninstall-rancher-compliance.md)
- [Run a Scan](run-a-scan.md)
- [Run a Scan Periodically on a Schedule](run-a-scan-periodically-on-a-schedule.md)
- [View Reports](view-reports.md)
- [Enable Alerting for rancher-compliance](enable-alerting-for-rancher-compliance.md)
- [Configure Alerts for Periodic Scan on a Schedule](configure-alerts-for-periodic-scan-on-a-schedule.md)
- [Create a Custom Benchmark Version to Run](create-a-custom-compliance-version-to-run.md)
+8 -8
@@ -3,7 +3,7 @@ title: Configure Alerts for Periodic Scan on a Schedule
---

<head>
-  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule"/>
+  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule"/>
</head>

It is possible to run a ClusterScan on a schedule.
@@ -12,27 +12,27 @@ A scheduled scan can also specify if you should receive alerts when the scan com

Alerts are supported only for a scan that runs on a schedule.

-The CIS Benchmark application supports two types of alerts:
+The compliance application supports two types of alerts:

 - Alert on scan completion: This alert is sent out when the scan run finishes. The alert includes details such as the ClusterScan's name and the ClusterScanProfile name.
 - Alert on scan failure: This alert is sent out if there are some test failures in the scan run or if the scan is in a `Fail` state.

:::note Prerequisite

-Before enabling alerts for `rancher-cis-benchmark`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md)
+Before enabling alerts for `rancher-compliance`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md)

-While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`. An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-cis-scan-alerts)
+While configuring the routes for `rancher-compliance` alerts, you can specify the matching using the key-value pair `job: rancher-compliance-scan`. An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-compliance-scan-alerts)

:::

To configure alerts for a scan that runs on a schedule,

-1. Please enable alerts on the `rancher-cis-benchmark` application. For more information, see [this page](../../../how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark.md).
+1. Enable alerts on the `rancher-compliance` application. For more information, see [this page](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md).
1. In the upper left corner, click **☰ > Cluster Management**.
-1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
+1. On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**.
-1. Click **CIS Benchmark > Scan**.
+1. Click **Compliance > Scan**.
1. Click **Create**.
-1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
+1. Choose a cluster scan profile. The profile determines which compliance version will be used and which tests will be performed. If you choose the Default profile, then the Compliance Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
1. Choose the option **Run scan on a schedule**.
1. Enter a valid [cron schedule expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) in the field **Schedule**.
1. Check the boxes next to the Alert types under **Alerting**.
+13
@@ -0,0 +1,13 @@
---
title: Create a Custom Compliance Version for Running a Cluster Scan
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run"/>
</head>

There could be some Kubernetes cluster setups that require custom configurations of the Compliance tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream Compliance benchmarks look for them.

It is now possible to create a custom compliance version for running a cluster scan using the `rancher-compliance` application.

For details, see [this page.](../../../integrations-in-rancher/compliance-scans/custom-benchmark.md)
+24
@@ -0,0 +1,24 @@
---
title: Enable Alerting for Rancher Compliance
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance"/>
</head>

Alerts can be configured to be sent out for a scan that runs on a schedule.

:::note Prerequisite:

Before enabling alerts for `rancher-compliance`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md)

While configuring the routes for `rancher-compliance` alerts, you can specify the matching using the key-value pair `job: rancher-compliance-scan`. An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-compliance-scan-alerts)

:::

While installing or upgrading the `rancher-compliance` Helm chart, set the following flag to `true` in the `values.yaml`:

```yaml
alerts:
  enabled: true
```
+5 -5
@@ -1,15 +1,15 @@
---
-title: Install Rancher CIS Benchmark
+title: Install Rancher Compliance
---

<head>
-  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/install-rancher-cis-benchmark"/>
+  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance"/>
</head>

1. In the upper left corner, click **☰ > Cluster Management**.
-1. On the **Clusters** page, go to the cluster where you want to install CIS Benchmark and click **Explore**.
+1. On the **Clusters** page, go to the cluster where you want to install Compliance and click **Explore**.
1. In the left navigation bar, click **Apps > Charts**.
-1. Click **CIS Benchmark**.
+1. Click **Compliance**.
1. Click **Install**.

-**Result:** The CIS scan application is deployed on the Kubernetes cluster.
+**Result:** The compliance scan application is deployed on the Kubernetes cluster.
+4 -4
@@ -3,15 +3,15 @@ title: Run a Scan Periodically on a Schedule
---

<head>
-  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/cis-scan-guides/run-a-scan-periodically-on-a-schedule"/>
+  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan-periodically-on-a-schedule"/>
</head>

To run a ClusterScan on a schedule,

1. In the upper left corner, click **☰ > Cluster Management**.
-1. On the **Clusters** page, go to the cluster where you want to run a CIS scan and click **Explore**.
+1. On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**.
-1. Click **CIS Benchmark > Scan**.
+1. Click **Compliance > Scan**.
-1. Choose a cluster scan profile. The profile determines which CIS Benchmark version will be used and which tests will be performed. If you choose the Default profile, then the CIS Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
+1. Choose a cluster scan profile. The profile determines which Compliance version will be used and which tests will be performed. If you choose the Default profile, then the Compliance Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
1. Choose the option **Run scan on a schedule**.
1. Enter a valid <a href="https://en.wikipedia.org/wiki/Cron#CRON_expression" target="_blank">cron schedule expression</a> in the field **Schedule**.
1. Choose a **Retention** count, which indicates the number of reports maintained for this recurring scan. By default, this count is 3. When this retention limit is reached, older reports are purged.
@@ -0,0 +1,26 @@
---
title: Run a Scan
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan"/>
</head>

When a ClusterScan custom resource is created, it launches a new compliance scan on the cluster for the chosen ClusterScanProfile.

:::note

There is currently a limitation of running only one compliance scan at a time for a cluster. If you create multiple ClusterScan custom resources, they will be run one after the other by the operator, and until one scan finishes, the rest of the ClusterScan custom resources will be in the "Pending" state.

:::

To run a scan,

1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to run a compliance scan and click **Explore**.
1. Click **Compliance > Scan**.
1. Click **Create**.
1. Choose a cluster scan profile. The profile determines which Compliance version will be used and which tests will be performed. If you choose the Default profile, then the Compliance Operator will choose a profile applicable to the type of Kubernetes cluster it is installed on.
1. Click **Create**.

**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears.
+13
@@ -0,0 +1,13 @@
---
title: Uninstall Rancher Compliance
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/uninstall-rancher-compliance"/>
</head>

1. From the **Cluster Dashboard**, go to the left navigation bar and click **Apps > Installed Apps**.
1. Go to the `compliance-operator-system` namespace and check the boxes next to `rancher-compliance-crd` and `rancher-compliance`.
1. Click **Delete** and confirm **Delete**.

**Result:** The `rancher-compliance` application is uninstalled.
@@ -0,0 +1,23 @@
---
title: View Reports
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports"/>
</head>

To view the generated Compliance scan reports,

1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**.
1. Click **Compliance > Scan**.
1. The **Scans** page will show the generated reports. To see a detailed report, go to a scan report and click the name.

You can download the report from the Scans list or from the scan detail page.

To get the verbose version of the compliance scan results, run the following command on the cluster that was scanned. Note that the scan must be completed before this can be done.

```console
export REPORT="scan-report-name"
kubectl get clusterscanreports.compliance.cattle.io $REPORT -o json | jq ".spec.reportJSON | fromjson" | jq -r ".actual_value_map_data" | base64 -d | gunzip | jq .
```
+1
@@ -49,3 +49,4 @@ Rancher supports several major cloud providers, but by default, these node drivers
There are several other node drivers that are disabled by default, but are packaged in Rancher:

* [Harvester](../../../../integrations-in-rancher/harvester/overview.md#harvester-node-driver), available as of Rancher v2.6.1
+* [Google GCE](../../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-google-compute-engine-cluster.md), available as of Rancher v2.12.0

+8
@@ -57,6 +57,14 @@ GKE Autopilot clusters aren't supported. See [Compare GKE Autopilot and Standard
9. If you are using self-signed certificates, you will receive the message `certificate signed by unknown authority`. To work around this validation, copy the command starting with `curl` displayed in Rancher to your clipboard. Then run the command on a node where kubeconfig is configured to point to the cluster you want to import.
10. When you finish running the command(s) on your node, click **Done**.

+:::important
+
+The `NO_PROXY` environment variable is not standardized, and the accepted format of the value can differ between applications. When configuring the `NO_PROXY` variable in Rancher, the value must adhere to the format expected by Golang.
+
+Specifically, the value should be a comma-delimited string which only contains IP addresses, CIDR notation, domain names, or special DNS labels (e.g. `*`). For a full description of the expected value format, refer to the [**upstream Golang documentation**](https://pkg.go.dev/golang.org/x/net/http/httpproxy#Config).
+
+:::

**Result:**

- Your cluster is registered and assigned a state of **Pending**. Rancher is deploying resources to manage your cluster.

+107
@@ -0,0 +1,107 @@
---
title: Creating a Google Compute Engine Cluster
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-google-gce-cluster"/>
</head>

In this section, you'll learn how to use Rancher to provision an [RKE2](https://docs.rke2.io/) Kubernetes cluster on the Google Cloud Platform (GCP) using Google Compute Engine (GCE).

First, you will enable the GCE node driver in the Rancher UI. Then, you will follow the steps to create a GCP service account with the necessary permissions and generate a JSON key file. This key file will be used to create a cloud credential in Rancher.

Then, you will create a GCE cluster in Rancher, and when configuring the cluster, you will define machine pools for it. Each machine pool will have a Kubernetes role of etcd, controlplane, or worker. Rancher will install RKE2 onto the new nodes, and it will set up each node with the Kubernetes role defined by the machine pool.

1. [Enable the GCE node driver](#1-enable-the-gce-node-driver)
1. [Create your cloud credential](#2-create-a-cloud-credential)
1. [Create a GCE cluster with your cloud credential](#3-create-a-cluster-using-the-cloud-credential)
1. [GCE Best Practices](#gce-best-practices)

### Prerequisites

1. A valid Google Cloud Platform account and project.
1. A GCP Service Account JSON key file. The service account associated with this key must have the following IAM roles:
   1. **Compute Admin**
   1. **Service Account User**
   1. **Viewer**
1. A VPC Network to provision VMs within.

Refer to the [GCP documentation](https://cloud.google.com/iam/docs/service-account-overview) on creating and managing service account keys for more details.
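
If you prefer the CLI to the console, generating the JSON key with `gcloud` looks roughly like this; the service account and file name are placeholders:

```sh
# Create a JSON key for an existing service account
gcloud iam service-accounts keys create rancher-key.json \
  --iam-account=rancher-nodes@my-project.iam.gserviceaccount.com
```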

### 1. Enable the GCE node driver

The GCE node driver is not enabled by default in Rancher. You must enable it before you can provision GCE clusters or work with GCE-specific CRDs.

1. Click **☰ > Cluster Management**.
1. On the left-hand side, click **Drivers**.
1. Open the **Node Drivers** tab.
1. Find the **Google GCE** driver and select **⋮ > Activate**.

### 2. Create a cloud credential

1. Click **☰ > Cluster Management**.
1. Click **Cloud Credentials**.
1. Click **Create**.
1. Click **Google**.
1. Enter your GCP Service Account JSON key file.
1. Click **Create**.

**Result:** You have created the cloud credentials that will be used to provision nodes in your cluster. You can reuse these credentials in other clusters. Depending on the permissions granted to the service account, this credential may also be used for GKE clusters.

### 3. Create a cluster using the cloud credential

1. Click **☰ > Cluster Management**.
1. On the **Clusters** page, click **Create**.
1. Click **Google GCE**.
1. Select a **Cloud Credential** and provide the GCP project to create the VM in.
1. Enter a **Cluster Name**.
1. Create a machine pool for each Kubernetes role. Refer to the [best practices](use-new-nodes-in-an-infra-provider.md#node-roles) for recommendations on role assignments and counts.
1. For each machine pool, define the machine configuration. Refer to the [Google GCE machine configuration reference](../../../../reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/google-gce.md) for information on configuration options.
1. Use the **Cluster Configuration** to choose the version of Kubernetes that will be installed, which network provider will be used, and whether you want to enable project network isolation. For help configuring the cluster, refer to the [RKE2 cluster configuration reference.](../../../../reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md)
1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user.
1. Click **Create**.

**Result:**

Your cluster is created and assigned a state of **Provisioning**. Rancher is standing up your cluster.

You can access your cluster after its state is updated to **Active**.

**Active** clusters are assigned two Projects:

- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces

### GCE Best Practices

#### External Firewall Rules, Open Ports, and ACE

If the cluster being provisioned will utilize the [Authorized Cluster Endpoint (ACE) feature](../../../new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster), controlplane nodes must expose port `6443`. This port is not exposed in the default machine pool configuration, to prevent it from being exposed across all cluster nodes and to reduce the number of firewall rules created by Rancher.

In order for ACE to work as expected, you must specify this port in the Rancher UI when configuring the controlplane machine pool by enabling the `Expose external ports` checkbox, under the `Show Advanced` section of the machine pool configuration UI. Alternatively, you may manually create a custom firewall rule in GCP and provide the related network tag in the controlplane machine-pool configuration.
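
A manually created rule of that kind might look like the following `gcloud` sketch; the network and tag names are placeholders:

```sh
# Allow ACE traffic on 6443 to nodes carrying a hypothetical controlplane tag
gcloud compute firewall-rules create allow-ace-6443 \
  --network=my-vpc-network \
  --allow=tcp:6443 \
  --target-tags=rke2-controlplane
```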

#### Internal Firewall Rules

Rancher will automatically create a firewall rule and network tag to facilitate communication between cluster nodes internally within the specified VPC network. This rule will contain the minimum number of ports required to create an RKE2/K3s cluster.

If you need to extend the number of ports exposed internally between cluster nodes, a new firewall rule should be manually created, and the associated network tag assigned to the relevant machine pools. If desired, the automatic creation of the internal firewall rule can be disabled for each given machine pool when creating or updating the cluster.

#### Cross Network Deployments

While it is possible to deploy different machine pools into different VPC networks, the internal firewall rule created by Rancher does not support this configuration by default. To create machine pools in different networks, additional firewall rules that facilitate communication between nodes in different networks must be manually created.

## Optional Next Steps

After creating your cluster, you can access it through the Rancher UI. As a best practice, we recommend setting up these alternate ways of accessing your cluster:

- **Access your cluster with the kubectl CLI:** Follow [these steps](../../../new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#accessing-clusters-with-kubectl-from-your-workstation) to access clusters with kubectl on your workstation. In this case, you will be authenticated through the Rancher server's authentication proxy, then Rancher will connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI.
- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps](../../../new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can't connect to Rancher, you can still access the cluster.
@@ -1,52 +0,0 @@
---
title: Roles-based Access Control
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/integrations-in-rancher/cis-scans/rbac-for-cis-scans"/>
</head>

This section describes the permissions required to use the rancher-cis-benchmark App.

The rancher-cis-benchmark is a cluster-admin only feature by default.

However, the `rancher-cis-benchmark` chart installs these two default `ClusterRoles`:

- cis-admin
- cis-view

In Rancher, only cluster owners and global administrators have `cis-admin` access by default.

Note: The `cis-edit` role added in Rancher v2.5 was removed in Rancher v2.5.2 because it is essentially the same as `cis-admin`. If you created any ClusterRoleBindings for `cis-edit`, update them to use the `cis-admin` ClusterRole instead.

## Cluster-Admin Access

Rancher CIS Scans is a cluster-admin only feature by default.
This means only the Rancher global admins, and the cluster's cluster-owner can:

- Install/Uninstall the rancher-cis-benchmark App
- See the navigation links for CIS Benchmark CRDs - ClusterScanBenchmarks, ClusterScanProfiles, ClusterScans
- List the default ClusterScanBenchmarks and ClusterScanProfiles
- Create/Edit/Delete new ClusterScanProfiles
- Create/Edit/Delete a new ClusterScan to run the CIS scan on the cluster
- View and Download the ClusterScanReport created after the ClusterScan is complete

## Summary of Default Permissions for Kubernetes Default Roles

The rancher-cis-benchmark chart creates these `ClusterRoles` and adds the CIS Benchmark CRD access to the following default K8s `ClusterRoles`:

| ClusterRole created by chart | Default K8s ClusterRole | Permissions given with Role |
| ---------------------------- | ----------------------- | --------------------------- |
| `cis-admin` | `admin` | Ability to CRUD clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CRs |
| `cis-view` | `view` | Ability to List(R) clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CRs |

By default, only the cluster-owner role has the ability to manage and use the `rancher-cis-benchmark` feature.

The other Rancher roles (cluster-member, project-owner, project-member) do not have any default permissions to manage and use rancher-cis-benchmark resources.

But if a cluster-owner wants to delegate access to other users, they can do so by manually creating ClusterRoleBindings between these users and the above CIS ClusterRoles; a sketch follows below.
There is no automatic role aggregation supported for the `rancher-cis-benchmark` ClusterRoles.
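
Such a delegation is plain Kubernetes RBAC; a sketch binding a hypothetical user to the read-only role:

```yaml
# Grant a user read-only access to the CIS scan resources (names are placeholders)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cis-view-example
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cis-view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: u-example
```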
@@ -1,57 +0,0 @@
----
-title: Skipped and Not Applicable Tests
----
-
-<head>
-<link rel="canonical" href="https://ranchermanager.docs.rancher.com/integrations-in-rancher/cis-scans/skipped-and-not-applicable-tests"/>
-</head>
-
-This section lists the tests that are skipped in the permissive test profile for RKE.
-
-> All the tests that are skipped and not applicable on this page will be counted as Not Applicable in the v2.5 generated report. The skipped test count will only mention the user-defined skipped tests. This allows user-skipped tests to be distinguished from the tests that are skipped by default in the RKE permissive test profile.
-
-## CIS Benchmark v1.5
-
-### CIS Benchmark v1.5 Skipped Tests
-
-| Number | Description | Reason for Skipping |
-| ---------- | ------------- | --------- |
-| 1.1.12 | Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) | A system service account is required for etcd data directory ownership. Refer to Rancher's hardening guide for more details on how to configure this ownership. |
-| 1.2.6 | Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
-| 1.2.16 | Ensure that the admission control plugin PodSecurityPolicy is set (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
-| 1.2.33 | Ensure that the --encryption-provider-config argument is set as appropriate (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. |
-| 1.2.34 | Ensure that encryption providers are appropriately configured (Manual) | Enabling encryption changes how data can be recovered as data is encrypted. |
-| 4.2.6 | Ensure that the --protect-kernel-defaults argument is set to true (Automated) | System level configurations are required before provisioning the cluster in order for this argument to be set to true. |
-| 4.2.10 | Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers. |
-| 5.1.5 | Ensure that default service accounts are not actively used. (Automated) | Kubernetes provides default service accounts to be used. |
-| 5.2.2 | Minimize the admission of containers wishing to share the host process ID namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
-| 5.2.3 | Minimize the admission of containers wishing to share the host IPC namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
-| 5.2.4 | Minimize the admission of containers wishing to share the host network namespace (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
-| 5.2.5 | Minimize the admission of containers with allowPrivilegeEscalation (Automated) | Enabling Pod Security Policy can cause applications to unexpectedly fail. |
-| 5.3.2 | Ensure that all Namespaces have Network Policies defined (Automated) | Enabling Network Policies can prevent certain applications from communicating with each other. |
-| 5.6.4 | The default namespace should not be used (Automated) | Kubernetes provides a default namespace. |
-
-### CIS Benchmark v1.5 Not Applicable Tests
-
-| Number | Description | Reason for being not applicable |
-| ---------- | ------------- | --------- |
-| 1.1.1 | Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. |
-| 1.1.2 | Ensure that the API server pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. |
-| 1.1.3 | Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
-| 1.1.4 | Ensure that the controller manager pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
-| 1.1.5 | Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
-| 1.1.6 | Ensure that the scheduler pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
-| 1.1.7 | Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
-| 1.1.8 | Ensure that the etcd pod specification file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. |
-| 1.1.13 | Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. |
-| 1.1.14 | Ensure that the admin.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE do not store the Kubernetes default kubeconfig credentials file on the nodes. |
-| 1.1.15 | Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
-| 1.1.16 | Ensure that the scheduler.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for scheduler. All configuration is passed in as arguments at container run time. |
-| 1.1.17 | Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
-| 1.1.18 | Ensure that the controller-manager.conf file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for controller-manager. All configuration is passed in as arguments at container run time. |
-| 1.3.6 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handle certificate rotation directly through RKE. |
-| 4.1.1 | Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
-| 4.1.2 | Ensure that the kubelet service file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. |
-| 4.1.9 | Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
-| 4.1.10 | Ensure that the kubelet configuration file ownership is set to root:root (Automated) | Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet. All configuration is passed in as arguments at container run time. |
-| 4.2.12 | Ensure that the RotateKubeletServerCertificate argument is set to true (Automated) | Clusters provisioned by RKE handle certificate rotation directly through RKE. |
+1 -4
@@ -19,10 +19,7 @@ In order to deploy and run the adapter successfully, you need to ensure its vers
 
 | Rancher Version | Adapter Version |
 |-----------------|------------------|
-| v2.11.3 | v106.0.0+up6.0.0 |
-| v2.11.2 | v106.0.0+up6.0.0 |
-| v2.11.1 | v106.0.0+up6.0.0 |
-| v2.11.0 | v106.0.0+up6.0.0 |
+| v2.12.0 | v107.0.0+up7.0.0 |
 
 ### 1. Gain Access to the Local Cluster
 
+7 -9
@@ -1,14 +1,14 @@
 ---
-title: CIS Scans
+title: Compliance Scans
 ---
 
 <head>
-<link rel="canonical" href="https://ranchermanager.docs.rancher.com/integrations-in-rancher/cis-scans"/>
+<link rel="canonical" href="https://ranchermanager.docs.rancher.com/integrations-in-rancher/compliance-scans"/>
 </head>
 
-Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. The CIS scans can run on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE.
+Rancher can run a security scan to check whether a cluster is deployed according to security best practices as defined in Kubernetes security benchmarks, such as the ones provided by STIG, BSI, or CIS. The Compliance scans can run on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE.
 
-The `rancher-cis-benchmark` app leverages <a href="https://github.com/aquasecurity/kube-bench" target="_blank">kube-bench,</a> an open-source tool from Aqua Security, to check clusters for CIS Kubernetes Benchmark compliance. Also, to generate a cluster-wide report, the application utilizes <a href="https://github.com/vmware-tanzu/sonobuoy" target="_blank">Sonobuoy</a> for report aggregation.
+The `rancher-compliance` app leverages <a href="https://github.com/aquasecurity/kube-bench" target="_blank">kube-bench</a>, an open-source tool from Aqua Security, to check the compliance of clusters against Kubernetes benchmarks. Also, to generate a cluster-wide report, the application utilizes <a href="https://github.com/vmware-tanzu/sonobuoy" target="_blank">Sonobuoy</a> for report aggregation.
 
 ## About the CIS Benchmark
@@ -94,7 +94,7 @@ In order to pass the "Hardened" profile, you will need to follow the steps on th
 
 The default profile and the supported CIS benchmark version depend on the type of cluster that will be scanned:
 
-The `rancher-cis-benchmark` supports the CIS 1.6 Benchmark version.
+The `rancher-compliance` app supports the CIS 1.6 Benchmark version.
 
 - For RKE Kubernetes clusters, the RKE Permissive 1.6 profile is the default.
 - EKS and GKE have their own CIS Benchmarks published by `kube-bench`. The corresponding test profiles are used by default for those clusters.
@@ -103,15 +103,13 @@ The `rancher-cis-benchmark` supports the CIS 1.6 Benchmark version.
 
 ## About Skipped and Not Applicable Tests
 
-For a list of skipped and not applicable tests, refer to [this page](../../how-to-guides/advanced-user-guides/cis-scan-guides/skip-tests.md).
-
 For now, only user-defined skipped tests are marked as skipped in the generated report.
 
 Any skipped tests that are defined as being skipped by one of the default profiles are marked as not applicable.
 
 ## Roles-based Access Control
 
-For information about permissions, refer to [this page](rbac-for-cis-scans.md).
+For information about permissions, refer to [this page](rbac-for-compliance-scans.md).
 
 ## Configuration
 
@@ -119,4 +117,4 @@ For more information about configuring the custom resources for the scans, profi
 
 ## How-to Guides
 
-Please refer to the [CIS Scan Guides](../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md) to learn how to run CIS scans.
+Please refer to the [Compliance Scan Guides](../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) to learn how to run Compliance scans.
+16 -16
@@ -3,27 +3,27 @@ title: Configuration
 ---
 
 <head>
-<link rel="canonical" href="https://ranchermanager.docs.rancher.com/integrations-in-rancher/cis-scans/configuration-reference"/>
+<link rel="canonical" href="https://ranchermanager.docs.rancher.com/integrations-in-rancher/compliance-scans/configuration-reference"/>
 </head>
 
-This configuration reference is intended to help you manage the custom resources created by the `rancher-cis-benchmark` application. These resources are used for performing CIS scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization.
+This configuration reference is intended to help you manage the custom resources created by the `rancher-compliance` application. These resources are used for performing compliance scans on a cluster, skipping tests, setting the test profile that will be used during a scan, and other customization.
 
-To configure the custom resources, go to the **Cluster Dashboard** To configure the CIS scans,
+To configure the custom resources, go to the **Cluster Dashboard**. To configure the compliance scans,
 
 1. In the upper left corner, click **☰ > Cluster Management**.
-1. On the **Clusters** page, go to the cluster where you want to configure CIS scans and click **Explore**.
-1. In the left navigation bar, click **CIS Benchmark**.
+1. On the **Clusters** page, go to the cluster where you want to configure compliance scans and click **Explore**.
+1. In the left navigation bar, click **Compliance**.
 
 ## Scans
 
-A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed.
+A scan is created to trigger a compliance scan on the cluster based on the defined profile. A report is created after the scan is completed.
 
 When configuring a scan, you need to define the name of the scan profile that will be used with the `scanProfileName` directive.
 
 An example ClusterScan custom resource is below:
 
 ```yaml
-apiVersion: cis.cattle.io/v1
+apiVersion: compliance.cattle.io/v1
 kind: ClusterScan
 metadata:
   name: rke-cis
@@ -33,11 +33,11 @@ spec:
 
 ## Profiles
 
-A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark.
+A profile contains the configuration for the compliance scan, which includes the benchmark version to use and any specific tests to skip in that benchmark.
 
 :::caution
 
-By default, a few ClusterScanProfiles are installed as part of the `rancher-cis-benchmark` chart. If a user edits these default benchmarks or profiles, the next chart update will reset them back. So it is advisable for users to not edit the default ClusterScanProfiles.
+By default, a few ClusterScanProfiles are installed as part of the `rancher-compliance` chart. If a user edits these default benchmarks or profiles, the next chart update will reset them. It is therefore advisable not to edit the default ClusterScanProfiles.
 
 :::
 
@@ -50,12 +50,12 @@ When you create a new profile, you will also need to give it a name.
 An example `ClusterScanProfile` is below:
 
 ```yaml
-apiVersion: cis.cattle.io/v1
+apiVersion: compliance.cattle.io/v1
 kind: ClusterScanProfile
 metadata:
   annotations:
     meta.helm.sh/release-name: clusterscan-operator
-    meta.helm.sh/release-namespace: cis-operator-system
+    meta.helm.sh/release-namespace: compliance-operator-system
   labels:
     app.kubernetes.io/managed-by: Helm
   name: "<example-profile>"
@@ -70,9 +70,9 @@ spec:
 
 A benchmark version is the name of the benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark.
 
-A `ClusterScanBenchmark` defines the CIS `BenchmarkVersion` name and test configurations. The `BenchmarkVersion` name is a parameter provided to the `kube-bench` tool.
+A `ClusterScanBenchmark` defines the Compliance `BenchmarkVersion` name and test configurations. The `BenchmarkVersion` name is a parameter provided to the `kube-bench` tool.
 
-By default, a few `BenchmarkVersion` names and test configurations are packaged as part of the CIS scan application. When this feature is enabled, these default BenchmarkVersions will be automatically installed and available for users to create a ClusterScanProfile.
+By default, a few `BenchmarkVersion` names and test configurations are packaged as part of the Compliance scan application. When this feature is enabled, these default BenchmarkVersions will be automatically installed and available for users to create a ClusterScanProfile.
 
 :::caution
 
@@ -89,12 +89,12 @@ A ClusterScanBenchmark consists of the fields:
 An example `ClusterScanBenchmark` is below:
 
 ```yaml
-apiVersion: cis.cattle.io/v1
+apiVersion: compliance.cattle.io/v1
 kind: ClusterScanBenchmark
 metadata:
   annotations:
     meta.helm.sh/release-name: clusterscan-operator
-    meta.helm.sh/release-namespace: cis-operator-system
+    meta.helm.sh/release-namespace: compliance-operator-system
   creationTimestamp: "2020-08-28T18:18:07Z"
   generation: 1
   labels:
@@ -106,4 +106,4 @@ metadata:
 spec:
   clusterProvider: ""
   minKubernetesVersion: 1.15.0
 ```
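+Once a ClusterScan manifest like the ones above has been prepared, it can be applied and tracked with `kubectl`; a minimal sketch, in which the file name is a placeholder:
+
+```bash
+# Apply the scan definition, then watch the scan and its generated report.
+kubectl apply -f clusterscan.yaml
+kubectl get clusterscans
+kubectl get clusterscanreports
+```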
+11 -10
@@ -3,19 +3,20 @@ title: Creating a Custom Benchmark Version for Running a Cluster Scan
 ---
 
 <head>
-<link rel="canonical" href="https://ranchermanager.docs.rancher.com/integrations-in-rancher/cis-scans/custom-benchmark"/>
+<link rel="canonical" href="https://ranchermanager.docs.rancher.com/integrations-in-rancher/compliance-scans/custom-benchmark"/>
 </head>
 
-Each Benchmark Version defines a set of test configuration files that define the CIS tests to be run by the <a href="https://github.com/aquasecurity/kube-bench" target="_blank">kube-bench</a> tool.
-The `rancher-cis-benchmark` application installs a few default Benchmark Versions which are listed under CIS Benchmark application menu.
+Each Benchmark Version defines a set of test configuration files that define the Compliance tests to be run by the <a href="https://github.com/aquasecurity/kube-bench" target="_blank">kube-bench</a> tool.
+The `rancher-compliance` application installs a few default Benchmark Versions which are listed under the Compliance application menu.
 
-But there could be some Kubernetes cluster setups that require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them.
-
-It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application.
+But in the following cases, a custom configuration or remediation may be required:
 
-When a cluster scan is run, you need to select a Profile which points to a specific Benchmark Version.
+- Non-standard file locations: When Kubernetes binaries, configuration, or certificate paths deviate from upstream benchmark defaults.
+  Example: Unlike traditional Kubernetes, K3s bundles control plane components into a single binary. Therefore, the presence and configuration of the `--anonymous-auth` flag should be verified in the K3s logs (`journalctl`), not via `kube-apiserver` process checks (`ps`).
 
-Follow all the steps below to add a custom Benchmark Version and run a scan using it.
+- Alternative risk mitigations: If a setup doesn't meet a check but has an equally effective compensating control with justification, or is simply not affected by the check's requirement because of its design.
  Example: By default, K3s embeds the API server within the k3s process. There is no API server pod specification file, so verifying that file's permissions is not required.
 
 ## 1. Prepare the Custom Benchmark Version ConfigMap
 
@@ -46,7 +47,7 @@ To prepare a custom benchmark version ConfigMap, suppose we want to add a custom
 
 1. In the upper left corner, click **☰ > Cluster Management**.
 1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**.
-1. In the left navigation bar, click **CIS Benchmark > Benchmark Version**.
+1. In the left navigation bar, click **Compliance > Benchmark Version**.
 1. Click **Create**.
 1. Enter the **Name** and a description for your custom benchmark version.
 1. Choose the cluster provider that your benchmark version applies to.
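+For reference, the ConfigMap behind a custom benchmark version might look like the following minimal sketch. The name, namespace, and embedded kube-bench configuration are illustrative assumptions, not a tested profile:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: k3s-custom-benchmark              # placeholder name
+  namespace: compliance-operator-system   # assumed operator namespace
+data:
+  # Each key holds one kube-bench test configuration file.
+  config.yaml: |
+    ## kube-bench global overrides for the custom benchmark go here
+```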
@@ -60,7 +61,7 @@ To run a scan using your custom benchmark version, you need to add a new Profile
 
 1. In the upper left corner, click **☰ > Cluster Management**.
 1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**.
-1. In the left navigation bar, click **CIS Benchmark > Profile**.
+1. In the left navigation bar, click **Compliance > Profile**.
 1. Click **Create**.
 1. Provide a **Name** and description. In this example, we name it `foo-profile`.
 1. Choose the Benchmark Version from the dropdown.
@@ -74,7 +75,7 @@ To run a scan,
 
 1. In the upper left corner, click **☰ > Cluster Management**.
 1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**.
-1. In the left navigation bar, click **CIS Benchmark > Scan**.
+1. In the left navigation bar, click **Compliance > Scan**.
 1. Click **Create**.
 1. Choose the new cluster scan profile.
 1. Click **Create**.
@@ -0,0 +1,48 @@
+---
+title: Roles-based Access Control
+---
+
+<head>
+<link rel="canonical" href="https://ranchermanager.docs.rancher.com/integrations-in-rancher/compliance-scans/rbac-for-compliance-scans"/>
+</head>
+
+This section describes the permissions required to use the rancher-compliance App.
+
+The rancher-compliance App is a cluster-admin-only feature by default.
+
+However, the `rancher-compliance` chart installs these two default `ClusterRoles`:
+
+- compliance-admin
+- compliance-view
+
+In Rancher, only cluster owners and global administrators have `compliance-admin` access by default.
+
+## Cluster-Admin Access
+
+Rancher Compliance Scans is a cluster-admin-only feature by default.
+This means only the Rancher global admins and the cluster's cluster-owner can:
+
+- Install/Uninstall the rancher-compliance App
+- See the navigation links for Compliance CRDs: ClusterScanBenchmarks, ClusterScanProfiles, ClusterScans
+- List the default ClusterScanBenchmarks and ClusterScanProfiles
+- Create/Edit/Delete new ClusterScanProfiles
+- Create/Edit/Delete a new ClusterScan to run the Compliance scan on the cluster
+- View and Download the ClusterScanReport created after the ClusterScan is complete
+
+## Summary of Default Permissions for Kubernetes Default Roles
+
+The rancher-compliance chart creates these `ClusterRoles` and adds the Compliance CRD access to the following default K8s `ClusterRoles`:
+
+| ClusterRole created by chart | Default K8s ClusterRole | Permissions given with Role |
+| ------------------------------| ---------------------------| ---------------------------|
+| `compliance-admin` | `admin` | Ability to CRUD clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CRs |
+| `compliance-view` | `view` | Ability to List(R) clusterscanbenchmarks, clusterscanprofiles, clusterscans, clusterscanreports CRs |
+
+By default, only the cluster-owner role has the ability to manage and use the `rancher-compliance` feature.
+
+The other Rancher roles (cluster-member, project-owner, project-member) do not have any default permissions to manage and use rancher-compliance resources.
+
+If a cluster-owner wants to delegate access to other users, they can do so by manually creating ClusterRoleBindings between these users and the above Compliance ClusterRoles.
+There is no automatic role aggregation supported for the `rancher-compliance` ClusterRoles.
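+For instance, a binding that grants a user read-only access via the `compliance-view` role might look like this minimal sketch; the binding name and user name are placeholders:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: compliance-view-example          # placeholder binding name
+subjects:
+  - kind: User
+    name: u-example                      # placeholder Rancher user ID
+    apiGroup: rbac.authorization.k8s.io
+roleRef:
+  kind: ClusterRole
+  name: compliance-view                  # ClusterRole installed by the chart
+  apiGroup: rbac.authorization.k8s.io
+```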
@@ -22,6 +22,14 @@ For private nodes or private clusters, the environment variables need to be set
 
 When adding Fleet agent environment variables for the proxy, replace <PROXY_IP> with your private proxy IP.
 
+:::caution
+
+The `NO_PROXY` environment variable is not standardized, and the accepted format of the value can differ between applications. When configuring the `NO_PROXY` variable in Rancher, the value must adhere to the format expected by Golang.
+
+Specifically, the value should be a comma-delimited string which only contains IP addresses, CIDR notation, domain names, or special DNS labels (e.g. `*`). For a full description of the expected value format, refer to the [**upstream Golang documentation**](https://pkg.go.dev/golang.org/x/net/http/httpproxy#Config).
+
+:::
+
 | Variable Name | Value |
 |------------------|--------|
 | `HTTP_PROXY` | http://<PROXY_IP>:8888 |
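+For example, a `NO_PROXY` environment variable entry in the Golang-compatible format could look like the following sketch; the entries are placeholders for your network:
+
+```yaml
+# Only IP addresses, CIDR blocks, domain names, or "*" are allowed in the value.
+- name: NO_PROXY
+  value: "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local"
+```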
+1 -1
@@ -16,4 +16,4 @@ While a managed cluster is disconnected from Rancher, management operations will
 
 - **Cleaning Up Disconnected Clusters**: Regularly remove clusters that will no longer reconnect to Rancher (e.g., clusters that have been decommissioned or destroyed). Keeping such clusters in the Rancher management system consumes unnecessary resources, which could impact Rancher's performance over time.
 
-- **Certificate Rotation Considerations**: When designing processes that involve regularly shutting down clusters, whether connected to Rancher or not, take into account certificate rotation policies. For example, RKE/RKE2/K3s clusters may rotate certificates on startup if they exceeded their lifetime.
+- **Certificate Rotation Considerations**: When designing processes that involve regularly shutting down clusters, whether connected to Rancher or not, take into account certificate rotation policies. For example, RKE2/K3s clusters may rotate certificates on startup if they have exceeded their lifetime.
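+On RKE2 server nodes, certificates can also be rotated manually rather than waiting for a rotation on startup; a rough sketch of the documented flow, assuming a default systemd install:
+
+```bash
+# Stop the server, rotate the cluster certificates, then start it again.
+systemctl stop rke2-server
+rke2 certificate rotate
+systemctl start rke2-server
+```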
+1 -3
@@ -30,9 +30,7 @@ Once you have created these _ClusterOutput_ objects, create a _ClusterFlow_ to c
 
 ### Kubernetes Components
 
-_ClusterFlows_ have the ability to collect logs from all containers on all hosts in the Kubernetes cluster. This works well in cases where those containers are part of a Kubernetes pod; however, RKE containers exist outside of the scope of Kubernetes.
+_ClusterFlows_ have the ability to collect logs from all containers on all hosts in the Kubernetes cluster. This works well in cases where those containers are part of a Kubernetes pod.
 
-Currently the logs from RKE containers are collected, but are not able to easily be filtered. This is because those logs do not contain information as to the source container (e.g. `etcd` or `kube-apiserver`).
-
 A future release of Rancher will include the source container name, which will enable filtering of these component logs. Once that change is made, you will be able to customize a _ClusterFlow_ to retrieve **only** the Kubernetes component logs and direct them to an appropriate output.
 
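+As a reference, a ClusterFlow that selects all container logs and routes them to a ClusterOutput might look like this minimal sketch; the names are placeholders and the namespace is assumed:
+
+```yaml
+apiVersion: logging.banzaicloud.io/v1beta1
+kind: ClusterFlow
+metadata:
+  name: all-containers-example        # placeholder name
+  namespace: cattle-logging-system    # assumed logging control namespace
+spec:
+  match:
+    - select: {}                      # select logs from all pods
+  globalOutputRefs:
+    - example-clusteroutput           # placeholder ClusterOutput name
+```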
+1 -1
@@ -98,7 +98,7 @@ Monitoring the availability and performance of all your internal workloads is vi
 
 ## Security Monitoring
 
-In addition to monitoring workloads to detect performance, availability or scalability problems, the cluster and the workloads running into it should also be monitored for potential security problems. A good starting point is to frequently run and alert on [CIS Scans](../../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md) which check if the cluster is configured according to security best practices.
+In addition to monitoring workloads to detect performance, availability or scalability problems, the cluster and the workloads running in it should also be monitored for potential security problems. A good starting point is to frequently run and alert on [Compliance Scans](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) which check if the cluster is configured according to security best practices.
 
 For the workloads, you can have a look at Kubernetes and Container security solutions like [NeuVector](https://www.suse.com/products/neuvector/), [Falco](https://falco.org/), [Aqua Kubernetes Security](https://www.aquasec.com/solutions/kubernetes-container-security/), [SysDig](https://sysdig.com/).
 
@@ -54,9 +54,6 @@ Consider the following recommendations based on your needs:
 ### Make sure nodes are configured correctly for Kubernetes
 It's important to follow K8s and etcd best practices when deploying your nodes, including disabling swap, double-checking that you have full network connectivity between all machines in the cluster, using unique hostnames, MAC addresses, and product_uuids for every node, checking that all correct ports are opened, and deploying with SSD-backed etcd. More details can be found in the [kubernetes docs](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin) and [etcd's performance op guide](https://etcd.io/docs/v3.5/op-guide/performance/).
 
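+A few of these checks can be scripted; for example, under the assumption of a systemd-based Linux node:
+
+```bash
+# Minimal per-node sanity checks; a sketch, not an exhaustive audit.
+sudo swapoff -a                      # disable swap, as required by the kubelet
+cat /sys/class/dmi/id/product_uuid   # verify the product_uuid is unique per node
+hostname && ip link show             # verify unique hostnames and MAC addresses
+```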
-### When using RKE: Back up the Statefile
-
-RKE keeps record of the cluster state in a file called `cluster.rkestate`. This file is important for the recovery of a cluster and/or the continued maintenance of the cluster through RKE. Because this file contains certificate material, we strongly recommend encrypting this file before backing up. After each run of `rke up` you should backup the state file.
-
 ### Run All Nodes in the Cluster in the Same Datacenter
 For best performance, run all three of your nodes in the same geographic datacenter. If you are running nodes in the cloud, such as AWS, run each node in a separate Availability Zone. For example, launch node 1 in us-west-2a, node 2 in us-west-2b, and node 3 in us-west-2c.
 
+1 -1
@@ -66,7 +66,7 @@ You should remove any remaining legacy apps that appear in the Cluster Manager U
 
 ### Using the Authorized Cluster Endpoint (ACE)
 
-An [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) (ACE) provides access to the Kubernetes API of Rancher-provisioned RKE, RKE2, and K3s clusters. When enabled, the ACE adds a context to kubeconfig files generated for the cluster. The context uses a direct endpoint to the cluster, thereby bypassing Rancher. This reduces load on Rancher for cases where unmediated API access is acceptable or preferable. See [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) for more information and configuration instructions.
+An [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) (ACE) provides access to the Kubernetes API of Rancher-provisioned RKE2 and K3s clusters. When enabled, the ACE adds a context to kubeconfig files generated for the cluster. The context uses a direct endpoint to the cluster, thereby bypassing Rancher. This reduces load on Rancher for cases where unmediated API access is acceptable or preferable. See [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) for more information and configuration instructions.
 
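+Once the ACE context is present in a downloaded kubeconfig, switching to it is a normal `kubectl` operation; the context name below is a hypothetical example:
+
+```bash
+kubectl config get-contexts                   # list contexts, including the direct ACE context
+kubectl config use-context my-cluster-fqdn    # placeholder ACE context name
+kubectl get nodes                             # requests now bypass the Rancher proxy
+```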
 ### Reducing Event Handler Executions
 
@@ -14,7 +14,6 @@ For information on editing cluster membership, go to [this page.](../../how-to-g
 
 The cluster configuration options depend on the type of Kubernetes cluster:
 
-- [RKE Cluster Configuration](rancher-server-configuration/rke1-cluster-configuration.md)
 - [RKE2 Cluster Configuration](rancher-server-configuration/rke2-cluster-configuration.md)
 - [K3s Cluster Configuration](rancher-server-configuration/k3s-cluster-configuration.md)
 - [EKS Cluster Configuration](rancher-server-configuration/eks-cluster-configuration.md)
+87
@@ -0,0 +1,87 @@
+---
+title: GCE Machine Configuration
+---
+
+<head>
+<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/google-gce"/>
+</head>
+
+For more information about Google Cloud Platform (GCP) and the Google Compute Engine (GCE), refer to the official [GCP documentation](https://cloud.google.com/docs).
+
+### Zone
+
+The GCP region and zone that the VM will be deployed to. For example, `us-east1-b`.
+
+### Machine Image Project
+
+The image project that the desired image families belong to.
+
+### Machine Image Family
+
+The image family that the desired machine operating system belongs to.
+
+### Machine Image
+
+The operating system that will be installed onto the VM.
+
+### Disk Type
+
+The type of the disk attached to the VM. The available types may differ between regions.
+
+### Disk Size
+
+The size of the disk attached to the VM, in gigabytes.
+
+### Machine Type
+
+The type of VM that will be deployed. Machine types determine the resources (vCPU, RAM, etc.) allocated to each node.
+
+### Network
+
+The VPC network that the VM will be created in. This value cannot be changed once the machine pool has been provisioned.
+
+### Subnet
+
+The VPC subnetwork that the VM will be created in. This value cannot be changed once the machine pool has been provisioned.
+
+### Username
+
+A custom username set as the default user of the GCE VM.
+
+### External Address
+
+The desired external IP address for the GCE VM.
+
+### Scopes
+
+A list of OAuth2 scopes which allow the VM to access other GCP APIs.
+
+### Allow Internal Communication
+
+By default, a VPC firewall rule is automatically created to expose a fixed set of ports within the VPC to facilitate communication between cluster nodes. This behavior can be disabled on a per-machine-pool basis by clicking the `Show Advanced` option and disabling the `Allow Internal Communication` checkbox.
+
+### Expose External Ports
+
+A list of ports to be opened _externally_ to the wider internet. Open ports are defined at the machine pool level. Enabling this option will result in the automatic creation of a VPC firewall rule. This rule will be automatically deleted when the cluster or machine pool is deleted.
+
+### Network Tags
+
+A list of _network tags_, which can be used to associate preexisting firewall rules with all VMs within a machine pool.
+
+### Labels
+
+A comma-separated list of custom labels to be attached to all VMs within a given machine pool. Unlike network tags, labels do not influence networking behavior and only serve to organize cloud resources.
+
+## Advanced Options
+
+When creating clusters via the Rancher UI, some options are automatically configured for you. However, when creating machine config objects manually, you must ensure you properly configure the fields below.
+
+### external-firewall-rule-prefix
+
+A prefix that will be used when creating the firewall rule to expose ports publicly. Ideally, this should be a concatenation of the machine pool name and the cluster name. This field must be set if the machine pool is configured to expose ports publicly; otherwise it can be omitted.
+
+### internal-firewall-rule-prefix
+
+A prefix that will be used when creating the internal firewall rule which allows for communication between nodes within the cluster. If this field is omitted, no internal firewall rule will be created.
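+When creating a machine config object manually, the result is a Kubernetes resource carrying these fields. The following is a hypothetical sketch only: the API group, kind, and exact field names are assumptions and should be verified against the machine-config CRDs in your Rancher installation.
+
+```yaml
+apiVersion: rke-machine-config.cattle.io/v1   # assumed API group/version
+kind: GoogleConfig                            # assumed kind for the GCE driver
+metadata:
+  name: example-gce-pool                      # placeholder name
+  namespace: fleet-default
+zone: us-east1-b
+machineType: n1-standard-2
+diskType: pd-ssd
+diskSize: "100"
+network: default
+subnetwork: default
+# Assumed camelCase equivalents of the advanced options described above:
+externalFirewallRulePrefix: example-gce-pool-example-cluster
+internalFirewallRulePrefix: example-gce-pool-example-cluster
+```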
-1
@@ -6,7 +6,6 @@ title: Rancher Server Configuration
 <link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration"/>
 </head>
 
-- [RKE1 Cluster Configuration](rke1-cluster-configuration.md)
 - [RKE2 Cluster Configuration](rke2-cluster-configuration.md)
 - [K3s Cluster Configuration](k3s-cluster-configuration.md)
 - [EKS Cluster Configuration](eks-cluster-configuration.md)
+2 -2
@@ -133,9 +133,9 @@ If the cloud provider you want to use is not listed as an option, you will need
 
 The default [pod security admission configuration template](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) for the cluster.
 
-##### Worker CIS Profile
+##### Worker Compliance Profile
 
-Select a [CIS benchmark](../../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md) to validate the system configuration against.
+Select a [compliance benchmark](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) to validate the system configuration against.
 
 ##### Project Network Isolation
 
@@ -351,29 +351,29 @@ receivers:
   - service_key: 'database-integration-key'
 ```
 
-## Example Route Config for CIS Scan Alerts
+## Example Route Config for Compliance Scan Alerts
 
-While configuring the routes for `rancher-cis-benchmark` alerts, you can specify the matching using the key-value pair `job: rancher-cis-scan`.
+While configuring the routes for `rancher-compliance` alerts, you can specify the matching using the key-value pair `job: rancher-compliance-scan`.
 
-For example, the following route configuration could be used with a Slack receiver named `test-cis`:
+For example, the following route configuration could be used with a Slack receiver named `test-compliance`:
 
 ```yaml
 spec:
-  receiver: test-cis
+  receiver: test-compliance
   group_by:
 #    - string
   group_wait: 30s
   group_interval: 30s
   repeat_interval: 30s
   match:
-    job: rancher-cis-scan
+    job: rancher-compliance-scan
 #    key: string
   match_re:
     {}
 #    key: string
 ```
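+A matching Slack receiver could be defined along these lines; the channel and webhook URL are placeholders:
+
+```yaml
+receivers:
+  - name: test-compliance
+    slack_configs:
+      - channel: '#compliance-alerts'   # placeholder channel
+        api_url: <SLACK_WEBHOOK_URL>    # placeholder incoming-webhook URL
+        send_resolved: true
+```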
 
-For more information on enabling alerting for `rancher-cis-benchmark`, see [this section.](../../how-to-guides/advanced-user-guides/cis-scan-guides/enable-alerting-for-rancher-cis-benchmark.md)
+For more information on enabling alerting for `rancher-compliance`, see [this section.](../../how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md)
 
 ## Trusted CA for Notifiers
 
@@ -42,8 +42,8 @@ Rancher's integration with Istio was improved in Rancher v2.5.
 
 For more information, refer to the Istio documentation [here.](../integrations-in-rancher/istio/istio.md)
 
-## CIS Scans
+## Compliance Scans
 
-Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark.
+Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in widely recognized Kubernetes security benchmarks, such as STIG.
 
-For more information, refer to the CIS scan documentation [here.](../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md)
+For more information, refer to the Compliance scan documentation [here.](../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md)
@@ -32,14 +32,6 @@ One option for the underlying Kubernetes cluster is to use K3s Kubernetes. K3s i
 
 
 
-### RKE Kubernetes Cluster Installations
-
-In an RKE installation, the cluster data is replicated on each of three etcd nodes in the cluster, providing redundancy and data duplication in case one of the nodes fails.
-
-<figcaption>Architecture of an RKE Kubernetes Cluster Running the Rancher Management Server</figcaption>
-
-
 
 ## Recommended Load Balancer Configuration for Kubernetes Installations
 
 We recommend the following configurations for the load balancer and Ingress controllers:
@@ -61,7 +53,7 @@ For the best performance and greater security, we recommend a dedicated Kubernet
 
 ## Recommended Node Roles for Kubernetes Installations
 
-The below recommendations apply when Rancher is installed on a K3s Kubernetes cluster or an RKE Kubernetes cluster.
+The below recommendations apply when Rancher is installed on a K3s Kubernetes cluster.
 
 ### K3s Cluster Roles
 
@@ -69,38 +61,6 @@ In K3s clusters, there are two types of nodes: server nodes and agent nodes. Bot
 
 For the cluster running the Rancher management server, we recommend using two server nodes. Agent nodes are not required.
 
-### RKE Cluster Roles
-
-If Rancher is installed on an RKE Kubernetes cluster, the cluster should have three nodes, and each node should have all three Kubernetes roles: etcd, controlplane, and worker.
-
-### Contrasting RKE Cluster Architecture for Rancher Server and for Downstream Kubernetes Clusters
-
-Our recommendation for RKE node roles on the Rancher server cluster contrasts with our recommendations for the downstream user clusters that run your apps and services.
-
-Rancher uses RKE as a library when provisioning downstream Kubernetes clusters. Note: The capability to provision downstream K3s clusters will be added in a future version of Rancher.
-
-For downstream Kubernetes clusters, we recommend that each node in a user cluster should have a single role for stability and scalability.
-
-
-
-RKE only requires at least one node with each role and does not require nodes to be restricted to one role. However, for the clusters that run your apps, we recommend separate roles for each node so that workloads on worker nodes don't interfere with the Kubernetes master or cluster data as your services scale.
-
-We recommend that downstream user clusters should have at least:
-
-- **Three nodes with only the etcd role** to maintain a quorum if one node is lost, making the state of your cluster highly available
-- **Two nodes with only the controlplane role** to make the master component highly available
-- **One or more nodes with only the worker role** to run the Kubernetes node components, as well as the workloads for your apps and services
-
-With that said, it is safe to use all three roles on three nodes when setting up the Rancher server because:
-
-* It allows one `etcd` node failure.
-* It maintains multiple instances of the master components by having multiple `controlplane` nodes.
-* No other workloads than Rancher itself should be created on this cluster.
-
-Because no additional workloads will be deployed on the Rancher server cluster, in most cases it is not necessary to use the same architecture that we recommend for the scalability and reliability of downstream clusters.
-
-For more best practices for downstream clusters, refer to the [production checklist](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/checklist-for-production-ready-clusters.md) or our [best practices guide.](../best-practices/best-practices.md)
 
 ## Architecture for an Authorized Cluster Endpoint (ACE)
 
 If you are using an [authorized cluster endpoint (ACE),](../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) we recommend creating an FQDN pointing to a load balancer which balances traffic across your nodes with the `controlplane` role.
+5 -14
@@ -41,7 +41,7 @@ There is one cluster controller and one cluster agent for each downstream cluste
 - Watches for resource changes in the downstream cluster
 - Brings the current state of the downstream cluster to the desired state
 - Configures access control policies to clusters and projects
-- Provisions clusters by calling the required Docker machine drivers and Kubernetes engines, such as RKE and GKE
+- Provisions clusters by calling the required Docker machine drivers and Kubernetes engines, such as GKE
 
 By default, to enable Rancher to communicate with a downstream cluster, the cluster controller connects to the cluster agent. If the cluster agent is not available, the cluster controller can connect to a [node agent](#3-node-agents) instead.
 
@@ -62,7 +62,7 @@ The `cattle-node-agent` is deployed using a [DaemonSet](https://kubernetes.io/do
 
 An authorized cluster endpoint (ACE) allows users to connect to the Kubernetes API server of a downstream cluster without having to route their requests through the Rancher authentication proxy.
 
-> ACE is available on RKE, RKE2, and K3s clusters that are provisioned or registered with Rancher. It's not available on clusters in a hosted Kubernetes provider, such as Amazon's EKS.
+> ACE is available on RKE2 and K3s clusters that are provisioned or registered with Rancher. It's not available on clusters in a hosted Kubernetes provider, such as Amazon's EKS.
 
 There are two main reasons why a user might need the authorized cluster endpoint:
 
@@ -178,11 +178,8 @@ If you see an error related to "impersonation" in the UI, pay close attention to

 The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster:

-- `rancher-cluster.yml`: The RKE cluster configuration file.
-- `kube_config_rancher-cluster.yml`: The Kubeconfig file for the cluster, this file contains credentials for full access to the cluster. You can use this file to authenticate with a Rancher-launched Kubernetes cluster if Rancher goes down.
-- `rancher-cluster.rkestate`: The Kubernetes cluster state file. This file contains credentials for full access to the cluster. Note: This state file is only created when using RKE v0.2.0 or higher.
-
-> **Note:** The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file.
+- `config.yaml`: The RKE2 and K3s cluster configuration file.
+- `rke2.yaml` or `k3s.yaml`: The Kubeconfig file for your RKE2 or K3s cluster. This file contains credentials for full access to the cluster. You can use this file to authenticate with a Rancher-launched Kubernetes cluster if Rancher goes down.

 For more information on connecting to a cluster without the Rancher authentication proxy and other configuration options, refer to the [kubeconfig file](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md) documentation.

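On RKE2 and K3s servers these files live in well-known locations, so you can reach the cluster directly if Rancher is down; a sketch assuming the default paths:

```bash
# RKE2 defaults; K3s uses /etc/rancher/k3s/config.yaml and /etc/rancher/k3s/k3s.yaml.
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get nodes
```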
@@ -194,13 +191,7 @@ The tools that Rancher uses to provision downstream user clusters depends on the

 Rancher can dynamically provision nodes in a provider such as Amazon EC2, DigitalOcean, Azure, or vSphere, then install Kubernetes on them.

-Rancher provisions this type of cluster using [RKE](https://github.com/rancher/rke) and [docker-machine.](https://github.com/rancher/machine)
-
-### Rancher Launched Kubernetes for Custom Nodes
-
-When setting up this type of cluster, Rancher installs Kubernetes on existing nodes, which creates a custom cluster.
-
-Rancher provisions this type of cluster using [RKE.](https://github.com/rancher/rke)
+Rancher provisions this type of cluster using [docker-machine.](https://github.com/rancher/machine)

 ### Hosted Kubernetes Providers

@@ -12,7 +12,6 @@ Rancher provides specific security hardening guides for each supported Rancher v

 Rancher uses the following Kubernetes distributions:

-- [**RKE**](https://rancher.com/docs/rke/latest/en/), Rancher Kubernetes Engine, is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.
 - [**RKE2**](https://docs.rke2.io/) is a fully conformant Kubernetes distribution that focuses on security and compliance within the U.S. Federal Government sector.
 - [**K3s**](https://docs.k3s.io/) is a fully conformant, lightweight Kubernetes distribution. It is easy to install, with half the memory requirement of upstream Kubernetes, all in a binary of less than 100 MB.

@@ -22,12 +21,6 @@ To harden a Kubernetes cluster that's running a distribution other than those li

 Each self-assessment guide is accompanied by a hardening guide. These guides were tested alongside the listed Rancher releases. Each self-assessment guide was tested on a specific Kubernetes version and CIS benchmark version. If a CIS benchmark has not been validated for your Kubernetes version, you can use the existing guides until a guide for your version is added.

-### RKE Guides
-
-| Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guides |
-|--------------------|-----------------------|-----------------------|------------------|
-| Kubernetes v1.25/v1.26/v1.27 | CIS v1.7 | [Link](rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md) | [Link](rke1-hardening-guide/rke1-hardening-guide.md) |
-
 ### RKE2 Guides

 | Type | Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guides |
@@ -25,6 +25,6 @@ If you require such features, combine Layer 7 firewalls with [external authentic
 You should protect the following ports behind an [external load balancer](../../how-to-guides/new-user-guides/kubernetes-resources-setup/load-balancer-and-ingress-controller/layer-4-and-layer-7-load-balancing.md#layer-4-load-balancer) that has SSL offload enabled:

 - **K3s:** Port 6443, used by the Kubernetes API.
-- **RKE and RKE2:** Port 6443, used by the Kubernetes API, and port 9345, used for node registration.
+- **RKE2:** Port 6443, used by the Kubernetes API, and port 9345, used for node registration.

 These ports have TLS SAN certificates which list nodes' public IP addresses. An attacker could use that information to gain unauthorized access or monitor activity on the cluster. Protecting these ports helps mitigate against nodes' public IP addresses being disclosed to potential attackers.
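You can see exactly what a serving certificate discloses by inspecting its SAN list; a sketch with a hypothetical node address:

```bash
# Dump the Subject Alternative Names presented on the Kubernetes API port.
openssl s_client -connect 203.0.113.10:6443 </dev/null 2>/dev/null \
  | openssl x509 -noout -ext subjectAltName
```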
@@ -31,22 +31,14 @@ On this page, we provide security related documentation along with resources to

 NeuVector is an open-source, container-focused security application that is now integrated into Rancher. NeuVector provides production security, DevOps vulnerability protection, a container firewall, and more. Please see the [Rancher docs](../../integrations-in-rancher/neuvector/neuvector.md) and the [NeuVector docs](https://open-docs.neuvector.com/) for more information.

-## Running a CIS Security Scan on a Kubernetes Cluster
+## Running a Compliance Security Scan on a Kubernetes Cluster

-Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the [CIS](https://www.cisecurity.org/cis-benchmarks/) (Center for Internet Security) Kubernetes Benchmark.
+Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices.

-The CIS Kubernetes Benchmark is a reference document that can be used to establish a secure configuration baseline for Kubernetes.
+When Rancher runs a Compliance scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests.

-The Center for Internet Security (CIS) is a 501(c\)(3) non-profit organization, formed in October 2000, with a mission to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace".
-
-CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team.
-
-The Benchmark provides recommendations of two types: Automated and Manual. We run tests related to only Automated recommendations.
-
-When Rancher runs a CIS security scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests.
-
-For details, refer to the section on [security scans](../../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md).
+For details, refer to the section on [security scans](../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md).
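With the Compliance (CIS benchmark) operator installed, a scan can also be triggered from the command line. A minimal sketch, assuming the `cis.cattle.io` CRDs are present and that a scan profile with this name exists in your installation:

```bash
kubectl create -f - <<EOF
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
  name: example-scan
spec:
  scanProfileName: rke2-cis-1.8-profile
EOF
```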

 ## SELinux RPM

 [Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. After being historically used by government agencies, SELinux is now industry standard and is enabled by default on CentOS 7 and 8.
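To check whether a node is running SELinux and in which mode, for example:

```bash
getenforce   # prints Enforcing, Permissive, or Disabled
sestatus     # more detailed status, if the policycoreutils package is installed
```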
@@ -67,7 +59,7 @@ Each version of the hardening guide is intended to be used with specific version

 The benchmark self-assessment is a companion to the Rancher security hardening guide. While the hardening guide shows you how to harden the cluster, the benchmark guide is meant to help you evaluate the level of security of the hardened cluster.

-Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher created clusters. The original benchmark documents can be downloaded from the [CIS website](https://www.cisecurity.org/benchmark/kubernetes/).
+This guide walks through the various controls and provides updated example commands to audit compliance in Rancher-created clusters. The original benchmark documents can be downloaded from the [CIS website](https://www.cisecurity.org/benchmark/kubernetes/).

 Each version of Rancher's self-assessment guide corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark.

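The audit commands in those guides are typically small shell checks against the running components; an illustrative (hypothetical) example in the same spirit:

```bash
# Verify that the kube-apiserver was started with anonymous auth disabled,
# one of the standard CIS benchmark checks.
ps -ef | grep kube-apiserver | grep -v grep | grep -- '--anonymous-auth=false' \
  && echo "PASS" || echo "FAIL"
```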
@@ -20,10 +20,7 @@ Each Rancher version is designed to be compatible with a single version of the w

 | Rancher Version | Webhook Version | Availability in Prime | Availability in Community |
 |-----------------|-----------------|-----------------------|---------------------------|
-| v2.11.3 | v0.7.3 | ✓ | ✓ |
-| v2.11.2 | v0.7.2 | ✓ | ✓ |
-| v2.11.1 | v0.7.1 | ✓ | ✓ |
-| v2.11.0 | v0.7.0 | ✗ | ✓ |
+| v2.12.0 | v0.8.0 | ✗ | ✓ |

 ## Why Do We Need It?

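To confirm which webhook version is actually deployed, you can inspect the image tag of the webhook deployment in the local cluster; a quick check, assuming default names:

```bash
kubectl -n cattle-system get deployment rancher-webhook \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```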
@@ -10,11 +10,11 @@ If you operate Rancher behind a proxy and you want to access services through th

 Make sure `NO_PROXY` contains the network addresses, network address ranges and domains that should be excluded from using the proxy.

 | Environment variable | Purpose |
-| -------------------- | ----------------------------------------------------------------------------------------------------------------------- |
+|----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | HTTP_PROXY | Proxy address to use when initiating HTTP connection(s) |
 | HTTPS_PROXY | Proxy address to use when initiating HTTPS connection(s) |
-| NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s) |
+| NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s). <br/><br/> The value must be a comma-delimited string which contains IP addresses, CIDR notation, domain names, or special DNS labels (*). For a full description of the expected value format, refer to the [**upstream Golang documentation**](https://pkg.go.dev/golang.org/x/net/http/httpproxy#Config) |

 :::note Important:

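For example, a typical value (addresses here are illustrative; adjust to your environment) might be set as:

```bash
# Keep local, private-range, and cluster-internal traffic off the proxy.
export NO_PROXY="127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local,example.internal"
```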
@@ -62,4 +62,4 @@ acl SSL_ports port 2376

 acl Safe_ports port 22 # ssh
 acl Safe_ports port 2376 # docker port
 ```
@@ -8,7 +8,7 @@
 | [Managing Projects, Namespaces and Workloads](../how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces.md) | ✓ | ✓ | ✓ | ✓ |
 | [Using App Catalogs](../how-to-guides/new-user-guides/helm-charts-in-rancher/helm-charts-in-rancher.md) | ✓ | ✓ | ✓ | ✓ |
 | Configuring Tools ([Alerts, Notifiers, Monitoring](../integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md), [Logging](../integrations-in-rancher/logging/logging.md), [Istio](../integrations-in-rancher/istio/istio.md)) | ✓ | ✓ | ✓ | ✓ |
-| [Running Security Scans](../how-to-guides/advanced-user-guides/cis-scan-guides/cis-scan-guides.md) | ✓ | ✓ | ✓ | ✓ |
+| [Running Security Scans](../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) | ✓ | ✓ | ✓ | ✓ |
 | [Ability to rotate certificates](../how-to-guides/new-user-guides/manage-clusters/rotate-certificates.md) | ✓ | ✓ | | |
 | Ability to [backup](../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher-launched-kubernetes-clusters.md) and [restore](../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-rancher-launched-kubernetes-clusters-from-backup.md) Rancher-launched clusters | ✓ | ✓ | | ✓<sup>4</sup> |
 | [Cleaning Kubernetes components when clusters are no longer reachable from Rancher](../how-to-guides/new-user-guides/manage-clusters/clean-cluster-nodes.md) | ✓ | | | |
+823 -767 (file diff suppressed because it is too large)
@@ -35,7 +35,6 @@ title: 参与 Rancher 社区贡献
 | (Rancher) Docker Machine | https://github.com/rancher/machine | 使用主机驱动时使用的 Docker Machine 二进制文件的源码仓库。这是 `docker/machine` 仓库的一个 fork。 |
 | machine-package | https://github.com/rancher/machine-package | 用于构建 Rancher Docker Machine 二进制文件。 |
 | kontainer-engine | https://github.com/rancher/kontainer-engine | kontainer-engine 的源码仓库,它是配置托管 Kubernetes 集群的工具。 |
-| RKE repository | https://github.com/rancher/rke | Rancher Kubernetes Engine 的源码仓库,该工具可在任何主机上配置 Kubernetes 集群。 |
 | CLI | https://github.com/rancher/cli | Rancher 2.x 中使用的 Rancher CLI 的源码仓库。 |
 | (Rancher) Helm repository | https://github.com/rancher/helm | 打包的 Helm 二进制文件的源码仓库。这是 `helm/helm` 仓库的一个 fork。 |
 | Telemetry repository | https://github.com/rancher/telemetry | Telemetry 二进制文件的源码仓库。 |
@@ -106,27 +105,6 @@ title: 参与 Rancher 社区贡献
   -l app=rancher \
   --timestamps=true
   ```
-- 在 RKE 集群的每个节点上使用 `docker` 的 Docker 安装
-
-  ```
-  docker logs \
-  --timestamps \
-  $(docker ps | grep -E "rancher/rancher@|rancher_rancher" | awk '{ print $1 }')
-  ```
-- 使用 RKE 附加组件的 Kubernetes 安装
-
-  :::note
-
-  确保你配置了正确的 kubeconfig(例如,如果 Rancher Server 安装在 Kubernetes 集群上,则 `export KUBECONFIG=$PWD/kube_config_cluster.yml`)或通过 UI 使用了嵌入式 kubectl。
-
-  :::
-
-  ```
-  kubectl -n cattle-system \
-  logs \
-  --timestamps=true \
-  -f $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name="cattle-server") | .metadata.name')
-  ```
 - 系统日志记录(可能不存在,取决于操作系统)
   - `/var/log/messages`
   - `/var/log/syslog`
@@ -16,10 +16,7 @@ Rancher 将在 GitHub 上发布的 Rancher 的[发版说明](https://github.com/

 | Patch 版本 | 发布时间 |
 | ----------------------------------------------------------------- | ------------------ |
-| [2.11.3](https://github.com/rancher/rancher/releases/tag/v2.11.2) | 2025 年 6 月 25 日 |
-| [2.11.2](https://github.com/rancher/rancher/releases/tag/v2.11.2) | 2025 年 5 月 22 日 |
-| [2.11.1](https://github.com/rancher/rancher/releases/tag/v2.11.1) | 2025 年 4 月 24 日 |
-| [2.11.0](https://github.com/rancher/rancher/releases/tag/v2.11.0) | 2025 年 3 月 31 日 |
+| [2.12.0](https://github.com/rancher/rancher/releases/tag/v2.12.0) | 2025 年 7 月 31 日 |

 ## 当一个功能被标记为弃用我可以得到什么样的预期?

+1 -4
@@ -15,10 +15,7 @@ title: 安装 Adapter

 | Rancher 版本 | Adapter 版本 |
 |-----------------|:----------------:|
-| v2.11.3 | v106.0.0+up6.0.0 |
-| v2.11.2 | v106.0.0+up6.0.0 |
-| v2.11.1 | v106.0.0+up6.0.0 |
-| v2.11.0 | v106.0.0+up6.0.0 |
+| v2.12.0 | 107.0.0+up7.0.0 |

 ## 1. 获取对 Local 集群的访问权限

+1 -1
@@ -16,4 +16,4 @@ While a managed cluster is disconnected from Rancher, management operations will

 - **Cleaning Up Disconnected Clusters**: Regularly remove clusters that will no longer reconnect to Rancher (e.g., clusters that have been decommissioned or destroyed). Keeping such clusters in the Rancher management system consumes unnecessary resources, which could impact Rancher's performance over time.

-- **Certificate Rotation Considerations**: When designing processes that involve regularly shutting down clusters, whether connected to Rancher or not, take into account certificate rotation policies. For example, RKE/RKE2/K3s clusters may rotate certificates on startup if they exceeded their lifetime.
+- **Certificate Rotation Considerations**: When designing processes that involve regularly shutting down clusters, whether connected to Rancher or not, take into account certificate rotation policies. For example, RKE2/K3s clusters may rotate certificates on startup if they exceeded their lifetime.
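A quick way to see whether certificates are close to (or past) their lifetime on an RKE2 server node; a sketch assuming the default paths:

```bash
# K3s keeps the equivalent file under /var/lib/rancher/k3s/server/tls/.
openssl x509 -noout -enddate \
  -in /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt
```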
+1 -3
@@ -26,9 +26,7 @@ Rancher Logging 使用的是 [Logging Operator](https://github.com/kube-logging/

 ### Kubernetes 组件

-_ClusterFlows_ 能够收集 Kubernetes 集群中所有主机上所有容器的日志。如果这些容器包含在 Kubernetes Pod 中,这个方法是适用的。但是,RKE 容器不存在于 Kubernetes 内。
+_ClusterFlows_ 能够收集 Kubernetes 集群中所有主机上所有容器的日志。如果这些容器包含在 Kubernetes Pod 中,这个方法是适用的。

-目前,Rancher 能搜集 RKE 容器的日志,但不能轻易过滤。这是因为这些日志不包含源容器的信息(例如 `etcd` 或 `kube-apiserver`)。
-
 Rancher 的未来版本将包含源容器名称,来支持过滤这些组件的日志。该功能实现之后,你将能够自定义 _ClusterFlow_ 来**仅**检索 Kubernetes 组件日志,并将日志发送到适当的输出位置。

-4
@@ -18,10 +18,6 @@ title: Rancher 运行技巧

 在部署节点时,请遵循 K8s 和 etcd 的最佳实践,其中包括禁用 swap,检查集群中的所有主机之间是否有良好的网络连接,为每个节点使用唯一的主机名、MAC 地址和 `product_uuids`,检查所需端口是否已经打开,并使用配置 SSD 的 etcd 进行部署。详情请参见 [kubernetes 官方文档](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin)和 [etcd 性能操作指南](https://etcd.io/docs/v3.5/op-guide/performance/)。

-## 使用 RKE 时:备份状态文件(Statefile)
-
-RKE 将集群状态记录在一个名为 `cluster.rkestate` 的文件中,该文件对集群的恢复和/或通过 RKE 维护集群非常重要。由于这个文件包含证书材料,我们强烈建议在备份前对该文件进行加密。请在每次运行 `rke up` 后备份状态文件。
-
 ## 在同一个数据中心运行集群中的所有节点

 为达到最佳性能,请在同一地理数据中心运行所有三个节点。如果你在云(如 AWS)上运行节点,请在不同的可用区(AZ)中运行这三个节点。例如,在 us-west-2a 中运行节点 1,在 us-west-2b 中运行节点 2,在 us-west-2c 中运行节点 3。

+1 -1
@@ -76,7 +76,7 @@ Rancher 使用两个 Kubernetes 应用程序资源:`apps.projects.cattle.io`

 ### 使用授权集群端点 (ACE)

-[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点) (ACE) 提供了 Rancher 部署的 RKE、RKE2 和 K3s 集群的 Kubernetes API 访问。启用后,ACE 会为生成的 `kubeconfig` 文件配置直接访问下游集群 Endpoint,从而绕过 Rancher 代理。在可以直接访问下游集群 Kubernetes API 的场景下,可以减少 Rancher 负载。有关更多信息,请参阅[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点)配置说明。
+[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点) (ACE) 提供了 Rancher 部署的 RKE2 和 K3s 集群的 Kubernetes API 访问。启用后,ACE 会为生成的 `kubeconfig` 文件配置直接访问下游集群 Endpoint,从而绕过 Rancher 代理。在可以直接访问下游集群 Kubernetes API 的场景下,可以减少 Rancher 负载。有关更多信息,请参阅[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点)配置说明。

 ### 减少 Event Handler 执行

-1
@@ -14,7 +14,6 @@ title: 集群配置

 集群配置选项取决于 Kubernetes 集群的类型:

-- [RKE 集群配置](rancher-server-configuration/rke1-cluster-configuration.md)
 - [RKE2 集群配置](rancher-server-configuration/rke2-cluster-configuration.md)
 - [K3s 集群配置](rancher-server-configuration/k3s-cluster-configuration.md)
 - [EKS 集群配置](rancher-server-configuration/eks-cluster-configuration.md)
-1
@@ -6,7 +6,6 @@ title: Rancher Server 配置
 <link rel="canonical" href="https://ranchermanager.docs.rancher.com/zh/reference-guides/cluster-configuration/rancher-server-configuration"/>
 </head>

-- [RKE1 集群配置](rke1-cluster-configuration.md)
 - [RKE2 集群配置](rke2-cluster-configuration.md)
 - [K3s 集群配置](k3s-cluster-configuration.md)
 - [EKS 集群配置](eks-cluster-configuration.md)
+1 -41
@@ -28,14 +28,6 @@ title: 架构推荐

 

-### RKE Kubernetes 集群安装
-
-在 RKE 安装中,集群数据在集群中的三个 etcd 节点上复制,以在某个节点发生故障时提供冗余和进行数据复制。
-
-<figcaption>运行 Rancher Management Server 的 RKE Kubernetes 集群的架构</figcaption>
-
-

 ## Kubernetes 安装的负载均衡器推荐配置

 我们建议你为负载均衡器和 Ingress Controller 使用以下配置:
@@ -57,7 +49,7 @@ title: 架构推荐

 ## Kubernetes 安装的推荐节点角色

-如果 Rancher 安装在 K3s Kubernetes 或 RKE Kubernetes 集群上,以下建议适用。
+如果 Rancher 安装在 K3s Kubernetes 上,则适用以下建议。

 ### K3s 集群角色

@@ -65,38 +57,6 @@ title: 架构推荐

 对于运行 Rancher Management Server 的集群,我们建议使用两个 server 节点。不需要 Agent 节点。

-### RKE 集群角色
-
-如果 Rancher 安装在 RKE Kubernetes 集群上,该集群应具有三个节点,并且每个节点都应具有所有三个 Kubernetes 角色,分别是 etcd,controlplane 和 worker。
-
-### Rancher Server 和下游 Kubernetes 集群的 RKE 集群架构对比
-
-我们对 Rancher Server 集群上 RKE 节点角色建议,与对运行你的应用和服务的下游集群的建议相反。
-
-在配置下游 Kubernetes 集群时,Rancher 使用 RKE 作为创建下游 Kubernetes 集群的工具。注意:Rancher 将在未来的版本中添加配置下游 K3s 集群的功能。
-
-我们建议下游 Kubernetes 集群中的每个节点都只分配一个角色,以确保稳定性和可扩展性。
-
-
-
-RKE 每个角色至少需要一个节点,但并不强制每个节点只能有一个角色。但是,我们建议为运行应用的集群中的每个节点,使用单独的角色,以保证在服务拓展时,worker 节点上的工作负载不影响 Kubernetes master 或集群的数据。
-
-以下是我们对下游集群的最低配置建议:
-
-- **三个仅使用 etcd 角色的节点** ,以在三个节点中其中一个发生故障时,仍能保障集群的高可用性。
-- **两个只有 controlplane 角色的节点** ,以保证 master 组件的高可用性。
-- **一个或多个只有 worker 角色的节点**,用于运行 Kubernetes 节点组件,以及你部署的服务或应用的工作负载。
-
-在设置 Rancher Server 时,在三个节点上使用全部这三个角色也是安全的,因为:
-
-* 它允许一个 `etcd` 节点故障。
-* 它通过多个 `controlplane` 节点来维护 master 组件的多个实例。
-* 此集群上没有创建除 Rancher 之外的其他工作负载。
-
-由于 Rancher Server 集群中没有部署其他工作负载,因此在大多数情况下,这个集群都不需要使用我们出于可扩展性和可用性的考虑,而为下游集群推荐的架构。
-
-有关下游集群的最佳实践,请查看[生产环境清单](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/checklist-for-production-ready-clusters.md)或[最佳实践](../best-practices/best-practices.md)。

 ## 授权集群端点架构

 如果你使用[授权集群端点(ACE)](../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点),我们建议你创建一个指向负载均衡器的 FQDN,这个负载均衡器把流量转到所有角色为 `controlplane` 的节点。

+5 -14
@@ -39,7 +39,7 @@ Rancher 使用 [ServiceAccount](https://kubernetes.io/docs/tasks/configure-pod-c
 - 检测下游集群中的资源变化
 - 将下游集群的当前状态变更到目标状态
 - 配置集群和项目的访问控制策略
-- 通过调用所需的 Docker Machine 驱动和 Kubernetes 引擎(例如 RKE 和 GKE)来配置集群
+- 通过调用所需的 Docker Machine 驱动和 Kubernetes 引擎(例如,GKE)来配置集群

 默认情况下,Cluster Controller 连接到 Cluster Agent,Rancher 才能与下游集群通信。如果 Cluster Agent 不可用,Cluster Controller 可以连接到 [Node Agent](#3-node-agents)。

@@ -60,7 +60,7 @@ Cluster Agent,也叫做 `cattle-cluster-agent`,是运行在下游集群中

 授权集群端点(ACE)可连接到下游集群的 Kubernetes API Server,而不用通过 Rancher 认证代理调度请求。

-> 授权集群端点仅适用于 Rancher 启动的 Kubernetes 集群,即只适用于 Rancher [使用 RKE](../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) 来配置的集群。它不适用于导入的集群,也不适用于托管在 Kubernetes 提供商中的集群(例如 Amazon 的 EKS)。
+> 授权集群端点仅适用于 Rancher 启动的 Kubernetes 集群,即 [Rancher 配置的集群](../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md)。它不适用于导入的集群,也不适用于托管在 Kubernetes 提供商中的集群(例如 Amazon 的 EKS)。

 授权集群端点的主要用途:

@@ -81,11 +81,8 @@ Cluster Agent,也叫做 `cattle-cluster-agent`,是运行在下游集群中

 维护、排除问题和升级集群需要用到以下文件,请妥善保管这些文件:

-- `rancher-cluster.yml`:RKE 集群配置文件。
-- `kube_config_rancher-cluster.yml`:集群的 Kubeconfig 文件,包含完全访问集群的凭证。如果 Rancher 出现问题时,你可以使用此文件认证由 Rancher 启动的 Kubernetes 集群。
-- `rancher-cluster.rkestate`:Kubernetes 集群状态文件,文件包含用于完全访问集群的凭证。注意:仅在使用 RKE v0.2.0 或更高版本时,才会创建此该文件。
-
-> **注意**:后两个文件名中的 `rancher-cluster` 部分取决于你命名 RKE 集群配置文件的方式。
+- `config.yaml`: The RKE2 and K3s cluster configuration file.
+- `rke2.yaml` or `k3s.yaml`: The Kubeconfig file for your RKE2 or K3s cluster. This file contains credentials for full access to the cluster. You can use this file to authenticate with a Rancher-launched Kubernetes cluster if Rancher goes down.

 有关在没有 Rancher 认证代理和其他配置选项的情况下连接到集群的更多信息,请参见 [kubeconfig 文件](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md)。

@@ -97,13 +94,7 @@ Rancher 使用什么工具配置下游集群,取决于集群的类型。

 Rancher 可以动态启动云上(如 Amazon EC2、DigitalOcean、Azure 或 vSphere 等)的节点,然后在节点上安装 Kubernetes。

-Rancher 使用 [RKE](https://github.com/rancher/rke) 和 [docker-machine](https://github.com/rancher/machine) 来配置这类型的集群。
-
-### Rancher 为自定义节点启动 Kubernetes
-
-在配置此类集群时,Rancher 会在现有节点上安装 Kubernetes,从而创建自定义集群。
-
-Rancher 使用 [RKE](https://github.com/rancher/rke) 来启动此类集群。
+Rancher 使用 [docker-machine](https://github.com/rancher/machine) 来配置这类型的集群。

 ### 托管的 Kubernetes 提供商

-7
@@ -12,7 +12,6 @@ Rancher 为每个受支持的 Rancher 版本的 Kubernetes 发行版提供了特

 Rancher 使用以下 Kubernetes 发行版:

-- [**RKE**](https://rancher.com/docs/rke/latest/en/)(Rancher Kubernetes Engine)是经过 CNCF 认证的 Kubernetes 发行版,完全在 Docker 容器中运行。
 - [**RKE2**](https://docs.rke2.io/) 是一个完全合规的 Kubernetes 发行版,专注于安全和合规性。
 - [**K3s**](https://docs.k3s.io/) 是一个完全合规的,轻量级 Kubernetes 发行版。它易于安装,内存需求只有上游 Kubernetes 的一半,所有组件都在一个小于 100 MB 的二进制文件中。

@@ -22,12 +21,6 @@ Rancher 使用以下 Kubernetes 发行版:

 每个自我评估指南都附有强化指南。这些指南与列出的 Rancher 版本一起进行了测试。每个自我评估指南都在特定的 Kubernetes 版本和 CIS Benchmark 版本上进行了测试。如果 CIS Benchmark 尚未针对你的 Kubernetes 版本进行验证,你可以使用现有指南,直到添加适合你的版本的指南。

-### RKE 指南
-
-| Kubernetes 版本 | CIS Benchmark 版本 | 自我评估指南 | 加固指南 |
-|--------------------|-----------------------|-----------------------|------------------|
-| Kubernetes v1.25/v1.26/v1.27 | CIS v1.7 | [链接](rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md) | [链接](rke1-hardening-guide/rke1-hardening-guide.md) |
-
 ### RKE2 指南

 | 类型 | Kubernetes 版本 | CIS Benchmark 版本 | 自我评估指南 | 加固指南 |
+1 -1
@@ -67,7 +67,7 @@ Rancher 加固指南基于 <a href="https://www.cisecurity.org/benchmark/kuberne

 Benchmark 自我评估是 Rancher 安全加固指南的辅助。加固指南展示了如何加固集群,而 Benchmark 指南旨在帮助你评估加固集群的安全级别。

-由于 Rancher 和 RKE 将 Kubernetes 服务安装为 Docker 容器,因此 CIS Kubernetes Benchmark 中的许多管控验证检查都不适用。本指南将介绍各种 controls,并提供更新的示例命令来审计 Rancher 创建的集群的合规性。你可以前往 [CIS 网站](https://www.cisecurity.org/benchmark/kubernetes/)下载原始的 Benchmark 文档。
+本指南将介绍各种 controls,并提供更新的示例命令来审计 Rancher 创建的集群的合规性。你可以前往 [CIS 网站](https://www.cisecurity.org/benchmark/kubernetes/)下载原始的 Benchmark 文档。

 Rancher 自我评估指南的每个版本都对应于强化指南、Rancher、Kubernetes 和 CIS Benchmark 的特定版本。

@@ -20,10 +20,7 @@ Rancher 将 Rancher-Webhook 作为单独的 deployment 和服务部署在 local

 | Rancher Version | Webhook Version | Availability in Prime | Availability in Community |
 |-----------------|-----------------|-----------------------|---------------------------|
-| v2.11.3 | v0.7.3 | ✓ | ✓ |
-| v2.11.2 | v0.7.2 | ✓ | ✓ |
-| v2.11.1 | v0.7.1 | ✓ | ✓ |
-| v2.11.0 | v0.7.0 | ✗ | ✓ |
+| v2.12.0 | v0.8.0 | ✗ | ✓ |

 ## 为什么我们需要它?

@@ -35,7 +35,6 @@ title: 参与 Rancher 社区贡献
 | (Rancher) Docker Machine | https://github.com/rancher/machine | 使用主机驱动时使用的 Docker Machine 二进制文件的源码仓库。这是 `docker/machine` 仓库的一个 fork。 |
 | machine-package | https://github.com/rancher/machine-package | 用于构建 Rancher Docker Machine 二进制文件。 |
 | kontainer-engine | https://github.com/rancher/kontainer-engine | kontainer-engine 的源码仓库,它是配置托管 Kubernetes 集群的工具。 |
-| RKE repository | https://github.com/rancher/rke | Rancher Kubernetes Engine 的源码仓库,该工具可在任何主机上配置 Kubernetes 集群。 |
 | CLI | https://github.com/rancher/cli | Rancher 2.x 中使用的 Rancher CLI 的源码仓库。 |
 | (Rancher) Helm repository | https://github.com/rancher/helm | 打包的 Helm 二进制文件的源码仓库。这是 `helm/helm` 仓库的一个 fork。 |
 | Telemetry repository | https://github.com/rancher/telemetry | Telemetry 二进制文件的源码仓库。 |
@@ -106,27 +105,6 @@ title: 参与 Rancher 社区贡献
   -l app=rancher \
   --timestamps=true
   ```
-- 在 RKE 集群的每个节点上使用 `docker` 的 Docker 安装
-
-  ```
-  docker logs \
-  --timestamps \
-  $(docker ps | grep -E "rancher/rancher@|rancher_rancher" | awk '{ print $1 }')
-  ```
-- 使用 RKE 附加组件的 Kubernetes 安装
-
-  :::note
-
-  确保你配置了正确的 kubeconfig(例如,如果 Rancher Server 安装在 Kubernetes 集群上,则 `export KUBECONFIG=$PWD/kube_config_cluster.yml`)或通过 UI 使用了嵌入式 kubectl。
-
-  :::
-
-  ```
-  kubectl -n cattle-system \
-  logs \
-  --timestamps=true \
-  -f $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name="cattle-server") | .metadata.name')
-  ```
 - 系统日志记录(可能不存在,取决于操作系统)
   - `/var/log/messages`
   - `/var/log/syslog`
@@ -16,9 +16,7 @@ Rancher 将在 GitHub 上发布的 Rancher 的[发版说明](https://github.com/

 | Patch 版本 | 发布时间 |
 | ----------------------------------------------------------------- | ------------------ |
-| [2.11.2](https://github.com/rancher/rancher/releases/tag/v2.11.2) | 2025 年 5 月 22 日 |
-| [2.11.1](https://github.com/rancher/rancher/releases/tag/v2.11.1) | 2025 年 4 月 24 日 |
-| [2.11.0](https://github.com/rancher/rancher/releases/tag/v2.11.0) | 2025 年 3 月 31 日 |
+| [2.12.0](https://github.com/rancher/rancher/releases/tag/v2.12.0) | 2025 年 7 月 31 日 |

 ## 当一个功能被标记为弃用我可以得到什么样的预期?

+1 -3
@@ -15,9 +15,7 @@ title: 安装 Adapter

 | Rancher 版本 | Adapter 版本 |
 |-----------------|:----------------:|
-| v2.11.2 | v106.0.0+up6.0.0 |
-| v2.11.1 | v106.0.0+up6.0.0 |
-| v2.11.0 | v106.0.0+up6.0.0 |
+| v2.12.0 | 107.0.0+up7.0.0 |

 ## 1. 获取对 Local 集群的访问权限

+1 -1
@@ -16,4 +16,4 @@ While a managed cluster is disconnected from Rancher, management operations will

 - **Cleaning Up Disconnected Clusters**: Regularly remove clusters that will no longer reconnect to Rancher (e.g., clusters that have been decommissioned or destroyed). Keeping such clusters in the Rancher management system consumes unnecessary resources, which could impact Rancher's performance over time.

-- **Certificate Rotation Considerations**: When designing processes that involve regularly shutting down clusters, whether connected to Rancher or not, take into account certificate rotation policies. For example, RKE/RKE2/K3s clusters may rotate certificates on startup if they exceeded their lifetime.
+- **Certificate Rotation Considerations**: When designing processes that involve regularly shutting down clusters, whether connected to Rancher or not, take into account certificate rotation policies. For example, RKE2/K3s clusters may rotate certificates on startup if they exceeded their lifetime.
+1 -3
@@ -26,9 +26,7 @@ Rancher Logging 使用的是 [Logging Operator](https://github.com/kube-logging/

 ### Kubernetes 组件

-_ClusterFlows_ 能够收集 Kubernetes 集群中所有主机上所有容器的日志。如果这些容器包含在 Kubernetes Pod 中,这个方法是适用的。但是,RKE 容器不存在于 Kubernetes 内。
+_ClusterFlows_ 能够收集 Kubernetes 集群中所有主机上所有容器的日志。如果这些容器包含在 Kubernetes Pod 中,这个方法是适用的。

-目前,Rancher 能搜集 RKE 容器的日志,但不能轻易过滤。这是因为这些日志不包含源容器的信息(例如 `etcd` 或 `kube-apiserver`)。
-
 Rancher 的未来版本将包含源容器名称,来支持过滤这些组件的日志。该功能实现之后,你将能够自定义 _ClusterFlow_ 来**仅**检索 Kubernetes 组件日志,并将日志发送到适当的输出位置。

-4
@@ -18,10 +18,6 @@ title: Rancher 运行技巧

 在部署节点时,请遵循 K8s 和 etcd 的最佳实践,其中包括禁用 swap,检查集群中的所有主机之间是否有良好的网络连接,为每个节点使用唯一的主机名、MAC 地址和 `product_uuids`,检查所需端口是否已经打开,并使用配置 SSD 的 etcd 进行部署。详情请参见 [kubernetes 官方文档](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin)和 [etcd 性能操作指南](https://etcd.io/docs/v3.5/op-guide/performance/)。

-## 使用 RKE 时:备份状态文件(Statefile)
-
-RKE 将集群状态记录在一个名为 `cluster.rkestate` 的文件中,该文件对集群的恢复和/或通过 RKE 维护集群非常重要。由于这个文件包含证书材料,我们强烈建议在备份前对该文件进行加密。请在每次运行 `rke up` 后备份状态文件。
-
 ## 在同一个数据中心运行集群中的所有节点

 为达到最佳性能,请在同一地理数据中心运行所有三个节点。如果你在云(如 AWS)上运行节点,请在不同的可用区(AZ)中运行这三个节点。例如,在 us-west-2a 中运行节点 1,在 us-west-2b 中运行节点 2,在 us-west-2c 中运行节点 3。

+1 -1
@@ -76,7 +76,7 @@ Rancher 使用两个 Kubernetes 应用程序资源:`apps.projects.cattle.io`

 ### 使用授权集群端点 (ACE)

-[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点) (ACE) 提供了 Rancher 部署的 RKE、RKE2 和 K3s 集群的 Kubernetes API 访问。启用后,ACE 会为生成的 `kubeconfig` 文件配置直接访问下游集群 Endpoint,从而绕过 Rancher 代理。在可以直接访问下游集群 Kubernetes API 的场景下,可以减少 Rancher 负载。有关更多信息,请参阅[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点)配置说明。
+[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点) (ACE) 提供了 Rancher 部署的 RKE2 和 K3s 集群的 Kubernetes API 访问。启用后,ACE 会为生成的 `kubeconfig` 文件配置直接访问下游集群 Endpoint,从而绕过 Rancher 代理。在可以直接访问下游集群 Kubernetes API 的场景下,可以减少 Rancher 负载。有关更多信息,请参阅[授权集群端点](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点)配置说明。

 ### 减少 Event Handler 执行

-1
@@ -14,7 +14,6 @@ title: 集群配置

 集群配置选项取决于 Kubernetes 集群的类型:

-- [RKE 集群配置](rancher-server-configuration/rke1-cluster-configuration.md)
 - [RKE2 集群配置](rancher-server-configuration/rke2-cluster-configuration.md)
 - [K3s 集群配置](rancher-server-configuration/k3s-cluster-configuration.md)
 - [EKS 集群配置](rancher-server-configuration/eks-cluster-configuration.md)
-1
@@ -6,7 +6,6 @@ title: Rancher Server 配置
 <link rel="canonical" href="https://ranchermanager.docs.rancher.com/zh/reference-guides/cluster-configuration/rancher-server-configuration"/>
 </head>

-- [RKE1 集群配置](rke1-cluster-configuration.md)
 - [RKE2 集群配置](rke2-cluster-configuration.md)
 - [K3s 集群配置](k3s-cluster-configuration.md)
 - [EKS 集群配置](eks-cluster-configuration.md)
+1 -41
@@ -28,14 +28,6 @@ title: 架构推荐

 

-### RKE Kubernetes 集群安装
-
-在 RKE 安装中,集群数据在集群中的三个 etcd 节点上复制,以在某个节点发生故障时提供冗余和进行数据复制。
-
-<figcaption>运行 Rancher Management Server 的 RKE Kubernetes 集群的架构</figcaption>
-
-

 ## Kubernetes 安装的负载均衡器推荐配置

 我们建议你为负载均衡器和 Ingress Controller 使用以下配置:
@@ -57,7 +49,7 @@ title: 架构推荐

 ## Kubernetes 安装的推荐节点角色

-如果 Rancher 安装在 K3s Kubernetes 或 RKE Kubernetes 集群上,以下建议适用。
+如果 Rancher 安装在 K3s Kubernetes 上,则适用以下建议。

 ### K3s 集群角色

@@ -65,38 +57,6 @@ title: 架构推荐

 对于运行 Rancher Management Server 的集群,我们建议使用两个 server 节点。不需要 Agent 节点。

-### RKE 集群角色
-
-如果 Rancher 安装在 RKE Kubernetes 集群上,该集群应具有三个节点,并且每个节点都应具有所有三个 Kubernetes 角色,分别是 etcd,controlplane 和 worker。
-
-### Rancher Server 和下游 Kubernetes 集群的 RKE 集群架构对比
-
-我们对 Rancher Server 集群上 RKE 节点角色建议,与对运行你的应用和服务的下游集群的建议相反。
-
-在配置下游 Kubernetes 集群时,Rancher 使用 RKE 作为创建下游 Kubernetes 集群的工具。注意:Rancher 将在未来的版本中添加配置下游 K3s 集群的功能。
-
-我们建议下游 Kubernetes 集群中的每个节点都只分配一个角色,以确保稳定性和可扩展性。
-
-
-
-RKE 每个角色至少需要一个节点,但并不强制每个节点只能有一个角色。但是,我们建议为运行应用的集群中的每个节点,使用单独的角色,以保证在服务拓展时,worker 节点上的工作负载不影响 Kubernetes master 或集群的数据。
-
-以下是我们对下游集群的最低配置建议:
-
-- **三个仅使用 etcd 角色的节点** ,以在三个节点中其中一个发生故障时,仍能保障集群的高可用性。
-- **两个只有 controlplane 角色的节点** ,以保证 master 组件的高可用性。
-- **一个或多个只有 worker 角色的节点**,用于运行 Kubernetes 节点组件,以及你部署的服务或应用的工作负载。
-
-在设置 Rancher Server 时,在三个节点上使用全部这三个角色也是安全的,因为:
-
-* 它允许一个 `etcd` 节点故障。
-* 它通过多个 `controlplane` 节点来维护 master 组件的多个实例。
-* 此集群上没有创建除 Rancher 之外的其他工作负载。
-
-由于 Rancher Server 集群中没有部署其他工作负载,因此在大多数情况下,这个集群都不需要使用我们出于可扩展性和可用性的考虑,而为下游集群推荐的架构。
-
-有关下游集群的最佳实践,请查看[生产环境清单](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/checklist-for-production-ready-clusters.md)或[最佳实践](../best-practices/best-practices.md)。

 ## 授权集群端点架构

 如果你使用[授权集群端点(ACE)](../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-授权集群端点),我们建议你创建一个指向负载均衡器的 FQDN,这个负载均衡器把流量转到所有角色为 `controlplane` 的节点。

+5 -13
@@ -39,7 +39,7 @@ Rancher 使用 [ServiceAccount](https://kubernetes.io/docs/tasks/configure-pod-c
 - 检测下游集群中的资源变化
 - 将下游集群的当前状态变更到目标状态
 - 配置集群和项目的访问控制策略
-- 通过调用所需的 Docker Machine 驱动和 Kubernetes 引擎(例如 RKE 和 GKE)来配置集群
+- 通过调用所需的 Docker Machine 驱动和 Kubernetes 引擎(例如,GKE)来配置集群

 默认情况下,Cluster Controller 连接到 Cluster Agent,Rancher 才能与下游集群通信。如果 Cluster Agent 不可用,Cluster Controller 可以连接到 [Node Agent](#3-node-agents)。

@@ -60,7 +60,7 @@ Cluster Agent,也叫做 `cattle-cluster-agent`,是运行在下游集群中

 授权集群端点(ACE)可连接到下游集群的 Kubernetes API Server,而不用通过 Rancher 认证代理调度请求。

-> 授权集群端点仅适用于 Rancher 启动的 Kubernetes 集群,即只适用于 Rancher [使用 RKE](../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) 来配置的集群。它不适用于导入的集群,也不适用于托管在 Kubernetes 提供商中的集群(例如 Amazon 的 EKS)。
+> 授权集群端点仅适用于 Rancher 启动的 Kubernetes 集群,即 [Rancher 配置的集群](../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md)。它不适用于导入的集群,也不适用于托管在 Kubernetes 提供商中的集群(例如 Amazon 的 EKS)。

 授权集群端点的主要用途:

@@ -81,11 +81,9 @@ Cluster Agent,也叫做 `cattle-cluster-agent`,是运行在下游集群中

 维护、排除问题和升级集群需要用到以下文件,请妥善保管这些文件:

-- `rancher-cluster.yml`:RKE 集群配置文件。
-- `kube_config_rancher-cluster.yml`:集群的 Kubeconfig 文件,包含完全访问集群的凭证。如果 Rancher 出现问题时,你可以使用此文件认证由 Rancher 启动的 Kubernetes 集群。
-- `rancher-cluster.rkestate`:Kubernetes 集群状态文件,文件包含用于完全访问集群的凭证。注意:仅在使用 RKE v0.2.0 或更高版本时,才会创建此该文件。
-
-> **注意**:后两个文件名中的 `rancher-cluster` 部分取决于你命名 RKE 集群配置文件的方式。
+- `config.yaml`: The RKE2 and K3s cluster configuration file.
+- `rke2.yaml` or `k3s.yaml`: The Kubeconfig file for your RKE2 or K3s cluster. This file contains credentials for full access to the cluster. You can use this file to authenticate with a Rancher-launched Kubernetes cluster if Rancher goes down.

 有关在没有 Rancher 认证代理和其他配置选项的情况下连接到集群的更多信息,请参见 [kubeconfig 文件](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md)。

@@ -97,13 +95,7 @@ Rancher 使用什么工具配置下游集群,取决于集群的类型。

 Rancher 可以动态启动云上(如 Amazon EC2、DigitalOcean、Azure 或 vSphere 等)的节点,然后在节点上安装 Kubernetes。

-Rancher 使用 [RKE](https://github.com/rancher/rke) 和 [docker-machine](https://github.com/rancher/machine) 来配置这类型的集群。
-
-### Rancher 为自定义节点启动 Kubernetes
-
-在配置此类集群时,Rancher 会在现有节点上安装 Kubernetes,从而创建自定义集群。
-
-Rancher 使用 [RKE](https://github.com/rancher/rke) 来启动此类集群。
+Rancher 使用 [docker-machine](https://github.com/rancher/machine) 来配置这类型的集群。

 ### 托管的 Kubernetes 提供商

-7
@@ -12,7 +12,6 @@ Rancher 为每个受支持的 Rancher 版本的 Kubernetes 发行版提供了特

 Rancher 使用以下 Kubernetes 发行版:

-- [**RKE**](https://rancher.com/docs/rke/latest/en/)(Rancher Kubernetes Engine)是经过 CNCF 认证的 Kubernetes 发行版,完全在 Docker 容器中运行。
 - [**RKE2**](https://docs.rke2.io/) 是一个完全合规的 Kubernetes 发行版,专注于安全和合规性。
 - [**K3s**](https://docs.k3s.io/) 是一个完全合规的,轻量级 Kubernetes 发行版。它易于安装,内存需求只有上游 Kubernetes 的一半,所有组件都在一个小于 100 MB 的二进制文件中。

@@ -22,12 +21,6 @@ Rancher 使用以下 Kubernetes 发行版:

 每个自我评估指南都附有强化指南。这些指南与列出的 Rancher 版本一起进行了测试。每个自我评估指南都在特定的 Kubernetes 版本和 CIS Benchmark 版本上进行了测试。如果 CIS Benchmark 尚未针对你的 Kubernetes 版本进行验证,你可以使用现有指南,直到添加适合你的版本的指南。

-### RKE 指南
-
-| Kubernetes 版本 | CIS Benchmark 版本 | 自我评估指南 | 加固指南 |
-|--------------------|-----------------------|-----------------------|------------------|
-| Kubernetes v1.25/v1.26/v1.27 | CIS v1.7 | [链接](rke1-hardening-guide/rke1-self-assessment-guide-with-cis-v1.7-k8s-v1.25-v1.26-v1.27.md) | [链接](rke1-hardening-guide/rke1-hardening-guide.md) |
-
 ### RKE2 指南

 | 类型 | Kubernetes 版本 | CIS Benchmark 版本 | 自我评估指南 | 加固指南 |
+1 -1
@@ -67,7 +67,7 @@ The Rancher hardening guides are based on the <a href="https://www.cisecurity.org/benchmark/kuberne
The Benchmark self-assessment is a companion to the Rancher security hardening guide. While the hardening guide shows you how to harden your cluster, the Benchmark guide is meant to help you evaluate the security level of a hardened cluster.

Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. This guide walks through the various controls and provides updated example commands for auditing compliance in Rancher-created clusters. You can download the original Benchmark documents from the [CIS website](https://www.cisecurity.org/benchmark/kubernetes/).
This guide walks through the various controls and provides updated example commands for auditing compliance in Rancher-created clusters. You can download the original Benchmark documents from the [CIS website](https://www.cisecurity.org/benchmark/kubernetes/).

Each version of the Rancher self-assessment guide corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark.
@@ -20,9 +20,7 @@ Rancher deploys Rancher-Webhook as a separate deployment and service in the local
| Rancher Version | Webhook Version | Availability in Prime | Availability in Community |
|-----------------|-----------------|-----------------------|---------------------------|
| v2.11.2 | v0.7.2 | ✓ | ✓ |
| v2.12.0 | v0.8.0 | ✗ | ✓ |
| v2.11.1 | v0.7.1 | ✓ | ✓ |
| v2.11.0 | v0.7.0 | ✗ | ✓ |

## Why do we need it?
@@ -1,10 +1,10 @@
<!-- releaseTask -->
The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity levels. This data was collected in June 2025.
The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity levels. This data was collected in July 2025.

| Provider | Project | Stars | Forks | Contributors |
| ---- | ---- | ---- | ---- | ---- |
| Canal | https://github.com/projectcalico/canal | 720 | 99 | 20 |
| Flannel | https://github.com/flannel-io/flannel | 9.2k | 2.9k | 239 |
| Flannel | https://github.com/flannel-io/flannel | 9.2k | 2.9k | 242 |
| Calico | https://github.com/projectcalico/calico | 6.5k | 1.4k | 378 |
| Calico | https://github.com/projectcalico/calico | 6.7k | 1.5k | 380 |
| Weave | https://github.com/weaveworks/weave | 6.6k | 681 | 84 |
| Cilium | https://github.com/cilium/cilium | 21.9k | 3.3k | 948 |
| Cilium | https://github.com/cilium/cilium | 21.1k | 3.3k | 959 |
File diff suppressed because it is too large
@@ -5,6 +5,27 @@ title: Rancher Documentation Versions
<!-- releaseTask -->

### Current Versions

Here you can find links to supporting documentation for the current released version of Rancher v2.12, and its availability for [Rancher Prime](/v2.12/getting-started/quick-start-guides/deploy-rancher-manager/prime) and the Community version of Rancher:

<table>
<tr>
<th>Version</th>
<th>Documentation</th>
<th>Release Notes</th>
<th>Support Matrix</th>
<th>Prime</th>
<th>Community</th>
</tr>
<tr>
<td><b>v2.12.0</b></td>
<td><a href="https://ranchermanager.docs.rancher.com/v2.12">Documentation</a></td>
<td><a href="https://github.com/rancher/rancher/releases/tag/v2.12.0">Release Notes</a></td>
<td><center>N/A</center></td>
<td><center>N/A</center></td>
<td><center>✓</center></td>
</tr>
</table>
Here you can find links to supporting documentation for the current released version of Rancher v2.11, and its availability for [Rancher Prime](/v2.11/getting-started/quick-start-guides/deploy-rancher-manager/prime) and the Community version of Rancher:

<table>
@@ -12,6 +12,14 @@ The steps to set up RKE, RKE2, or K3s are shown below.

For convenience, export the IP address and port of your proxy into an environment variable and set up the HTTP_PROXY variables for your current shell on every node:
:::caution

The `NO_PROXY` environment variable is not standardized, and the accepted format of the value can differ between applications. When configuring the `NO_PROXY` variable for Rancher, the value must adhere to the format expected by Golang.

Specifically, the value should be a comma-delimited string which only contains IP addresses, CIDR notation, domain names, or special DNS labels (e.g. `*`). For a full description of the expected value format, refer to the [**upstream Golang documentation**](https://pkg.go.dev/golang.org/x/net/http/httpproxy#Config)

:::
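For illustration, here is a sketch of a Golang-compatible `NO_PROXY` value; the entries are placeholders for your own addresses and domains:

```bash
# Comma-delimited: plain IPs, CIDR ranges, domain names, and the special "*"
# label are accepted; spaces and other separators are not.
export NO_PROXY="127.0.0.1,10.0.0.0/8,cattle-system.svc,.example.com"
```
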
```
export proxy_host="10.0.0.5:8888"
export HTTP_PROXY=http://${proxy_host}
@@ -56,6 +56,15 @@ GKE Autopilot clusters aren't supported. See [Compare GKE Autopilot and Standard
9. If you are using self-signed certificates, you will receive the message `certificate signed by unknown authority`. To work around this validation, copy the command starting with `curl` displayed in Rancher to your clipboard. Then run the command on a node where kubeconfig is configured to point to the cluster you want to import.
10. When you finish running the command(s) on your node, click **Done**.
:::important

The `NO_PROXY` environment variable is not standardized, and the accepted format of the value can differ between applications. When configuring the `NO_PROXY` variable in Rancher, the value must adhere to the format expected by Golang.

Specifically, the value should be a comma-delimited string which only contains IP addresses, CIDR notation, domain names, or special DNS labels (e.g. `*`). For a full description of the expected value format, refer to the [**upstream Golang documentation**](https://pkg.go.dev/golang.org/x/net/http/httpproxy#Config)

:::

**Result:**

- Your cluster is registered and assigned a state of **Pending**. Rancher is deploying resources to manage your cluster.
@@ -295,4 +304,4 @@ This section lists some of the most common errors that may occur when importing

```sh
az aks update --resource-group <resource-group> --name <cluster-name> --enable-local-accounts
```
@@ -22,6 +22,14 @@ For private nodes or private clusters, the environment variables need to be set

When adding Fleet agent environment variables for the proxy, replace <PROXY_IP> with your private proxy IP.
:::caution

The `NO_PROXY` environment variable is not standardized, and the accepted format of the value can differ between applications. When configuring the `NO_PROXY` variable in Rancher, the value must adhere to the format expected by Golang.

Specifically, the value should be a comma-delimited string which only contains IP addresses, CIDR notation, domain names, or special DNS labels (e.g. `*`). For a full description of the expected value format, refer to the [**upstream Golang documentation**](https://pkg.go.dev/golang.org/x/net/http/httpproxy#Config)

:::

| Variable Name | Value |
|------------------|--------|
| `HTTP_PROXY` | http://<PROXY_IP>:8888 |
@@ -10,12 +10,11 @@ If you operate Rancher behind a proxy and you want to access services through th

Make sure `NO_PROXY` contains the network addresses, network address ranges and domains that should be excluded from using the proxy.
| Environment variable | Purpose |
| -------------------- | ----------------------------------------------------------------------------------------------------------------------- |
| HTTP_PROXY | Proxy address to use when initiating HTTP connection(s) |
| HTTPS_PROXY | Proxy address to use when initiating HTTPS connection(s) |
| NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s) |
| NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s). <br/><br/> The value must be a comma-delimited string which contains IP addresses, CIDR notation, domain names, or special DNS labels (*). For a full description of the expected value format, refer to the [**upstream Golang documentation**](https://pkg.go.dev/golang.org/x/net/http/httpproxy#Config) |
:::note Important:

NO_PROXY must be in uppercase to use network range (CIDR) notation.
@@ -62,4 +61,4 @@ acl SSL_ports port 2376

acl Safe_ports port 22 # ssh
acl Safe_ports port 2376 # docker port
```
@@ -12,6 +12,14 @@ The steps to set up RKE, RKE2, or K3s are shown below.

For convenience, export the IP address and port of your proxy into an environment variable and set up the HTTP_PROXY variables for your current shell on every node:
:::caution

The `NO_PROXY` environment variable is not standardized, and the accepted format of the value can differ between applications. When configuring the `NO_PROXY` variable for Rancher, the value must adhere to the format expected by Golang.

Specifically, the value should be a comma-delimited string which only contains IP addresses, CIDR notation, domain names, or special DNS labels (e.g. `*`). For a full description of the expected value format, refer to the [**upstream Golang documentation**](https://pkg.go.dev/golang.org/x/net/http/httpproxy#Config)

:::
```
export proxy_host="10.0.0.5:8888"
export HTTP_PROXY=http://${proxy_host}
@@ -53,6 +53,10 @@ You can obtain `<RANCHER_CONTAINER_TAG>` and `<RANCHER_CONTAINER_NAME>` by loggi

## Upgrade
:::danger

Rancher upgrades to version 2.12.0 and later will be blocked if any RKE1-related resources are detected, as the Rancher Kubernetes Engine (RKE/RKE1) is end of life as of **July 31, 2025**. For detailed cleanup and recovery steps, refer to the [RKE1 Resource Validation and Upgrade Requirements in Rancher v2.12](#rke1-resource-validation-and-upgrade-requirements-in-rancher-v212).

:::

During upgrade, you create a copy of the data from your current Rancher container and a backup in case something goes wrong. Then you deploy the new version of Rancher in a new container using your existing data.

### 1. Create a copy of the data from your Rancher server container
@@ -388,6 +392,45 @@ See [Restoring Cluster Networking](https://github.com/rancher/rancher-docs/tree/

Remove the previous Rancher server container. If you only stop the previous Rancher server container (and don't remove it), the container may restart after the next server reboot.
## RKE1 Resource Validation and Upgrade Requirements in Rancher v2.12

Rancher v2.12.0 and later no longer supports the Rancher Kubernetes Engine (RKE/RKE1). During upgrade, Rancher validates the cluster resources and blocks the upgrade if any RKE1-related resources are detected.

This validation affects the following resource types:

- Clusters with `rkeConfig` (`clusters.management.cattle.io`)
- NodeTemplates (`nodetemplates.management.cattle.io`)
- ClusterTemplates (`clustertemplates.management.cattle.io`)
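Before attempting the upgrade, you can check for such leftovers yourself. The commands below are a minimal sketch; the resource type names come from the list above, and your output will differ:

```bash
# List any remaining RKE1-era resources that would block the upgrade.
kubectl get clusters.management.cattle.io
kubectl get nodetemplates.management.cattle.io --all-namespaces
kubectl get clustertemplates.management.cattle.io --all-namespaces
```
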
This is particularly relevant for single-node Docker installations, where Rancher is not running during the upgrade. In such cases, controllers are not available to automatically clean up deprecated resources, and the upgrade process will fail early with an error listing the blocking resources.
### 1. Pre-Upgrade (Recommended)

Before upgrading, while Rancher is still running:

- Run the `pre-upgrade-hook` cleanup script to delete all RKE1 clusters and templates. You can find the script in the Rancher GitHub repository: [pre-upgrade-hook.sh](https://github.com/rancher/rancher/blob/v2.12.0/chart/scripts/pre-upgrade-hook.sh).
- This allows Rancher to clean up associated resources and finalizers.
### 2. Post-Upgrade Failure Due to Residual RKE1 Resources

If the upgrade to Rancher v2.12.0 or later is attempted without prior cleanup of RKE1 resources:

- The upgrade will fail and display an error listing the resource names that are preventing the upgrade.
- This occurs because Rancher includes validation to detect and block upgrades when unsupported RKE1 resources are still present.
- To proceed, [roll back](#rolling-back) to the previous Rancher version, delete the identified resources, and then retry after [manual cleanup](#manual-cleanup-after-rollback).
:::note Helm-based Rancher

Helm-based Rancher installations are not affected by this issue, as Rancher remains available during the upgrade and can perform resource cleanup as needed.

:::
### Manual Cleanup After Rollback

Perform the following steps after rolling back to a previous Rancher version:

- **Manually delete** the resources listed in the upgrade error message (e.g., RKE1 clusters, NodeTemplates, ClusterTemplates).
- If deletion is blocked due to **finalizers**, edit the resources and remove the `metadata.finalizers` field, as shown in the sketch after this list.
- If a **validating webhook** prevents deletion (e.g., for the `system-project`), refer to the [Bypassing the Webhook](../../../../reference-guides/rancher-webhook.md#bypassing-the-webhook) documentation.
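A minimal sketch of clearing finalizers with a merge patch; the resource type shown and the `<name>`/`<namespace>` placeholders are hypothetical and should come from the upgrade error message:

```bash
# Remove all finalizers so the stuck resource can be deleted, then delete it.
kubectl patch nodetemplates.management.cattle.io <name> -n <namespace> \
  --type=merge -p '{"metadata":{"finalizers":null}}'
kubectl delete nodetemplates.management.cattle.io <name> -n <namespace>
```
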
## Rolling Back

If your upgrade does not complete successfully, you can roll back Rancher server and its data back to its last healthy state. For more information, see [Docker Rollback](roll-back-docker-installed-rancher.md).
@@ -57,6 +57,14 @@ GKE Autopilot clusters aren't supported. See [Compare GKE Autopilot and Standard
9. If you are using self-signed certificates, you will receive the message `certificate signed by unknown authority`. To work around this validation, copy the command starting with `curl` displayed in Rancher to your clipboard. Then run the command on a node where kubeconfig is configured to point to the cluster you want to import.
10. When you finish running the command(s) on your node, click **Done**.
:::important

The `NO_PROXY` environment variable is not standardized, and the accepted format of the value can differ between applications. When configuring the `NO_PROXY` variable in Rancher, the value must adhere to the format expected by Golang.

Specifically, the value should be a comma-delimited string which only contains IP addresses, CIDR notation, domain names, or special DNS labels (e.g. `*`). For a full description of the expected value format, refer to the [**upstream Golang documentation**](https://pkg.go.dev/golang.org/x/net/http/httpproxy#Config)

:::
**Result:**

- Your cluster is registered and assigned a state of **Pending**. Rancher is deploying resources to manage your cluster.
@@ -22,6 +22,14 @@ For private nodes or private clusters, the environment variables need to be set

When adding Fleet agent environment variables for the proxy, replace <PROXY_IP> with your private proxy IP.
:::caution

The `NO_PROXY` environment variable is not standardized, and the accepted format of the value can differ between applications. When configuring the `NO_PROXY` variable in Rancher, the value must adhere to the format expected by Golang.

Specifically, the value should be a comma-delimited string which only contains IP addresses, CIDR notation, domain names, or special DNS labels (e.g. `*`). For a full description of the expected value format, refer to the [**upstream Golang documentation**](https://pkg.go.dev/golang.org/x/net/http/httpproxy#Config)

:::

| Variable Name | Value |
|------------------|--------|
| `HTTP_PROXY` | http://<PROXY_IP>:8888 |
@@ -10,12 +10,11 @@ If you operate Rancher behind a proxy and you want to access services through th

Make sure `NO_PROXY` contains the network addresses, network address ranges and domains that should be excluded from using the proxy.
| Environment variable | Purpose |
| -------------------- | ----------------------------------------------------------------------------------------------------------------------- |
| HTTP_PROXY | Proxy address to use when initiating HTTP connection(s) |
| HTTPS_PROXY | Proxy address to use when initiating HTTPS connection(s) |
| NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s) |
| NO_PROXY | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s). <br/><br/> The value must be a comma-delimited string which contains IP addresses, CIDR notation, domain names, or special DNS labels (*). For a full description of the expected value format, refer to the [**upstream Golang documentation**](https://pkg.go.dev/golang.org/x/net/http/httpproxy#Config) |
:::note Important:

NO_PROXY must be in uppercase to use network range (CIDR) notation.
@@ -62,4 +61,4 @@ acl SSL_ports port 2376

acl Safe_ports port 22 # ssh
acl Safe_ports port 2376 # docker port
```
@@ -29,8 +29,7 @@ kubectl patch feature ext-kubeconfigs -p '{"spec":{"value":false}}'

## Creating a Kubeconfig
Admins can delete any Kubeconfig, while regular users can only delete their own. When a Kubeconfig is deleted, the kubeconfig tokens are also deleted.
Only a **valid and active** Rancher user can create a Kubeconfig. For example, trying to create a Kubeconfig using a `system:admin` service account will lead to an error:
E.g. using a service account `system:admin` will lead to the following error:
```bash
kubectl create -o jsonpath='{.status.value}' -f -<<EOF
@@ -0,0 +1,114 @@
---
title: Tokens
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/api/workflows/tokens"/>
</head>

## Feature Flag

The Tokens Public API is available for Rancher v2.12.0 and later, and is enabled by default. You can disable the Tokens Public API by setting the `ext-tokens` feature flag to `false` as shown in the example `kubectl` command below:

```sh
kubectl patch feature ext-tokens -p '{"spec":{"value":false}}'
```

## Creating a Token
Only a **valid and active** Rancher user can create a Token. Otherwise, an error (`Error from server (Forbidden)...`) is displayed when you attempt to create one:

```bash
kubectl create -o jsonpath='{.status.value}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: Token
EOF
Error from server (Forbidden): error when creating "STDIN": tokens.ext.cattle.io is forbidden: user system:admin is not a Rancher user
```
A Token is always created for the user making the request. Attempting to create a Token for a different user, by specifying a different `spec.userID`, is forbidden and will fail.

- The `spec.description` field can be set to an arbitrary human-readable description of the Token's purpose. The default value is empty.
- The `spec.kind` field can be set to the kind of Token. The value `session` indicates a login Token. All other values, including the default empty string, indicate a derived Token.
- The `metadata.name` and `metadata.generateName` fields are ignored; the name of the new Token is automatically generated with the prefix `token-`.

```bash
kubectl create -o jsonpath='{.status.value}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: Token
spec:
  description: My Token
EOF
```

- If `spec.ttl` is not specified, the Token is created with the expiration time defined in the `auth-token-max-ttl-minutes` setting. The default expiration time is 90 days. If `spec.ttl` is specified, it must be greater than 0 and less than or equal to the value of the `auth-token-max-ttl-minutes` setting, expressed in milliseconds.

```bash
kubectl create -o jsonpath='{.status.value}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: Token
spec:
  ttl: 7200000 # 2 hours
EOF
```

## Listing Tokens

Listing previously generated Tokens can help clean up tokens that are no longer needed (e.g., tokens that were issued only temporarily). Admins can list all Tokens, while regular users can only see their own.

```sh
kubectl get tokens.ext.cattle.io
NAME          KIND   TTL   AGE
token-chjc9          90d   18s
token-6fzgj          90d   16s
token-8nbrm          90d   14s
```
Use `-o wide` to get more details:

```sh
kubectl get tokens.ext.cattle.io -o wide
NAME          USER         KIND   TTL   AGE   DESCRIPTION
token-chjc9   user-jtghh          90d   24s   example
token-6fzgj   user-jtghh          90d   22s   box
token-8nbrm   user-jtghh          90d   20s   jinx
```
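Since the motivation for listing is cleanup, note that `kubectl delete` accepts several names at once. A small sketch, reusing the names from the sample output above:

```sh
# Delete multiple no-longer-needed tokens in one command.
kubectl delete tokens.ext.cattle.io token-6fzgj token-8nbrm
```
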
## Viewing a Token

Admins can get any Token, while regular users can only get their own.

```sh
kubectl get tokens.ext.cattle.io token-chjc9
NAME          KIND   TTL   AGE
token-chjc9          90d   18s
```

Use `-o wide` to get more details:

```sh
kubectl get tokens.ext.cattle.io token-chjc9 -o wide
NAME          USER         KIND   TTL   AGE   DESCRIPTION
token-chjc9   user-jtghh          90d   24s   example
```
## Deleting a Token

Admins can delete any Token, while regular users can only delete their own.

```sh
kubectl delete tokens.ext.cattle.io token-chjc9
token.ext.cattle.io "token-chjc9" deleted
```

## Updating a Token

Only the `spec.description`, `spec.ttl`, and `spec.enabled` fields can be updated; all other `spec` fields are immutable. Admins can extend a Token's `spec.ttl`, while regular users can only reduce it.

An example `kubectl` command to edit a Token:

```sh
kubectl edit tokens.ext.cattle.io token-zp786
```
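A non-interactive alternative (a sketch, reusing the Token name from the example above) is to patch one of the mutable fields directly, for instance to disable a Token without deleting it:

```sh
# Merge-patch a mutable field; per the list above, only spec.description,
# spec.ttl, and spec.enabled accept updates.
kubectl patch tokens.ext.cattle.io token-zp786 --type=merge -p '{"spec":{"enabled":false}}'
```
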
@@ -39,7 +39,6 @@ User Interface | https://github.com/rancher/dashboard/ | This repository is the
(Rancher) Docker Machine | https://github.com/rancher/machine | This repository is the source of the Docker Machine binary used when using Node Drivers. This is a fork of the `docker/machine` repository.
machine-package | https://github.com/rancher/machine-package | This repository is used to build the Rancher Docker Machine binary.
kontainer-engine | https://github.com/rancher/kontainer-engine | This repository is the source of kontainer-engine, the tool to provision hosted Kubernetes clusters.
RKE repository | https://github.com/rancher/rke | This repository is the source of Rancher Kubernetes Engine, the tool to provision Kubernetes clusters on any machine.
CLI | https://github.com/rancher/cli | This repository is the source code for the Rancher CLI used in Rancher 2.x.
(Rancher) Helm repository | https://github.com/rancher/helm | This repository is the source of the packaged Helm binary. This is a fork of the `helm/helm` repository.
loglevel repository | https://github.com/rancher/loglevel | This repository is the source of the loglevel binary, used to dynamically change log levels.
@@ -109,27 +108,6 @@ Please remove any sensitive data as it will be publicly viewable.
-l app=rancher \
--timestamps=true
```
- Docker install using `docker` on each of the nodes in the RKE cluster

  ```
  docker logs \
  --timestamps \
  $(docker ps | grep -E "rancher/rancher@|rancher_rancher" | awk '{ print $1 }')
  ```

- Kubernetes Install with RKE Add-On

  :::note

  Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml` if the Rancher server is installed on a Kubernetes cluster) or are using the embedded kubectl via the UI.

  :::

  ```
  kubectl -n cattle-system \
  logs \
  --timestamps=true \
  -f $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name="cattle-server") | .metadata.name')
  ```

- System logging (these might not all exist, depending on operating system)
  - `/var/log/messages`
  - `/var/log/syslog`
@@ -16,9 +16,7 @@ Rancher will publish deprecated features as part of the [release notes](https://

| Patch Version | Release Date |
|---------------|---------------|
| [2.11.2](https://github.com/rancher/rancher/releases/tag/v2.11.2) | May 22, 2025 |
| [2.12.0](https://github.com/rancher/rancher/releases/tag/v2.12.0) | July 31, 2025 |
| [2.11.1](https://github.com/rancher/rancher/releases/tag/v2.11.1) | Apr 24, 2025 |
| [2.11.0](https://github.com/rancher/rancher/releases/tag/v2.11.0) | Mar 31, 2025 |

## What can I expect when a feature is marked for deprecation?
@@ -10,7 +10,15 @@ Once the infrastructure is ready, you can continue with setting up a Kubernetes

The steps to set up RKE, RKE2, or K3s are shown below.

For convenience, export the IP address and port of your proxy into an environment variable and set up the HTTP_PROXY variables for your current shell on every node:
For convenience, export the IP address and port of your proxy into an environment variable and set up the `HTTP_PROXY` variables for your current shell on every node:
:::caution

The `NO_PROXY` environment variable is not standardized, and the accepted format of the value can differ between applications. When configuring the `NO_PROXY` variable for Rancher, the value must adhere to the format expected by Golang.

Specifically, the value should be a comma-delimited string which only contains IP addresses, CIDR notation, domain names, or special DNS labels (e.g. `*`). For a full description of the expected value format, refer to the [**upstream Golang documentation**](https://pkg.go.dev/golang.org/x/net/http/httpproxy#Config)

:::
```
export proxy_host="10.0.0.5:8888"
@@ -53,6 +53,10 @@ You can obtain `<RANCHER_CONTAINER_TAG>` and `<RANCHER_CONTAINER_NAME>` by loggi

## Upgrade
:::danger

Rancher upgrades to version 2.12.0 and later will be blocked if any RKE1-related resources are detected, as the Rancher Kubernetes Engine (RKE/RKE1) is end of life as of **July 31, 2025**. For detailed cleanup and recovery steps, refer to the [RKE1 Resource Validation and Upgrade Requirements in Rancher v2.12](#rke1-resource-validation-and-upgrade-requirements-in-rancher-v212).

:::

During upgrade, you create a copy of the data from your current Rancher container and a backup in case something goes wrong. Then you deploy the new version of Rancher in a new container using your existing data.

### 1. Create a copy of the data from your Rancher server container
@@ -388,6 +392,45 @@ See [Restoring Cluster Networking](https://github.com/rancher/rancher-docs/tree/

Remove the previous Rancher server container. If you only stop the previous Rancher server container (and don't remove it), the container may restart after the next server reboot.
## RKE1 Resource Validation and Upgrade Requirements in Rancher v2.12

Rancher v2.12.0 and later no longer supports the Rancher Kubernetes Engine (RKE/RKE1). During upgrade, Rancher validates the cluster resources and blocks the upgrade if any RKE1-related resources are detected.

This validation affects the following resource types:

- Clusters with `rkeConfig` (`clusters.management.cattle.io`)
- NodeTemplates (`nodetemplates.management.cattle.io`)
- ClusterTemplates (`clustertemplates.management.cattle.io`)

This is particularly relevant for single-node Docker installations, where Rancher is not running during the upgrade. In such cases, controllers are not available to automatically clean up deprecated resources, and the upgrade process will fail early with an error listing the blocking resources.

### 1. Pre-Upgrade (Recommended)

Before upgrading, while Rancher is still running:

- Run the `pre-upgrade-hook` cleanup script to delete all RKE1 clusters and templates. You can find the script in the Rancher GitHub repository: [pre-upgrade-hook.sh](https://github.com/rancher/rancher/blob/v2.12.0/chart/scripts/pre-upgrade-hook.sh).
- This allows Rancher to clean up associated resources and finalizers.

### 2. Post-Upgrade Failure Due to Residual RKE1 Resources

If the upgrade to Rancher v2.12.0 or later is attempted without prior cleanup of RKE1 resources:

- The upgrade will fail and display an error listing the resource names that are preventing the upgrade.
- This occurs because Rancher includes validation to detect and block upgrades when unsupported RKE1 resources are still present.
- To proceed, [roll back](#rolling-back) to the previous Rancher version, delete the identified resources, and then retry after [manual cleanup](#manual-cleanup-after-rollback).

:::note Helm-based Rancher

Helm-based Rancher installations are not affected by this issue, as Rancher remains available during the upgrade and can perform resource cleanup as needed.

:::

### Manual Cleanup After Rollback

Perform the following steps after rolling back to a previous Rancher version:

- **Manually delete** the resources listed in the upgrade error message (e.g., RKE1 clusters, NodeTemplates, ClusterTemplates).
- If deletion is blocked due to **finalizers**, edit the resources and remove the `metadata.finalizers` field.
- If a **validating webhook** prevents deletion (e.g., for the `system-project`), refer to the [Bypassing the Webhook](../../../../reference-guides/rancher-webhook.md#bypassing-the-webhook) documentation.

## Rolling Back

If your upgrade does not complete successfully, you can roll back Rancher server and its data back to its last healthy state. For more information, see [Docker Rollback](roll-back-docker-installed-rancher.md).
Some files were not shown because too many files have changed in this diff