Merge release v2.13.0 to main (#2091)

* Sync main to v2.13.0 (#2065)

* It's bad form to ask users to pass something they just curled from the internet directly to sh

Updated the instructions for uninstalling the rancher-system-agent to use a temporary script file instead of piping directly to sh.
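The safer pattern the commit describes can be sketched as follows (the URL is a placeholder, not the actual script location):

```shell
# Download to a temporary file, review it, then run it --
# rather than piping curl straight to sh.
curl -fsSL -o /tmp/rancher-system-agent-uninstall.sh \
  https://example.com/system-agent-uninstall.sh   # placeholder URL
less /tmp/rancher-system-agent-uninstall.sh       # inspect before executing
sh /tmp/rancher-system-agent-uninstall.sh
rm /tmp/rancher-system-agent-uninstall.sh
```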

* doc(rancher-security): improve structure and content to latest, v2.13-preview and v2.12 (#2024)

- add Rancher Kubernetes Distributions (K3s/RKE2) Self-Assessment and Hardening Guide section
- add kubernetes cluster security best practices link to rancher-security section
- add k3s-selinux and update selinux-rpm details
- remove rhel/centos 7 support

Signed-off-by: Andy Pitcher <andy.pitcher@suse.com>

* Updating across supported versions and translations.

Signed-off-by: Sunil Singh <sunil.singh@suse.com>

---------

Signed-off-by: Andy Pitcher <andy.pitcher@suse.com>
Signed-off-by: Sunil Singh <sunil.singh@suse.com>
Co-authored-by: Tejeev <tj@rancher.com>
Co-authored-by: Andy Pitcher <andy.pitcher@suse.com>
Co-authored-by: Sunil Singh <sunil.singh@suse.com>

* Update roletemplate aggregation doc and version information

* Add versioned docs

* Remove ext token and kubeconfig feature flag sections and document bearer Token

* Update corresponding v2.13 pages

* update doc for pni in gke

* Adding reverted session idle information from PR 1653

Signed-off-by: Sunil Singh <sunil.singh@suse.com>

* [2.13.0] Add versions table entry

* [2.13.0] Add webhook version

* [2.13.0] Add CSP Adapter version

* [2.13.0] Add deprecated feature table entry

* [2.13.0] Update CNI popularity stats

* Update GKE Cluster Configuration for Project Network Isolation instructions

* Fix link and port to 2.13

* [2.13.0] Add Swagger JSON

* [v2.13.0] Add info about Azure AD Roles claims (#2079)

* Add info about Azure AD roles claims compatibility

* Apply suggestions from code review

Co-authored-by: Sunil Singh <sunil.singh@suse.com>

* Add suggestions to v2.13

---------

Co-authored-by: Sunil Singh <sunil.singh@suse.com>

* [2.13.0] Remove preview designation

* user public api docs (#2069)

* user public api docs

* Apply suggestions from code review

Co-authored-by: Andreas Kupries <akupries@suse.com>

* Apply suggestions from code review

Co-authored-by: Peter Matseykanets <pmatseykanets@gmail.com>

* explain plaintext is never stored

* add users 2.13 versioned docs

* remove extra ```

* Apply suggestions from code review

Co-authored-by: Lucas Saintarbor <lucas.saintarbor@suse.com>

* add space before code block

---------

Co-authored-by: Andreas Kupries <akupries@suse.com>
Co-authored-by: Peter Matseykanets <pmatseykanets@gmail.com>
Co-authored-by: Lucas Saintarbor <lucas.saintarbor@suse.com>

* support IPv6 (#2041)

* [v2.13.0] Add Configure GitHub App page (#2081)

* Add Configure GitHub App page

* Apply suggestions from code review

Co-authored-by: Billy Tat <btat@suse.com>

* Fix header/GH URL & add suggestions to v2.13

* Apply suggestions from code review

Co-authored-by: Petr Kovar <pknbe@volny.cz>

* Apply suggestions from code review to v2.13

* Add note describing why to use Installation ID

* Apply suggestions from code review

Co-authored-by: Billy Tat <btat@suse.com>

---------

Co-authored-by: Billy Tat <btat@suse.com>
Co-authored-by: Petr Kovar <pknbe@volny.cz>

* [v2.13.0] Add info about Generic OIDC Custom Mapping (#2080)

* Add info about Generic OIDC Custom Mapping

* Apply suggestions from code review

Co-authored-by: Sunil Singh <sunil.singh@suse.com>
Co-authored-by: Billy Tat <btat@suse.com>

* Apply suggestions from code review

Co-authored-by: Sunil Singh <sunil.singh@suse.com>
Co-authored-by: Billy Tat <btat@suse.com>

* Add suggestions to v2.13

* Remove repetitive statement in intro

* Move Prereq intro/note to appropriate section

* Fix formatting, UI typo, add Custom Claims section under Configuration Reference section

* Add section about how a custom groups claim works / note about search limitations for groups in RBAC

---------

Co-authored-by: Sunil Singh <sunil.singh@suse.com>
Co-authored-by: Billy Tat <btat@suse.com>

* [v2.13.0] Add info about OIDC SLO support (#2086)

* Add shared file covering OIDC SLO support to OIDC auth pages

* Add How to get the End Session Endpoint steps

* Add generic curl example to retrieve end_session_endpoint

* [2.13.0] Bump release date

---------

Signed-off-by: Andy Pitcher <andy.pitcher@suse.com>
Signed-off-by: Sunil Singh <sunil.singh@suse.com>
Co-authored-by: Lucas Saintarbor <lucas.saintarbor@suse.com>
Co-authored-by: Tejeev <tj@rancher.com>
Co-authored-by: Andy Pitcher <andy.pitcher@suse.com>
Co-authored-by: Sunil Singh <sunil.singh@suse.com>
Co-authored-by: Jonathan Crowther <jonathan.crowther@suse.com>
Co-authored-by: Peter Matseykanets <peter.matseykanets@suse.com>
Co-authored-by: Petr Kovar <petr.kovar@suse.com>
Co-authored-by: Krunal Hingu <krunal.hingu222@gmail.com>
Co-authored-by: Raul Cabello Martin <raul.cabello@suse.com>
Co-authored-by: Andreas Kupries <akupries@suse.com>
Co-authored-by: Peter Matseykanets <pmatseykanets@gmail.com>
Co-authored-by: Jack Luo <jiaqi.luo@suse.com>
Co-authored-by: Petr Kovar <pknbe@volny.cz>
This commit is contained in:
Billy Tat
2025-11-25 10:51:39 -08:00
committed by GitHub
parent 94197793cb
commit 24fc5a657c
87 changed files with 11352 additions and 564 deletions


@@ -15,4 +15,4 @@ At this time, not all Rancher resources are available through the Rancher Kubern
import ApiDocMdx from '@theme/ApiDocMdx';
<ApiDocMdx id="rancher-api-v2-13" />


@@ -60,17 +60,23 @@ This feature affects all tokens which include, but are not limited to, the follo
These global settings affect Rancher token behavior.
| Setting | Description |
| ------- | ----------- |
| [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes) | TTL in minutes on a user auth session token. |
| [`auth-user-session-idle-ttl-minutes`](#auth-user-session-idle-ttl-minutes) | TTL in minutes on a user auth session token, without user activity. |
| [`kubeconfig-default-token-ttl-minutes`](#kubeconfig-default-token-ttl-minutes) | Default TTL applied to all kubeconfig tokens except for tokens [generated by Rancher CLI](#disable-tokens-in-generated-kubeconfigs). |
| [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes) | Max TTL for all tokens except those controlled by [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes). |
| [`kubeconfig-generate-token`](#kubeconfig-generate-token) | If true, automatically generate tokens when a user downloads a kubeconfig. |
### auth-user-session-ttl-minutes
Time to live (TTL) duration in minutes, used to determine when a user auth session token expires. When expired, the user must log in and obtain a new token. This setting is not affected by [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes). Session tokens are created when a user logs into Rancher.
### auth-user-session-idle-ttl-minutes
Time to live (TTL), in minutes, without user activity for login session tokens.
By default, [`auth-user-session-idle-ttl-minutes`](#auth-user-session-idle-ttl-minutes) is set to the same value as [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes) (for backward compatibility). It must never exceed the value of `auth-user-session-ttl-minutes`.
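As a sketch (assuming cluster-admin access to the Rancher management cluster; the value `240` is illustrative), the idle TTL can be changed by patching the corresponding `Setting` resource:

```shell
# Illustrative only: set the idle TTL to 4 hours (240 minutes).
# Keep it at or below auth-user-session-ttl-minutes.
kubectl patch setting.management.cattle.io auth-user-session-idle-ttl-minutes \
  --type=merge -p '{"value":"240"}'
```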
### kubeconfig-default-token-ttl-minutes
Time to live (TTL) duration in minutes, used to determine when a kubeconfig token expires. When the token is expired, the API rejects the token. This setting can't be larger than [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes). This setting applies to tokens generated in a requested kubeconfig file, except for tokens [generated by Rancher CLI](#disable-tokens-in-generated-kubeconfigs). As of Rancher v2.8, the default duration is `43200`, which means that tokens expire in 30 days.


@@ -20,14 +20,6 @@ To get a description of the fields and structure of the Kubeconfig resource, run
kubectl explain kubeconfigs.ext.cattle.io
```
## Feature Flag
The Kubeconfigs Public API is available since Rancher v2.12.0 and is enabled by default. It can be disabled by setting the `ext-kubeconfigs` feature flag to `false`.
```sh
kubectl patch feature ext-kubeconfigs -p '{"spec":{"value":false}}'
```
## Creating a Kubeconfig
Only a **valid and active** Rancher user can create a Kubeconfig. For example, trying to create a Kubeconfig using a `system:admin` service account will lead to an error:


@@ -20,20 +20,14 @@ To get a description of the fields and structure of the Token resource, run:
kubectl explain tokens.ext.cattle.io
```
## Feature Flag
The Tokens Public API is available for Rancher v2.12.0 and later, and is enabled by default. You can disable the Tokens Public API by setting the `ext-tokens` feature flag to `false` as shown in the example `kubectl` command below:
```sh
kubectl patch feature ext-tokens -p '{"spec":{"value":false}}'
```
## Creating a Token
:::caution
The Token value is only returned once in the `status.value` field.
:::
Since Rancher v2.13.0, the `status.bearerToken` field contains a fully formed, ready-to-use Bearer token that can be used to authenticate to the [Rancher API](../v3-rancher-api-guide.md).
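For illustration (the Rancher hostname is a placeholder, and `$BEARER_TOKEN` is assumed to have been captured from `status.bearerToken` when the Token was created), the value can be passed directly in an `Authorization` header:

```shell
# $BEARER_TOKEN was saved from .status.bearerToken at creation time;
# the token value is only returned once, so capture it then.
curl -sk -H "Authorization: Bearer $BEARER_TOKEN" \
  https://rancher.example.com/v3/users
```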
Only a **valid and active** Rancher user can create a Token. Otherwise, the request fails with an error (`Error from server (Forbidden)...`).
```bash

docs/api/workflows/users.md (new file, 187 lines)

@@ -0,0 +1,187 @@
---
title: Users
---
## User Resource
The `User` resource (users.management.cattle.io) represents a user account in Rancher.
To get a description of the fields and structure of the `User` resource, run:
```sh
kubectl explain users.management.cattle.io
```
## Creating a User
Creating a local user is a two-step process: you must create the `User` resource, then provide a password via a Kubernetes `Secret`.
Only a user with sufficient permissions can create a `User` resource.
```bash
kubectl create -f - <<EOF
apiVersion: management.cattle.io/v3
kind: User
metadata:
  name: testuser
displayName: "Test User"
username: "testuser"
EOF
```
The user's password must be provided in a `Secret` object within the `cattle-local-user-passwords` namespace. The Rancher webhook will automatically hash the password and update the `Secret`.
:::important
The `Secret` must have the same name as the `metadata.name` (and `username`) of the `User` resource.
:::
```bash
kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: testuser
  namespace: cattle-local-user-passwords
type: Opaque
stringData:
  password: Pass1234567!
EOF
```
After the plaintext password is submitted, the Rancher webhook automatically hashes it and replaces the contents of the `Secret`, ensuring that the plaintext password is never stored:
```yaml
apiVersion: v1
data:
  password: 1c1Y4CdjlehGWFz26F414x2qoj4gch5L5OXsx35MAa8=
  salt: m8Co+CfMDo5XwVl0FqYzGcRIOTgRrwFSqW8yurh5DcE=
kind: Secret
metadata:
  annotations:
    cattle.io/password-hash: pbkdf2sha3512
  name: testuser
  namespace: cattle-local-user-passwords
  ownerReferences:
  - apiVersion: management.cattle.io/v3
    kind: User
    name: testuser
    uid: 663ffb4f-8178-46c8-85a3-337f4d5cbc2e
  uid: bade9f0a-b06f-4a77-9a39-4284dc2349c5
type: Opaque
```
## Updating a User's Password
To change a user's password, use the `PasswordChangeRequest` resource, which handles secure password updates.
```bash
kubectl create -f - <<EOF
apiVersion: ext.cattle.io/v1
kind: PasswordChangeRequest
spec:
  userID: "testuser"
  currentPassword: "Pass1234567!"
  newPassword: "NewPass1234567!"
EOF
```
## Listing Users
List all `User` resources in the cluster:
```sh
kubectl get users
NAME AGE
testuser 3m54s
user-4n5ws 12m
```
## Viewing a User
View a specific `User` resource by name:
```sh
kubectl get user testuser
NAME AGE
testuser 3m54s
```
## Deleting a User
Deleting a user will automatically delete the corresponding password `Secret`.
```sh
kubectl delete user testuser
user.management.cattle.io "testuser" deleted
```
## Getting the Current User's Information
A client uses the `SelfUser` resource to retrieve information about the currently authenticated user without knowing their ID. The user ID is returned in the `.status.userID` field.
```bash
kubectl create -o jsonpath='{.status.userID}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: SelfUser
EOF
testuser
```
## Refreshing a User's Group Membership
Updates to user group memberships are triggered by the `GroupMembershipRefreshRequest` resource.
:::note
Group membership is only supported for external authentication providers.
:::
### For a Single User
```bash
kubectl create -o jsonpath='{.status}' -f - <<EOF
apiVersion: ext.cattle.io/v1
kind: GroupMembershipRefreshRequest
spec:
  userId: testuser
EOF
{
  "conditions": [
    {
      "lastTransitionTime": "2025-11-10T12:01:03Z",
      "message": "",
      "reason": "",
      "status": "True",
      "type": "UserRefreshInitiated"
    }
  ],
  "summary": "Completed"
}
```
### For All Users
```bash
kubectl create -o jsonpath='{.status}' -f - <<EOF
apiVersion: ext.cattle.io/v1
kind: GroupMembershipRefreshRequest
spec:
  userId: "*"
EOF
{
  "conditions": [
    {
      "lastTransitionTime": "2025-11-10T12:01:59Z",
      "message": "",
      "reason": "",
      "status": "True",
      "type": "UserRefreshInitiated"
    }
  ],
  "summary": "Completed"
}
```


@@ -16,10 +16,7 @@ Rancher will publish deprecated features as part of the [release notes](https://
| Patch Version | Release Date |
|---------------|---------------|
| [2.12.3](https://github.com/rancher/rancher/releases/tag/v2.12.3) | October 23, 2025 |
| [2.12.2](https://github.com/rancher/rancher/releases/tag/v2.12.2) | September 25, 2025 |
| [2.12.1](https://github.com/rancher/rancher/releases/tag/v2.12.1) | August 28, 2025 |
| [2.12.0](https://github.com/rancher/rancher/releases/tag/v2.12.0) | July 30, 2025 |
| [2.13.0](https://github.com/rancher/rancher/releases/tag/v2.13.0) | November 25, 2025 |
## What can I expect when a feature is marked for deprecation?


@@ -18,7 +18,7 @@ Some feature flags require a restart of the Rancher container. Features that req
The following is a list of feature flags available in Rancher. If you've upgraded from a previous Rancher version, you may see additional flags in the Rancher UI, such as `proxy` or `dashboard` (both [discontinued](https://github.com/rancher/rancher-docs/tree/main/archived_docs/en/version-2.5/reference-guides/installation-references/feature-flags.md)):
- `aggregated-roletemplates`: Use cluster role aggregation architecture for RoleTemplates, ProjectRoleTemplateBindings, and ClusterRoleTemplateBindings. See [RoleTemplate Aggregation](../../../how-to-guides/advanced-user-guides/enable-experimental-features/role-template-aggregation.md) for more information.
- `clean-stale-secrets`: Removes stale secrets from the `cattle-impersonation-system` namespace. This slowly cleans up old secrets which are no longer being used by the impersonation system.
- `continuous-delivery`: Allows Fleet GitOps to be disabled separately from Fleet. See [Continuous Delivery](../../../how-to-guides/advanced-user-guides/enable-experimental-features/continuous-delivery.md) for more information.
- `fleet`: The Rancher provisioning framework in v2.6 and later requires Fleet. The flag will be automatically enabled when you upgrade, even if you disabled this flag in an earlier version of Rancher. See [Continuous Delivery with Fleet](../../../integrations-in-rancher/fleet/fleet.md) for more information.
@@ -61,7 +61,7 @@ The following table shows the availability and default values for some feature f
| Feature Flag Name | Default Value | Status | Available As Of | Additional Information |
| ----------------------------- | ------------- | ------------ | --------------- | ---------------------- |
| `aggregated-roletemplates` | `Disabled` | Experimental | v2.11.0 | This flag value is locked on install and can't be changed. |
| `clean-stale-secrets` | `Active` | GA | v2.10.2 | |
| `continuous-delivery` | `Active` | GA | v2.6.0 | |
| `external-rules` | v2.7.14: `Disabled`, v2.8.5: `Active` | Removed | v2.7.14, v2.8.5 | This flag affected [external `RoleTemplate` behavior](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#external-roletemplate-behavior). It is removed in Rancher v2.9.0 and later as the behavior is enabled by default. |


@@ -6,4 +6,4 @@ title: Installation References
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/installation-references"/>
</head>
Please see the following reference guides for other installation resources: [Rancher Helm chart options](helm-chart-options.md), [TLS settings](tls-settings.md), and [feature flags](feature-flags.md).


@@ -25,10 +25,16 @@ Rancher needs to be installed on a supported Kubernetes version. Consult the [Ra
Regardless of version and distribution, the Kubernetes cluster must have the aggregation API layer properly configured to support the [extension API](../../../api/extension-apiserver.md) used by Rancher.
### Install Rancher on a Hardened Kubernetes Cluster
If you install Rancher on a hardened Kubernetes cluster, check the [Exempting Required Rancher Namespaces](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md#exempting-required-rancher-namespaces) section for detailed requirements.
### Install Rancher on an IPv6-only or Dual-stack Kubernetes Cluster
You can deploy Rancher on an IPv6-only or dual-stack Kubernetes cluster.
For details on Rancher's IPv6-only and dual-stack support, see the [IPv4/IPv6 Dual-stack](../../../reference-guides/dual-stack.md) page.
## Operating Systems and Container Runtime Requirements
All supported operating systems are 64-bit x86. Rancher should work with any modern Linux distribution.


@@ -238,21 +238,23 @@ In these cases, you have to explicitly allow this traffic in your host firewall,
When using the [AWS EC2 node driver](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md) to provision cluster nodes in Rancher, you can choose to let Rancher create a security group called `rancher-nodes`. The following rules are automatically added to this security group.
| Type | Protocol | Port Range | Source/Destination | Rule Type |
|-----------------|:--------:|:-----------:|------------------------|:---------:|
| SSH | TCP | 22 | 0.0.0.0/0 and ::/0 | Inbound |
| HTTP | TCP | 80 | 0.0.0.0/0 and ::/0 | Inbound |
| Custom TCP Rule | TCP | 443 | 0.0.0.0/0 and ::/0 | Inbound |
| Custom TCP Rule | TCP | 2376 | 0.0.0.0/0 and ::/0 | Inbound |
| Custom TCP Rule | TCP | 6443 | 0.0.0.0/0 and ::/0 | Inbound |
| Custom TCP Rule | TCP | 179 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 9345 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 2379-2380 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 10250-10252 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 10256 | sg-xxx (rancher-nodes) | Inbound |
| Custom UDP Rule | UDP | 4789 | sg-xxx (rancher-nodes) | Inbound |
| Custom UDP Rule | UDP | 8472 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 30000-32767 | 0.0.0.0/0 and ::/0 | Inbound |
| Custom UDP Rule | UDP | 30000-32767 | 0.0.0.0/0 and ::/0 | Inbound |
| All traffic | All | All | 0.0.0.0/0 and ::/0 | Outbound |
### Opening SUSE Linux Ports


@@ -1,19 +0,0 @@
---
title: ClusterRole Aggregation
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features/cluster-role-aggregation"/>
</head>
:::caution
ClusterRole aggregation is a highly experimental feature that changes the RBAC architecture used for RoleTemplates, ClusterRoleTemplateBindings and ProjectRoleTemplateBindings. **It is not supported for production environments**. This feature is meant exclusively for internal testing in v2.11 and v2.12. It is expected to be available as a beta for users in v2.13.
:::
ClusterRole aggregation implements RoleTemplates, ClusterRoleTemplateBindings and ProjectRoleTemplateBindings using the Kubernetes feature [Aggregated ClusterRoles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles). The new architecture results in a net reduction in RBAC objects (Roles, RoleBindings, ClusterRoles and ClusterRoleBindings) both in the Rancher cluster and the downstream clusters.
| Environment Variable Key | Default Value | Description |
| --- | --- | --- |
| `aggregated-roletemplates` | `false` | [Experimental] Make RoleTemplates use aggregation for generated RBAC roles. |
The value of this feature flag is locked on installation, which shows up in the UI as a lock symbol beside the feature flag. That means the feature can only be set on the first ever installation of Rancher. After that, attempting to modify the value will be denied.


@@ -0,0 +1,21 @@
---
title: RoleTemplate Aggregation
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features/role-template-aggregation"/>
</head>
:::caution
RoleTemplate aggregation is an experimental feature in v2.13 that changes the RBAC architecture used for RoleTemplates, ClusterRoleTemplateBindings and ProjectRoleTemplateBindings. **It is not supported for production environments**. Breaking changes may occur between v2.13 and v2.14.
:::
RoleTemplate aggregation implements RoleTemplates, ClusterRoleTemplateBindings and ProjectRoleTemplateBindings using the Kubernetes feature [Aggregated ClusterRoles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles). The new architecture results in a net reduction in RBAC objects (Roles, RoleBindings, ClusterRoles and ClusterRoleBindings) both in the Rancher cluster and the downstream clusters.
For more information on how the feature can improve scalability and performance, please see the [Rancher Blog post](https://www.suse.com/c/rancher_blog/fewer-bindings-more-power-ranchers-rbac-boost-for-enhanced-performance-and-scalability/).
| Environment Variable Key | Default Value | Description |
| --- | --- | --- |
| `aggregated-roletemplates` | `false` | [Beta] Make RoleTemplates use aggregation for generated RBAC roles. |
The value of this feature flag is locked on installation, which shows up in the UI as a lock symbol beside the feature flag. That means the feature can only be set on the first ever installation of Rancher. After that, attempting to modify the value will be denied.
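Because the flag is locked after installation, it must be enabled when Rancher is first installed. A sketch using the Rancher Helm chart's `features` value (the hostname is a placeholder; consult the chart documentation for your Rancher version):

```shell
# Illustrative install-time enablement; the flag cannot be changed afterwards.
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set "features=aggregated-roletemplates=true"
```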


@@ -58,3 +58,7 @@ if the user has not yet logged in to Rancher. However, if the user has previousl
### You are not redirected to your authentication provider
If you fill out the **Configure an Amazon Cognito account** form and click on **Enable**, and you are not redirected to Amazon Cognito, verify your Amazon Cognito configuration.
## Configuring OIDC Single Logout (SLO)
<ConfigureSLOOidc />


@@ -363,3 +363,22 @@ Since the filter prevents Rancher from seeing that the user belongs to an exclud
>- If you don't wish to upgrade to v2.7.0+ after the Azure AD Graph API is retired, you'll need to either:
- Use the built-in Rancher auth or
- Use another third-party auth system and set that up in Rancher. Please see the [authentication docs](authentication-config.md) to learn how to configure other open authentication providers.
## Azure AD Roles Claims
Rancher supports the Roles claim provided by the Azure AD OIDC provider token, allowing for complete delegation of Role-Based Access Control (RBAC) to Azure AD. Previously, Rancher only processed the `Groups` claim to determine a user's `group` membership. This enhancement extends the logic to also include the Roles claim within the user's OIDC token.
By including the Roles claim, administrators can:
- Define specific high-level roles in Azure AD.
- Bind these Azure AD Roles directly to ProjectRoles or ClusterRoles within Rancher.
- Centralize and fully delegate access control decisions to the external OIDC provider.
For example, consider the following role structure in Azure AD:
| Azure AD Role Name | Members |
|--------------------|----------------|
| project-alpha-dev | User A, User C |
User A logs in to Rancher via Azure AD. The OIDC token includes a Roles claim of `["project-alpha-dev"]`. Rancher processes the token and adds `project-alpha-dev` to User A's internal list of groups/roles. An administrator has created a Project Role Binding that maps the Azure AD role `project-alpha-dev` to the project role `Dev Member` for Project Alpha, so User A is automatically granted the `Dev Member` role in Project Alpha.


@@ -7,60 +7,69 @@ description: Create an OpenID Connect (OIDC) client and configure Rancher to wor
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-generic-oidc"/>
</head>
Generic OpenID Connect (OIDC) allows users to sign in to Rancher using their credentials from their existing account at an OIDC Identity Provider (IdP). Rancher supports integration with the OIDC protocol and the SAML protocol. Both implementations are functionally equivalent when used with Rancher. The following instructions describe how to create an OIDC client and configure Rancher to work with your authentication provider. Users can then sign into Rancher using their login from the OIDC IdP.
## Prerequisites
### Identity Provider
In Rancher, Generic OIDC is disabled.
:::note
Consult the documentation for your specific IdP to complete the listed prerequisites.
:::
#### OIDC Client
In your IdP, create a new client with the settings below:
Setting | Value
------------|------------
`Client ID` | <CLIENT_ID> (e.g. `rancher`)
`Name` | <CLIENT_NAME> (e.g. `rancher`)
`Client Protocol` | `openid-connect`
`Access Type` | `confidential`
`Valid Redirect URI` | `https://yourRancherHostURL/verify-auth`

In the new OIDC client, create mappers to expose the user's fields.

1. Create a new `Groups Mapper` with the settings below:

   Setting | Value
   ------------|------------
   `Name` | `Groups Mapper`
   `Mapper Type` | `Group Membership`
   `Token Claim Name` | `groups`
   `Add to ID token` | `OFF`
   `Add to access token` | `OFF`
   `Add to user info` | `ON`

1. Create a new `Client Audience` with the settings below:

   Setting | Value
   ------------|------------
   `Name` | `Client Audience`
   `Mapper Type` | `Audience`
   `Included Client Audience` | `CLIENT_NAME`
   `Add to access token` | `ON`

1. Create a new `Groups Path` with the settings below.

   Setting | Value
   ------------|------------
   `Name` | `Group Path`
   `Mapper Type` | `Group Membership`
   `Token Claim Name` | `full_group_path`
   `Full group path` | `ON`
   `Add to user info` | `ON`

:::warning

Rancher uses the value received in the "sub" claim to form the PrincipalID, which is the unique identifier in Rancher. It is important to make this a value that is unique and immutable.

:::
## Configuring Generic OIDC in Rancher
@@ -80,7 +89,31 @@ Consult the documentation for your specific IdP to complete the listed prerequis
**Result:** Rancher is configured to work with your provider using the OIDC protocol. Your users can now sign into Rancher using their IdP logins.
## Configuration Reference
### Custom Claim Mapping
Custom claim mapping within the Generic OIDC configuration is supported for `name`, `email` and `groups` claims. This allows you to manually map these OIDC claims when your IdP doesn't use standard names in tokens.
#### How a Custom Groups Claim Works
A custom groups claim influences how user groups work:
- If both the standard OIDC `groups` claim and the custom groups claim are present in the user's token, the custom claim supplements the list of groups provided by the standard claim.
- If there is no standard groups claim in the token, the groups listed in the custom claim will form the user's only groups.
:::note
There is no search functionality available for groups sourced from a custom claim. To assign a role to one of these groups, you must manually enter the group's exact name into the RBAC field.
:::
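The supplement-or-fallback behavior described above can be sketched as follows. This is an illustration only, not Rancher's actual implementation, and the `custom_roles` claim name is a hypothetical example:

```python
def effective_groups(token_claims, custom_claim="custom_roles"):
    """Illustrate how a custom groups claim combines with the standard
    OIDC `groups` claim (custom_claim name is hypothetical)."""
    standard = token_claims.get("groups", [])
    custom = token_claims.get(custom_claim, [])
    if standard:
        # Both claims present: the custom claim supplements the standard one.
        return sorted(set(standard) | set(custom))
    # No standard claim: the custom claim forms the user's only groups.
    return sorted(set(custom))

# Both claims present -> merged group list
print(effective_groups({"groups": ["dev"], "custom_roles": ["ops"]}))
# Only the custom claim -> it is the sole source of groups
print(effective_groups({"custom_roles": ["ops"]}))
```

Either way, the resulting group names are what you would type into the RBAC field, since custom-claim groups cannot be searched.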
#### Configuring Custom Claims
When on the **Configure an OIDC account** form:
1. Select **Add custom claims**.
1. Add your custom `name`, `email` or `groups` claims to the appropriate **Custom Claims** field.
For example, if your IdP sends `groups` in a claim called `custom_roles`, enter `custom_roles` into the **Custom Groups Claim** field. Rancher then supplements the standard OIDC `groups` claim or looks for that specific claim when processing the user's token.
### Configuration Reference
| Field | Description |
| ------------------------- |----------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -91,6 +124,15 @@ Consult the documentation for your specific IdP to complete the listed prerequis
| Rancher URL | The URL for your Rancher Server. |
| Issuer | The URL of your IdP. If your provider has discovery enabled, Rancher uses the Issuer URL to fetch all of the required URLs. |
| Auth Endpoint | The URL where users are redirected to authenticate. |
#### Custom Claims
| Custom Claim Field | Default OIDC Claim | Custom Claim Description |
| ------------- | ------------------ | ------------------------ |
| Custom Name Claim | `name` | The name of the claim in the OIDC token that contains the user's full name or display name. |
| Custom Email Claim | `email` | The name of the claim in the OIDC token that contains the user's email address. |
| Custom Groups Claim | `groups` | The name of the claim in the OIDC token that contains the user's group memberships (used for RBAC). |
## Troubleshooting
If you are experiencing issues while testing the connection to the OIDC server, first double-check the configuration options of your OIDC client. You can also inspect the Rancher logs to help pinpoint what's causing issues. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging](../../../../faq/technical-items.md#how-can-i-enable-debug-logging) in this documentation.
@@ -108,3 +150,7 @@ If the `Issuer` and `Auth Endpoint` are generated incorrectly, open the **Config
### Error: "Invalid grant_type"
In some cases, the "Invalid grant_type" error message may be misleading and is actually caused by setting the `Valid Redirect URI` incorrectly.
## Configuring OIDC Single Logout (SLO)
<ConfigureSLOOidc />

View File

@@ -0,0 +1,84 @@
---
title: Configure GitHub App
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-github-app"/>
</head>
In environments using GitHub, you can configure the new GitHub App authentication provider in Rancher, which allows users to authenticate against a GitHub Organization account using a dedicated [GitHub App](https://docs.github.com/en/apps/overview). This new provider runs alongside the existing standard GitHub authentication provider, offering increased security and better management of permissions based on GitHub Organization teams.
## Prerequisites
:::warning
The GitHub App authentication provider only works with [GitHub Organization accounts](https://docs.github.com/en/get-started/learning-about-github/types-of-github-accounts#organization-accounts). It does not function with individual [GitHub User accounts](https://docs.github.com/en/get-started/learning-about-github/types-of-github-accounts#user-accounts).
:::
Before configuring the provider in Rancher, you must first create a GitHub App for your organization, then generate a client secret and a private key for it. Refer to [Registering a GitHub App](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app) for details.
### Create GitHub App
1. Open your [GitHub organization settings](https://github.com/settings/organizations).
1. To the right of the organization, select **Settings**.
1. In the left sidebar, click **Developer settings** > **GitHub Apps**.
1. Click **New GitHub App**.
1. Fill in the GitHub App configuration form with these values:
- **GitHub App name**: Anything you like, e.g. `My Rancher`.
- **Application description**: Optional, can be left blank.
- **Homepage URL**: `https://localhost:8443`.
- **Callback URL**: `https://localhost:8443/verify-auth`.
1. Select **Create GitHub App**.
### Generate a Client Secret
Generate a [client secret](https://docs.github.com/en/rest/authentication/authenticating-to-the-rest-api#using-basic-authentication) on the settings page for your app.
1. Go to your GitHub App.
1. Next to **Client Secrets**, select **Generate a new client secret**.
### Generate a Private Key
Generate a [private key](https://docs.github.com/en/enterprise-server/apps/creating-github-apps/authenticating-with-a-github-app/managing-private-keys-for-github-apps#generating-private-keys) on the settings page for your app.
1. Go to your GitHub App.
1. Next to **Private Keys**, click **Generate a private key**.
## GitHub App Auth Provider Configuration
To set up the GitHub App Auth Provider in Rancher, follow these steps:
1. Navigate to the **Users & Authentication** section in the Rancher UI.
1. Select **Auth Providers**.
1. Select the **GitHub App** tile.
1. Gather and enter the details of your GitHub App into the configuration form fields.
| Field Name | Description |
| ---------- | ----------- |
| **Client ID** (Required) | The client ID of your GitHub App. |
| **Client Secret** (Required) | The client secret of your GitHub App. |
| **GitHub App ID** (Required) | The numeric ID associated with your GitHub App. |
| **Installation ID** (Optional) | If you want to restrict authentication to a single installation of the App, provide its specific numeric Installation ID. |
| **Private Key** (Required) | The contents of the Private Key file (in PEM format) generated by GitHub for your App. |
:::note
A GitHub App can be installed across multiple Organizations, and each installation has a unique Installation ID. If you want to restrict authentication to a single App installation and GitHub Organization, provide the Installation ID during configuration. If you do not provide an Installation ID, the user's permissions are aggregated across all installations.
:::
1. Select **Enable**. Rancher attempts to validate the credentials and, upon success, activates the GitHub App provider.
After it is enabled, users logging in via the GitHub App provider are automatically identified and you can leverage your GitHub Organization's teams and users to configure Role-Based Access Control (RBAC) and to assign permissions to projects and clusters.
:::note
Ensure that the users and teams you intend to use for authorization exist within the GitHub organization managed by the App.
:::
- **Users**: Individual GitHub users who are members of the GitHub Organization where the App is installed can log in.
- **Groups**: GitHub Organization teams are mapped to Rancher Groups, allowing you to assign entire teams permissions within Rancher projects and clusters.

View File

@@ -203,3 +203,7 @@ To resolve this, you can either:
3. Save your changes.
2. Reconfigure your Keycloak OIDC setup using a user that is assigned to at least one group in Keycloak.
## Configuring OIDC Single Logout (SLO)
<ConfigureSLOOidc />

View File

@@ -120,6 +120,18 @@ For a breakdown of the port requirements for etcd nodes, controlplane nodes, and
Details on which ports are used in each situation are found under [Downstream Cluster Port Requirements](../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#downstream-kubernetes-cluster-nodes).
### IPv6 Address Requirements
Rancher supports clusters configured with IPv4-only, IPv6-only, or dual-stack networking.
You must provision each node with at least one valid IPv4 address, one IPv6 address, or both, according to the cluster networking configuration.
For IPv6-only environments, ensure you correctly configure the operating system and that the `/etc/hosts` file includes a valid localhost entry, for example:
```
::1 localhost
```
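To sanity-check that a hosts file contains the IPv6 loopback entry, you can parse it with a few lines of code. The helper below is illustrative only (it is not part of any Rancher tooling); on a live node you could pass it the contents of `/etc/hosts`:

```python
def has_ipv6_localhost(hosts_text):
    """Return True if hosts-file text maps ::1 to the name 'localhost'."""
    for line in hosts_text.splitlines():
        fields = line.split("#")[0].split()  # strip comments, split columns
        if len(fields) >= 2 and fields[0] == "::1" and "localhost" in fields[1:]:
            return True
    return False

sample = """127.0.0.1 localhost
::1 localhost  # IPv6 loopback
"""
print(has_ipv6_localhost(sample))  # True
```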
:::caution
You should never register a node with the same hostname or IP address as an existing node. Doing so causes RKE to prevent the node from joining and provisioning to hang. This can occur for both node driver and custom clusters. If a node must reuse a hostname or IP of an existing node, you must set the `hostname_override` [RKE option](https://rke.docs.rancher.com/config-options/nodes#overriding-the-hostname) before registering the node, so that it can join correctly.
:::

View File

@@ -299,7 +299,7 @@ rancher_kubernetes_engine_config:
useInstanceMetadataHostname: true
```
You must not enable `useInstanceMetadataHostname` when setting custom values for `hostname-override` for custom clusters. When you create a [custom cluster](../../../../reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/use-existing-nodes.md), add [`--node-name`](../../../../reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/rancher-agent-options.md) to the `docker run` node registration command to set `hostname-override` — for example, `"$(hostname -f)"`. This can be done manually or by using **Show Advanced Options** in the Rancher UI to add **Node Name**.
You must not enable `useInstanceMetadataHostname` when setting custom values for `hostname-override` for custom clusters. When you create a [custom cluster](../../../../reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/use-existing-nodes.md), add `--node-name` to the `docker run` node registration command to set `hostname-override` — for example, `"$(hostname -f)"`. This can be done manually or by using **Show Advanced Options** in the Rancher UI to add **Node Name**.
2. Select the cloud provider.

View File

@@ -103,11 +103,11 @@ The `worker` nodes, which is where your workloads will be deployed on, will typi
We recommend the minimum three-node architecture listed in the table below, but you can always add more Linux and Windows workers to scale up your cluster for redundancy:
| Node | Operating System | Kubernetes Cluster Role(s) | Purpose |
| ------ | --------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- |
| Node 1 | Linux (Ubuntu Server 18.04 recommended) | Control plane, etcd, worker | Manage the Kubernetes cluster |
| Node 2 | Linux (Ubuntu Server 18.04 recommended) | Worker | Support the Rancher Cluster agent, Metrics server, DNS, and Ingress for the cluster |
| Node 3 | Windows (Windows Server core version 1809 or above) | Worker | Run your Windows containers |
| Node | Operating System | Kubernetes Cluster Role(s) | Purpose |
|--------|----------------------------------------------------------------------------------------|-----------------------------|-------------------------------------------------------------------------------------|
| Node 1 | Linux (Ubuntu Server 18.04 recommended) | Control plane, etcd, worker | Manage the Kubernetes cluster |
| Node 2 | Linux (Ubuntu Server 18.04 recommended) | Worker | Support the Rancher Cluster agent, Metrics server, DNS, and Ingress for the cluster |
| Node 3 | Windows (Windows Server core version 1809 or above required, version 2022 recommended) | Worker | Run your Windows containers |
### Container Requirements
@@ -126,8 +126,6 @@ If you are using the GCE (Google Compute Engine) cloud provider, you must do the
This tutorial describes how to create a Rancher-provisioned cluster with the three nodes in the [recommended architecture.](#recommended-architecture)
When you provision a cluster with Rancher on existing nodes, you add nodes to the cluster by installing the [Rancher agent](../../../../reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/rancher-agent-options.md) on each one. To create or edit your cluster from the Rancher UI, run the **Registration Command** on each server to add it to your cluster.
To set up a cluster with support for Windows nodes and containers, you will need to complete the tasks below.
### 1. Provision Hosts
@@ -142,15 +140,15 @@ Your hosts can be:
You will provision three nodes:
- One Linux node, which manages the Kubernetes control plane and stores your `etcd`
- One Linux node, which manages the Kubernetes control plane, stores your `etcd`, and can optionally serve as a worker node
- A second Linux node, which will be another worker node
- The Windows node, which will run your Windows containers as a worker node
| Node | Operating System |
| ------ | ------------------------------------------------------------ |
| Node 1 | Linux (Ubuntu Server 18.04 recommended) |
| Node 2 | Linux (Ubuntu Server 18.04 recommended) |
| Node 3 | Windows (Windows Server core version 1809 or above required) |
| Node | Operating System |
|--------|----------------------------------------------------------------------------------------|
| Node 1 | Linux (Ubuntu Server 18.04 recommended) |
| Node 2 | Linux (Ubuntu Server 18.04 recommended) |
| Node 3 | Windows (Windows Server core version 1809 or above required, version 2022 recommended) |
If your nodes are hosted by a **Cloud Provider** and you want automation support such as load balancers or persistent storage devices, your nodes have additional configuration requirements. For details, see [Selecting Cloud Providers.](../set-up-cloud-providers/set-up-cloud-providers.md)
@@ -164,11 +162,11 @@ The instructions for creating a Windows cluster on existing nodes are very simil
1. Enter a name for your cluster in the **Cluster Name** field.
1. In the **Kubernetes Version** dropdown menu, select a supported Kubernetes version.
1. In the **Container Network** field, select either **Calico** or **Flannel**.
1. Click **Next**.
1. Click **Create**.
### 3. Add Nodes to the Cluster
This section describes how to register your Linux and Worker nodes to your cluster. You will run a command on each node, which will install the Rancher agent and allow Rancher to manage each node.
This section describes how to register your Linux and Windows nodes to your cluster. You will run a command on each node, which will install the Rancher system agent and allow Rancher to manage each node.
#### Add Linux Master Node
@@ -177,23 +175,18 @@ In this section, we fill out a form on the Rancher UI to get a custom command to
The first node in your cluster should be a Linux host that has both the **Control Plane** and **etcd** roles. At a minimum, both of these roles must be enabled for this node, and this node must be added to your cluster before you can add Windows hosts.
1. After cluster creation, navigate to the **Registration** tab.
1. In **Step 1** under the **Node Role** section, select at least **etcd** and **Control Plane**. We recommend selecting all three.
1. Optional: If you click **Show advanced options,** you can customize the settings for the [Rancher agent](../../../../reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/rancher-agent-options.md) and [node labels.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
1. In **Step 1** under the **Node Role** section, select all three roles. Although you can choose only the **etcd** and **Control Plane** roles, we recommend selecting all three.
1. Optional: If you click **Show Advanced**, you can configure additional settings such as specifying the IP address(es), overriding the node hostname, or adding [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to the node.
1. In **Step 2**, under the **Registration** section, copy the command displayed on the screen to your clipboard.
1. SSH into your Linux host and run the command that you copied to your clipboard.
**Result:**
**Results:**
Your cluster is created and assigned a state of **Provisioning**. Rancher is standing up your cluster.
Your cluster is created and assigned a state of **Updating**. Rancher is standing up your cluster.
You can access your cluster after its state is updated to **Active**.
It may take a few minutes for the node to register and appear under the **Machines** tab.
**Active** clusters are assigned two Projects:
- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
It may take a few minutes for the node to be registered in your cluster.
You'll be able to access the cluster once its state changes to **Active**.
#### Add Linux Worker Node
@@ -203,11 +196,13 @@ After the initial provisioning of your cluster, your cluster only has a single L
1. After cluster creation, navigate to the **Registration** tab.
1. In **Step 1** under the **Node Role** section, select **Worker**.
1. Optional: If you click **Show advanced options,** you can customize the settings for the [Rancher agent](../../../../reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/rancher-agent-options.md) and [node labels.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
1. Optional: If you click **Show Advanced**, you can configure additional settings such as specifying the IP address(es), overriding the node hostname, or adding [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to the node.
1. In **Step 2**, under the **Registration** section, copy the command displayed on the screen to your clipboard.
1. SSH into your Linux host and run the command that you copied to your clipboard.
**Result:** The **Worker** role is installed on your Linux host, and the node registers with Rancher. It may take a few minutes for the node to be registered in your cluster.
**Results:**
The **Worker** role is installed on your Linux host, and the node registers with Rancher. It may take a few minutes for the node to be registered in your cluster.
:::note
@@ -216,7 +211,7 @@ Taints on Linux Worker Nodes
For each Linux worker node added to the cluster, the following taint is added. This taint ensures that workloads added to the Windows cluster are automatically scheduled onto Windows worker nodes. If you want to schedule workloads specifically onto a Linux worker node, you must add tolerations to those workloads.
| Taint Key | Taint Value | Taint Effect |
| -------------- | ----------- | ------------ |
|----------------|-------------|--------------|
| `cattle.io/os` | `linux` | `NoSchedule` |
:::
@@ -231,12 +226,16 @@ The registration command to add the Windows workers only appears after the clust
1. After cluster creation, navigate to the **Registration** tab.
1. In **Step 1** under the **Node Role** section, select **Worker**.
1. Optional: If you click **Show advanced options,** you can customize the settings for the [Rancher agent](../../../../reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/rancher-agent-options.md) and [node labels.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
1. Optional: If you click **Show Advanced**, you can configure additional settings such as specifying the IP address(es), overriding the node hostname, or adding [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to the node.
1. In **Step 2**, under the **Registration** section, copy the command for Windows workers displayed on the screen to your clipboard.
1. Log in to your Windows host using your preferred tool, such as [Microsoft Remote Desktop](https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients). Run the command copied to your clipboard in the **Command Prompt (CMD)**.
1. Log in to your Windows host using your preferred tool, such as [Microsoft Remote Desktop](https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients). Run the command copied to your clipboard in the **PowerShell Console** as an Administrator.
1. Optional: Repeat these instructions if you want to add more Windows nodes to your cluster.
**Result:** The **Worker** role is installed on your Windows host, and the node registers with Rancher. It may take a few minutes for the node to be registered in your cluster. You now have a Windows Kubernetes cluster.
**Results:**
The **Worker** role is installed on your Windows host, and the node registers with Rancher. It may take a few minutes for the node to be registered in your cluster.
You now have a Windows Kubernetes cluster.
### Optional Next Steps

View File

@@ -20,7 +20,8 @@ Then you will create an EC2 cluster in Rancher, and when configuring the new clu
- [Example IAM Policy](#example-iam-policy)
- [Example IAM Policy with PassRole](#example-iam-policy-with-passrole) (needed if you want to use [Kubernetes Cloud Provider](../../kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md) or want to pass an IAM Profile to an instance)
- [Example IAM Policy to allow encrypted EBS volumes](#example-iam-policy-to-allow-encrypted-ebs-volumes)
- **IAM Policy added as Permission** to the user. See [Amazon Documentation: Adding Permissions to a User (Console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) how to attach it to an user.
- **IAM Policy added as Permission** to the user. See [Amazon Documentation: Adding Permissions to a User (Console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) for instructions on attaching it to a user.
- **IPv4-only or IPv6-only or dual-stack subnet and/or VPC** where nodes can be provisioned and assigned IPv4 and/or IPv6 addresses. See [Amazon Documentation: IPv6 support for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html).
## Creating an EC2 Cluster

View File

@@ -19,10 +19,7 @@ In order to deploy and run the adapter successfully, you need to ensure its vers
| Rancher Version | Adapter Version |
|-----------------|------------------|
| v2.12.3 | 107.0.0+up7.0.0 |
| v2.12.2 | 107.0.0+up7.0.0 |
| v2.12.1 | 107.0.0+up7.0.0 |
| v2.12.0 | 107.0.0+up7.0.0 |
| v2.13.0 | 108.0.0+up8.0.0 |
### 1. Gain Access to the Local Cluster

View File

@@ -80,3 +80,18 @@ Use [Instance Metadata Service Version 2 (IMDSv2)](https://docs.aws.amazon.com/A
Add metadata using [tags](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) to categorize resources.
### IPv6 Address Count
Specify how many IPv6 addresses to assign to the instances network interface.
### IPv6 Address Only
Enable this option if the instance should use IPv6 exclusively. IPv6-only VPCs or subnets require this. When enabled, the instance will have IPv6 as its sole address, and the IPv6 Address Count must be greater than zero.
### HTTP Protocol IPv6
Enable or disable IPv6 endpoints for the instance metadata service.
### Enable Primary IPv6
Enable this option to designate the first assigned IPv6 address as the primary address. This ensures a consistent, non-changing IPv6 address for the instance. It does not control whether IPv6 addresses are assigned.

View File

@@ -28,6 +28,8 @@ Enable the DigitalOcean agent for additional [monitoring](https://docs.digitaloc
Enable IPv6 for Droplets.
For more information, refer to the [Digital Ocean IPv6 documentation](https://docs.digitalocean.com/products/networking/ipv6).
### Private Networking
Enable private networking for Droplets.

View File

@@ -71,7 +71,7 @@ Tags is a list of _network tags_, which can be used to associate preexisting Fir
### Labels
A comma seperated list of custom labels to be attached to all VMs within a given machine pool. Unlike Tags, Labels do not influence networking behavior and only serve to organize cloud resources.
A comma separated list of custom labels to be attached to all VMs within a given machine pool. Unlike Tags, Labels do not influence networking behavior and only serve to organize cloud resources.
## Advanced Options

View File

@@ -6,4 +6,4 @@ title: Machine Configuration
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration"/>
</head>
Machine configuration is the arrangement of resources assigned to a virtual machine. Please see the docs for [Amazon EC2](amazon-ec2.md), [DigitalOcean](digitalocean.md), and [Azure](azure.md) to learn more.
Machine configuration is the arrangement of resources assigned to a virtual machine. Please see the docs for [Amazon EC2](amazon-ec2.md), [DigitalOcean](digitalocean.md), [Google GCE](google-gce.md), and [Azure](azure.md) to learn more.

View File

@@ -6,4 +6,6 @@ title: Node Template Configuration
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration"/>
</head>
<EOLRKE1Warning />
To learn about node template config, refer to [EC2 Node Template Configuration](amazon-ec2.md), [DigitalOcean Node Template Configuration](digitalocean.md), [Azure Node Template Configuration](azure.md), [vSphere Node Template Configuration](vsphere.md), and [Nutanix Node Template Configuration](nutanix.md).

View File

@@ -63,7 +63,15 @@ Enable network policy enforcement on the cluster. A network policy defines the l
_Mutable: yes_
choose whether to enable or disable inter-project communication. Note that enabling Project Network Isolation will automatically enable Network Policy and Network Policy Config, but not vice versa.
Choose whether to enable or disable inter-project communication.
#### Imported Clusters
For imported clusters, Project Network Isolation (PNI) requires Kubernetes Network Policy to be enabled on the cluster beforehand.
For clusters created by Rancher, Rancher enables Kubernetes Network Policy automatically.
1. In GKE, enable Network Policy at the cluster level. Refer to the [official GKE guide](https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy) for instructions.
1. After enabling Network Policy, import the cluster into Rancher and enable PNI for project-level isolation.
### Node IPv4 CIDR Block

View File

@@ -13,7 +13,7 @@ This section covers the configuration options that are available in Rancher for
You can configure the Kubernetes options one of two ways:
- [Rancher UI](#configuration-options-in-the-rancher-ui): Use the Rancher UI to select options that are commonly customized when setting up a Kubernetes cluster.
- [Cluster Config File](#cluster-config-file-reference): Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create a K3s config file. Using a config file allows you to set any of the [options](https://rancher.com/docs/k3s/latest/en/installation/install-options/) available in an K3s installation.
- [Cluster Config File](#cluster-config-file-reference): Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create a K3s config file. Using a config file lets you set any of the [options](https://rancher.com/docs/k3s/latest/en/installation/install-options/) available during a K3s installation.
## Editing Clusters in the Rancher UI
@@ -32,7 +32,7 @@ To edit your cluster,
### Editing Clusters in YAML
For a complete reference of configurable options for K3s clusters in YAML, see the [K3s documentation.](https://rancher.com/docs/k3s/latest/en/installation/install-options/)
For a complete reference of configurable options for K3s clusters in YAML, see the [K3s documentation](https://docs.k3s.io/installation/configuration).
To edit your cluster with YAML:
@@ -48,7 +48,8 @@ This subsection covers generic machine pool configurations. For specific infrast
- [Azure](../downstream-cluster-configuration/machine-configuration/azure.md)
- [DigitalOcean](../downstream-cluster-configuration/machine-configuration/digitalocean.md)
- [EC2](../downstream-cluster-configuration/machine-configuration/amazon-ec2.md)
- [Amazon EC2](../downstream-cluster-configuration/machine-configuration/amazon-ec2.md)
- [Google GCE](../downstream-cluster-configuration/machine-configuration/google-gce.md)
##### Pool Name
@@ -86,9 +87,9 @@ Add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-tolerat
#### Basics
##### Kubernetes Version
The version of Kubernetes installed on your cluster nodes. Rancher packages its own version of Kubernetes based on [hyperkube](https://github.com/rancher/hyperkube).
The version of Kubernetes installed on your cluster nodes.
For more detail, see [Upgrading Kubernetes](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
For details on upgrading or rolling back Kubernetes, refer to [this guide](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
##### Pod Security Admission Configuration Template
@@ -108,7 +109,7 @@ Option to enable or disable [SELinux](https://rancher.com/docs/k3s/latest/en/adv
##### CoreDNS
By default, [CoreDNS](https://coredns.io/) is installed as the default DNS provider. If CoreDNS is not installed, an alternate DNS provider must be installed yourself. Refer to the [K3s documentation](https://rancher.com/docs/k3s/latest/en/networking/#coredns) for details..
By default, [CoreDNS](https://coredns.io/) is installed as the DNS provider. If CoreDNS is not installed, you must install an alternate DNS provider yourself. Refer to the [K3s documentation](https://rancher.com/docs/k3s/latest/en/networking/#coredns) for details.
##### Klipper Service LB
@@ -148,15 +149,49 @@ Option to choose whether to expose etcd metrics to the public or only within the
##### Cluster CIDR
IPv4/IPv6 network CIDRs to use for pod IPs (default: 10.42.0.0/16).
IPv4/IPv6 network CIDRs to use for pod IPs (default: `10.42.0.0/16`).
Example values:
- IPv4-only: `10.42.0.0/16`
- IPv6-only: `2001:cafe:42::/56`
- Dual-stack: `10.42.0.0/16,2001:cafe:42::/56`
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [K3s documentation: Dual-stack (IPv4 + IPv6) Networking](https://docs.k3s.io/networking/basic-network-options#dual-stack-ipv4--ipv6-networking)
- [K3s documentation: Single-stack IPv6 Networking](https://docs.k3s.io/networking/basic-network-options#single-stack-ipv6-networking)
:::caution
You must configure the Cluster CIDR when you first create the cluster. You cannot change the Cluster CIDR on an existing cluster after it starts.
:::
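Since the CIDR fields accept a comma-separated list for dual-stack, it can help to see how such a value splits into individual networks. A quick sketch using Python's standard `ipaddress` module (illustrative only; Rancher performs its own validation):

```python
import ipaddress

def parse_cidrs(value):
    """Split a comma-separated CIDR string into validated network objects."""
    return [ipaddress.ip_network(part.strip()) for part in value.split(",")]

for value in ("10.42.0.0/16",                     # IPv4-only
              "2001:cafe:42::/56",                # IPv6-only
              "10.42.0.0/16,2001:cafe:42::/56"):  # dual-stack
    nets = parse_cidrs(value)
    print([(n.version, str(n)) for n in nets])
```

An invalid entry (for example, a host address with non-zero host bits) raises `ValueError`, which is a quick way to catch typos before creating the cluster.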
##### Service CIDR
IPv4/IPv6 network CIDRs to use for service IPs (default: 10.43.0.0/16).
IPv4/IPv6 network CIDRs to use for service IPs (default: `10.43.0.0/16`).
Example values:
- IPv4-only: `10.43.0.0/16`
- IPv6-only: `2001:cafe:43::/112`
- Dual-stack: `10.43.0.0/16,2001:cafe:43::/112`
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [K3s documentation: Dual-stack (IPv4 + IPv6) Networking](https://docs.k3s.io/networking/basic-network-options#dual-stack-ipv4--ipv6-networking)
- [K3s documentation: Single-stack IPv6 Networking](https://docs.k3s.io/networking/basic-network-options#single-stack-ipv6-networking)
:::caution
You must configure the Service CIDR when you first create the cluster. You cannot change the Service CIDR on an existing cluster after it starts.
:::
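As a concrete illustration, the dual-stack example values above map to K3s server configuration entries. This is a hedged sketch: `cluster-cidr` and `service-cidr` are standard K3s config keys, and a temporary path is used here so the snippet is safe to run (on a real server the file lives at `/etc/rancher/k3s/config.yaml`).

```shell
# Sketch only: dual-stack Cluster CIDR and Service CIDR expressed as
# K3s config file entries, using the example values from this section.
mkdir -p /tmp/k3s-example
cat > /tmp/k3s-example/config.yaml <<'EOF'
cluster-cidr: "10.42.0.0/16,2001:cafe:42::/56"
service-cidr: "10.43.0.0/16,2001:cafe:43::/112"
EOF
cat /tmp/k3s-example/config.yaml
```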
##### Cluster DNS
IPv4 Cluster IP for coredns service. Should be in your service-cidr range (default: 10.43.0.10).
IPv4 cluster IP for the CoreDNS service. It should be within your Service CIDR range (default: `10.43.0.10`).
##### Cluster Domain
@@ -168,11 +203,11 @@ Option to change the range of ports that can be used for [NodePort services](htt
##### Truncate Hostnames
Option to truncate hostnames to 15 characters or less. You can only set this field during the initial creation of the cluster. You can't enable or disable the 15 character limit after cluster creation.
Option to truncate hostnames to 15 characters or fewer. You can only set this field during the initial creation of the cluster. You can't enable or disable the 15-character limit after cluster creation.
This setting only affects machine-provisioned clusters. Since custom clusters set hostnames during their own node creation process, which occurs outside of Rancher, this field doesn't restrict custom cluster hostname length.
Truncating hostnames in a cluster improves compatibility with Windows-based systems. Although Kubernetes allows hostnames up to 63 characters in length, systems that use NetBIOS restrict hostnames to 15 characters or less.
Truncating hostnames in a cluster improves compatibility with Windows-based systems. Although Kubernetes allows hostnames up to 63 characters in length, systems that use NetBIOS restrict hostnames to 15 characters or fewer.
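For illustration, the 15-character limit behaves like a simple prefix truncation (a sketch only; Rancher's actual implementation may also adjust names to keep them unique):

```shell
# Illustrative only: shortening a long node hostname to the
# 15-character NetBIOS-compatible limit described above.
full_hostname="mycluster-pool1-node-abc123xyz"
short_hostname="$(printf '%s' "$full_hostname" | cut -c1-15)"
echo "$short_hostname"   # mycluster-pool1
```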
##### TLS Alternate Names
@@ -186,6 +221,33 @@ For more detail on how an authorized cluster endpoint works and why it is used,
We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the [recommended architecture section.](../../rancher-manager-architecture/architecture-recommendations.md#architecture-for-an-authorized-cluster-endpoint-ace)
##### Stack Preference
Choose the networking stack for the cluster. This option affects:
- The address used for health and readiness probes of components such as Calico, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, and kubelet.
- The server URL in the `authentication-token-webhook-config-file` for the Authorized Cluster Endpoint.
- The `advertise-client-urls` setting for etcd during snapshot restoration.
Options are `ipv4`, `ipv6`, `dual`:
- When set to `ipv4`, the cluster uses `127.0.0.1`
- When set to `ipv6`, the cluster uses `[::1]`
- When set to `dual`, the cluster uses `localhost`
The stack preference must match the cluster's networking configuration:
- Set to `ipv4` for IPv4-only clusters
- Set to `ipv6` for IPv6-only clusters
- Set to `dual` for dual-stack clusters
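The mapping above can be sketched as a small lookup (illustrative only; these are the loopback addresses listed in this section, not Rancher's implementation):

```shell
# Sketch: loopback address implied by each stack preference value.
stack_loopback() {
  case "$1" in
    ipv4) echo "127.0.0.1" ;;
    ipv6) echo "[::1]" ;;
    dual) echo "localhost" ;;
    *)    echo "unknown stack preference: $1" >&2; return 1 ;;
  esac
}
stack_loopback ipv4   # 127.0.0.1
```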
:::caution
A correct loopback address configuration is critical for successful cluster provisioning.
For more information, refer to the [Node Requirements](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md) page.
:::
#### Registries
Select the image repository to pull Rancher images from. For more details and configuration options, see the [K3s documentation](https://rancher.com/docs/k3s/latest/en/installation/private-registry/).


@@ -32,7 +32,7 @@ To edit your cluster,
### Editing Clusters in YAML
For a complete reference of configurable options for K3s clusters in YAML, see the [K3s documentation.](https://rancher.com/docs/k3s/latest/en/installation/install-options/)
For a complete reference of configurable options for RKE2 clusters in YAML, see the [RKE2 documentation](https://docs.rke2.io/install/configuration).
To edit your cluster in YAML:
@@ -48,7 +48,8 @@ This subsection covers generic machine pool configurations. For specific infrast
- [Azure](../downstream-cluster-configuration/machine-configuration/azure.md)
- [DigitalOcean](../downstream-cluster-configuration/machine-configuration/digitalocean.md)
- [EC2](../downstream-cluster-configuration/machine-configuration/amazon-ec2.md)
- [Amazon EC2](../downstream-cluster-configuration/machine-configuration/amazon-ec2.md)
- [Google GCE](../downstream-cluster-configuration/machine-configuration/google-gce.md)
##### Pool Name
@@ -86,9 +87,9 @@ Add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-tolerat
#### Basics
##### Kubernetes Version
The version of Kubernetes installed on your cluster nodes. Rancher packages its own version of Kubernetes based on [hyperkube](https://github.com/rancher/hyperkube).
The version of Kubernetes installed on your cluster nodes.
For more detail, see [Upgrading Kubernetes](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
For details on upgrading or rolling back Kubernetes, refer to [this guide](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
##### Container Network Provider
@@ -105,20 +106,19 @@ Out of the box, Rancher is compatible with the following network providers:
- [Canal](https://github.com/projectcalico/canal)
- [Cilium](https://cilium.io/)*
- [Calico](https://docs.projectcalico.org/v3.11/introduction/)
- [Flannel](https://github.com/flannel-io/flannel)
- [Multus](https://github.com/k8snetworkplumbingwg/multus-cni)
\* When using [project network isolation](#project-network-isolation) with the [Cilium CNI](../../../faq/container-network-interface-providers.md#cilium), it is possible to enable cross-node ingress routing. See the [CNI provider docs](../../../faq/container-network-interface-providers.md#ingress-routing-across-nodes-in-cilium) to learn more.
For more details on the different networking providers and how to configure them, please view our [RKE2 documentation](https://docs.rke2.io/install/network_options).
For more details on the different networking providers and how to configure them, please view our [RKE2 documentation](https://docs.rke2.io/networking/basic_network_options).
###### Dual-stack Networking
[Dual-stack](https://docs.rke2.io/install/network_options#dual-stack-configuration) networking is supported for all CNI providers. To configure RKE2 in dual-stack mode, set valid IPv4/IPv6 CIDRs for your [Cluster CIDR](#cluster-cidr) and/or [Service CIDR](#service-cidr).
###### Dual-stack Additional Configuration
:::caution
When using `cilium` or `multus,cilium` as your container network interface provider, ensure the **Enable IPv6 Support** option is also enabled.
:::
##### Cloud Provider
You can configure a [Kubernetes cloud provider](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md). If you want to use dynamically provisioned [volumes and storage](../../../how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md) in Kubernetes, typically you must select the specific cloud provider in order to use it. For example, if you want to use Amazon EBS, you would need to select the `aws` cloud provider.
@@ -181,27 +181,62 @@ Option to choose whether to expose etcd metrics to the public or only within the
##### Cluster CIDR
IPv4 and/or IPv6 network CIDRs to use for pod IPs (default: 10.42.0.0/16).
IPv4 and/or IPv6 network CIDRs to use for pod IPs (default: `10.42.0.0/16`).
###### Dual-stack Networking
Example values:
To configure [dual-stack](https://docs.rke2.io/install/network_options#dual-stack-configuration) mode, enter a valid IPv4/IPv6 CIDR. For example `10.42.0.0/16,2001:cafe:42:0::/56`.
- IPv4-only: `10.42.0.0/16`
- IPv6-only: `2001:cafe:42::/56`
- Dual-stack: `10.42.0.0/16,2001:cafe:42::/56`
[Additional configuration](#dual-stack-additional-configuration) is required when using `cilium` or `multus,cilium` as your [container network](#container-network-provider) interface provider.
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [RKE2 documentation: Dual-stack configuration](https://docs.rke2.io/networking/basic_network_options#dual-stack-configuration)
- [RKE2 documentation: IPv6-only setup](https://docs.rke2.io/networking/basic_network_options#ipv6-setup)
:::caution
You must configure the Cluster CIDR when you first create the cluster. You cannot change the Cluster CIDR on an existing cluster after it starts.
:::
:::caution
When using `cilium` or `multus,cilium` as your container network interface provider, ensure the **Enable IPv6 Support** option is also enabled.
:::
##### Service CIDR
IPv4/IPv6 network CIDRs to use for service IPs (default: 10.43.0.0/16).
IPv4/IPv6 network CIDRs to use for service IPs (default: `10.43.0.0/16`).
###### Dual-stack Networking
Example values:
To configure [dual-stack](https://docs.rke2.io/install/network_options#dual-stack-configuration) mode, enter a valid IPv4/IPv6 CIDR. For example `10.42.0.0/16,2001:cafe:42:0::/56`.
- IPv4-only: `10.43.0.0/16`
- IPv6-only: `2001:cafe:43::/112`
- Dual-stack: `10.43.0.0/16,2001:cafe:43::/112`
[Additional configuration](#dual-stack-additional-configuration) is required when using `cilium` or `multus,cilium` as your [container network](#container-network-provider) interface provider.
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [RKE2 documentation: Dual-stack configuration](https://docs.rke2.io/networking/basic_network_options#dual-stack-configuration)
- [RKE2 documentation: IPv6-only setup](https://docs.rke2.io/networking/basic_network_options#ipv6-setup)
:::caution
You must configure the Service CIDR when you first create the cluster. You cannot change the Service CIDR on an existing cluster after it starts.
:::
:::caution
When using `cilium` or `multus,cilium` as your container network interface provider, ensure the **Enable IPv6 Support** option is also enabled.
:::
##### Cluster DNS
IPv4 Cluster IP for coredns service. Should be in your service-cidr range (default: 10.43.0.10).
IPv4 cluster IP for the CoreDNS service. It should be within your Service CIDR range (default: `10.43.0.10`).
##### Cluster Domain
@@ -213,11 +248,11 @@ Option to change the range of ports that can be used for [NodePort services](htt
##### Truncate Hostnames
Option to truncate hostnames to 15 characters or less. You can only set this field during the initial creation of the cluster. You can't enable or disable the 15 character limit after cluster creation.
Option to truncate hostnames to 15 characters or fewer. You can only set this field during the initial creation of the cluster. You can't enable or disable the 15-character limit after cluster creation.
This setting only affects machine-provisioned clusters. Since custom clusters set hostnames during their own node creation process, which occurs outside of Rancher, this field doesn't restrict custom cluster hostname length.
Truncating hostnames in a cluster improves compatibility with Windows-based systems. Although Kubernetes allows hostnames up to 63 characters in length, systems that use NetBIOS restrict hostnames to 15 characters or less.
Truncating hostnames in a cluster improves compatibility with Windows-based systems. Although Kubernetes allows hostnames up to 63 characters in length, systems that use NetBIOS restrict hostnames to 15 characters or fewer.
##### TLS Alternate Names
@@ -233,6 +268,33 @@ For more detail on how an authorized cluster endpoint works and why it is used,
We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the [recommended architecture section.](../../rancher-manager-architecture/architecture-recommendations.md#architecture-for-an-authorized-cluster-endpoint-ace)
##### Stack Preference
Choose the networking stack for the cluster. This option affects:
- The address used for health and readiness probes of components such as Calico, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, and kubelet.
- The server URL in the `authentication-token-webhook-config-file` for the Authorized Cluster Endpoint.
- The `advertise-client-urls` setting for etcd during snapshot restoration.
Options are `ipv4`, `ipv6`, `dual`:
- When set to `ipv4`, the cluster uses `127.0.0.1`
- When set to `ipv6`, the cluster uses `[::1]`
- When set to `dual`, the cluster uses `localhost`
The stack preference must match the cluster's networking configuration:
- Set to `ipv4` for IPv4-only clusters
- Set to `ipv6` for IPv6-only clusters
- Set to `dual` for dual-stack clusters
:::caution
A correct loopback address configuration is critical for successful cluster provisioning.
For more information, refer to the [Node Requirements](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md) page.
:::
#### Registries
Select the image repository to pull Rancher images from. For more details and configuration options, see the [RKE2 documentation](https://docs.rke2.io/install/private_registry).


@@ -1,57 +0,0 @@
---
title: Rancher Agent Options
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/rancher-agent-options"/>
</head>
Rancher deploys an agent on each node to communicate with the node. This page describes the options that can be passed to the agent. To use these options, you will need to [create a cluster with custom nodes](use-existing-nodes.md) and add the options to the generated `docker run` command when adding a node.
For an overview of how Rancher communicates with downstream clusters using node agents, refer to the [architecture section.](../../../rancher-manager-architecture/communicating-with-downstream-user-clusters.md#3-node-agents)
## General options
| Parameter | Environment variable | Description |
| ---------- | -------------------- | ----------- |
| `--server` | `CATTLE_SERVER` | The configured Rancher `server-url` setting which the agent connects to |
| `--token` | `CATTLE_TOKEN` | Token that is needed to register the node in Rancher |
| `--ca-checksum` | `CATTLE_CA_CHECKSUM` | The SHA256 checksum of the configured Rancher `cacerts` setting to validate |
| `--node-name` | `CATTLE_NODE_NAME` | Override the hostname that is used to register the node (defaults to `hostname -s`) |
| `--label` | `CATTLE_NODE_LABEL` | Add node labels to the node. For multiple labels, pass additional `--label` options. (`--label key=value`) |
| `--taints` | `CATTLE_NODE_TAINTS` | Add node taints to the node. For multiple taints, pass additional `--taints` options. (`--taints key=value:effect`) |
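Putting the general options together, a registration command might look like the following sketch. The image tag, server URL, token, checksum, and node name are placeholder values; always copy the actual command from the Rancher UI.

```shell
# Hypothetical example only: the real command, including a valid token
# and CA checksum, is generated by the Rancher UI for your cluster.
agent_cmd="docker run -d --privileged --restart=unless-stopped --net=host \
rancher/rancher-agent:v2.13.0 \
--server https://rancher.example.com \
--token abc123exampletoken \
--node-name node-01 \
--label environment=dev \
--taints dedicated=gpu:NoSchedule"
echo "$agent_cmd"
```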
## Role options
| Parameter | Environment variable | Description |
| ---------- | -------------------- | ----------- |
| `--all-roles` | `ALL=true` | Apply all roles (`etcd`,`controlplane`,`worker`) to the node |
| `--etcd` | `ETCD=true` | Apply the role `etcd` to the node |
| `--controlplane` | `CONTROL=true` | Apply the role `controlplane` to the node |
| `--worker` | `WORKER=true` | Apply the role `worker` to the node |
## IP address options
| Parameter | Environment variable | Description |
| ---------- | -------------------- | ----------- |
| `--address` | `CATTLE_ADDRESS` | The IP address the node will be registered with (defaults to the IP used to reach `8.8.8.8`) |
| `--internal-address` | `CATTLE_INTERNAL_ADDRESS` | The IP address used for inter-host communication on a private network |
### Dynamic IP address options
For automation purposes, the registration command must be generic enough to run on every node, so it can't contain a node-specific IP address. Dynamic IP address options solve this: they are used as values for the existing IP address options, and are supported for `--address` and `--internal-address`.
| Value | Example | Description |
| ---------- | -------------------- | ----------- |
| Interface name | `--address eth0` | The first configured IP address will be retrieved from the given interface |
| `ipify` | `--address ipify` | Value retrieved from `https://api.ipify.org` will be used |
| `awslocal` | `--address awslocal` | Value retrieved from `http://169.254.169.254/latest/meta-data/local-ipv4` will be used |
| `awspublic` | `--address awspublic` | Value retrieved from `http://169.254.169.254/latest/meta-data/public-ipv4` will be used |
| `doprivate` | `--address doprivate` | Value retrieved from `http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address` will be used |
| `dopublic` | `--address dopublic` | Value retrieved from `http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/address` will be used |
| `azprivate` | `--address azprivate` | Value retrieved from `http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/privateIpAddress?api-version=2017-08-01&format=text` will be used |
| `azpublic` | `--address azpublic` | Value retrieved from `http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/publicIpAddress?api-version=2017-08-01&format=text` will be used |
| `gceinternal` | `--address gceinternal` | Value retrieved from `http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip` will be used |
| `gceexternal` | `--address gceexternal` | Value retrieved from `http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip` will be used |
| `packetlocal` | `--address packetlocal` | Value retrieved from `https://metadata.packet.net/2009-04-04/meta-data/local-ipv4` will be used |
| `packetpublic` | `--address packetpublic` | Value retrieved from `https://metadata.packet.net/2009-04-04/meta-data/public-ipv4` will be used |
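As a sketch, the lookup behind a few of these dynamic values can be modeled like this (illustrative only — it maps the option name to its metadata source from the table, without performing any request):

```shell
# Sketch: dynamic address value -> metadata source, per the table above.
metadata_source() {
  case "$1" in
    ipify)     echo "https://api.ipify.org" ;;
    awslocal)  echo "http://169.254.169.254/latest/meta-data/local-ipv4" ;;
    awspublic) echo "http://169.254.169.254/latest/meta-data/public-ipv4" ;;
    *)         return 1 ;;
  esac
}
metadata_source ipify   # https://api.ipify.org
```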


@@ -9,7 +9,7 @@ description: To create a cluster with custom nodes, you'll need to access serv
When you create a custom cluster, Rancher can use RKE2/K3s to create a Kubernetes cluster in on-prem bare-metal servers, on-prem virtual machines, or in any node hosted by an infrastructure provider.
To use this option you'll need access to servers you intend to use in your Kubernetes cluster. Provision each server according to the [requirements](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md), which includes some hardware specifications and Docker. After you install Docker on each server, you willl also run the command provided in the Rancher UI on each server to turn each one into a Kubernetes node.
To use this option, you need access to the servers that will be part of your Kubernetes cluster. Provision each server according to the [requirements](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md). Then, run the command provided in the Rancher UI on each server to convert it into a Kubernetes node.
This section describes how to set up a custom cluster.
@@ -33,7 +33,15 @@ If you want to reuse a node from a previous custom cluster, [clean the node](../
Provision the host according to the [installation requirements](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md) and the [checklist for production-ready clusters.](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/checklist-for-production-ready-clusters.md)
If you're using Amazon EC2 as your host and want to use the [dual-stack](https://kubernetes.io/docs/concepts/services-networking/dual-stack/) feature, there are additional [requirements](https://rancher.com/docs/rke//latest/en/config-options/dual-stack#requirements) when provisioning the host.
:::note IPv6-only cluster
For an IPv6-only cluster, ensure that your operating system correctly configures the `/etc/hosts` file.
```
::1 localhost
```
:::
### 2. Create the Custom Cluster
@@ -41,39 +49,43 @@ If you're using Amazon EC2 as your host and want to use the [dual-stack](https:/
1. On the **Clusters** page, click **Create**.
1. Click **Custom**.
1. Enter a **Cluster Name**.
1. Use **Cluster Configuration** section to choose the version of Kubernetes, what network provider will be used and if you want to enable project network isolation. To see more cluster options, click on **Show advanced options**.
1. Use the **Cluster Configuration** section to set up the cluster. For more information, see [RKE2 Cluster Configuration Reference](../rke2-cluster-configuration.md) and [K3s Cluster Configuration Reference](../k3s-cluster-configuration.md).
:::note Using Windows nodes as Kubernetes workers?
:::note Windows nodes
- See [Enable the Windows Support Option](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/use-windows-clusters.md).
- The only Network Provider available for clusters with Windows support is Flannel.
To learn more about using Windows nodes as Kubernetes workers, see [Launching Kubernetes on Windows Clusters](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/use-windows-clusters.md).
:::
:::
:::note Dual-stack on Amazon EC2:
1. Click **Create**.
If you're using Amazon EC2 as your host and want to use the [dual-stack](https://kubernetes.io/docs/concepts/services-networking/dual-stack/) feature, there are additional [requirements](https://rancher.com/docs/rke//latest/en/config-options/dual-stack#requirements) when configuring RKE.
**Result:** The UI redirects to the **Registration** page, where you can generate the registration command for your nodes.
:::
1. From **Node Role**, select the roles you want a cluster node to fill. You must provision at least one node for each role: etcd, worker, and control plane. A custom cluster requires all three roles to finish provisioning. For more information on roles, see [Roles for Nodes in Kubernetes Clusters](../../../kubernetes-concepts.md#roles-for-nodes-in-kubernetes-clusters).
6. Click **Next**.
:::note Bare-Metal Server
4. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user.
If you plan to dedicate bare-metal servers to each role, you must provision a bare-metal server for each role (i.e., provision multiple bare-metal servers).
7. From **Node Role**, choose the roles that you want filled by a cluster node. You must provision at least one node for each role: `etcd`, `worker`, and `control plane`. All three roles are required for a custom cluster to finish provisioning. For more information on roles, see [this section.](../../../kubernetes-concepts.md#roles-for-nodes-in-kubernetes-clusters)
:::note
:::note
1. **Optional**: Click **Show Advanced** to configure additional settings such as specifying the IP address(es), overriding the node hostname, or adding [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to the node
- Using Windows nodes as Kubernetes workers? See [this section](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/use-windows-clusters.md).
- Bare-Metal Server Reminder: If you plan on dedicating bare-metal servers to each role, you must provision a bare-metal server for each role (i.e. provision multiple bare-metal servers).
:::note
:::
The **Node Public IP** and **Node Private IP** fields can accept either a single address or a comma-separated list of addresses (for example: `10.0.0.5,2001:db8::1`).
8. **Optional**: Click **[Show advanced options](rancher-agent-options.md)** to specify IP address(es) to use when registering the node, override the hostname of the node, or to add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to the node.
:::
9. Copy the command displayed on screen to your clipboard.
:::note IPv6-only or Dual-stack Cluster
10. Log in to your Linux host using your preferred shell, such as PuTTy or a remote Terminal connection. Run the command copied to your clipboard.
In both IPv6-only and dual-stack clusters, you should specify the node's **IPv6 address** as the **Node Private IP**.
:::
1. Copy the command displayed on screen to your clipboard.
1. Log in to your Linux host using your preferred shell, such as PuTTy or a remote Terminal connection. Run the command copied to your clipboard.
:::note
@@ -81,11 +93,9 @@ Repeat steps 7-10 if you want to dedicate specific hosts to specific node roles.
:::
11. When you finish running the command(s) on your Linux host(s), click **Done**.
**Result:**
Your cluster is created and assigned a state of **Provisioning**. Rancher is standing up your cluster.
The cluster is created and transitions to the **Updating** state while Rancher initializes and provisions cluster components.
You can access your cluster after its state is updated to **Active**.


@@ -0,0 +1,122 @@
---
title: IPv4/IPv6 Dual-stack
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/dual-stack/"/>
</head>
Kubernetes supports IPv4-only, IPv6-only, and dual-stack networking configurations.
For more details, refer to the official [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/dual-stack/).
## Installing Rancher on IPv6-Only or Dual-Stack Clusters
Rancher can run on clusters using:
- IPv4-only
- IPv6-only
- Dual-stack (IPv4 + IPv6)
When you install Rancher on an **IPv6-only cluster**, it can communicate externally **only over IPv6**. This means it can provision:
- IPv6-only clusters
- Dual-stack clusters
_(IPv4-only downstream clusters are not possible in this case)_
When you install Rancher on a **dual-stack cluster**, it can communicate over both IPv4 and IPv6, and can therefore provision:
- IPv4-only clusters
- IPv6-only clusters
- Dual-stack clusters
For installation steps, see the guide: **[Installing and Upgrading Rancher](../getting-started/installation-and-upgrade/installation-and-upgrade.md)**.
### Requirement for the Rancher Server URL
When provisioning IPv6-only downstream clusters, the **Rancher Server URL must be reachable over IPv6** because downstream nodes connect back to the Rancher server using IPv6.
## Provisioning IPv6-Only or Dual-Stack Clusters
You can provision RKE2 and K3s **Node driver** (machine pools) or **Custom cluster** (existing hosts) clusters using IPv4-only, IPv6-only, or dual-stack networking.
### Network Configuration
To enable IPv6-only or dual-stack networking, you must configure:
- Cluster CIDR
- Service CIDR
- Stack Preference
Configuration references:
- [K3s Cluster Configuration Reference](cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md)
- [RKE2 Cluster Configuration Reference](cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md)
### Support for Windows
Kubernetes on Windows:
| Feature | Support Status |
|---------------------|-------------------------------|
| IPv6-only clusters | Not supported |
| Dual-stack clusters | Supported |
| Services | Limited to a single IP family |
For more information, see the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/services-networking/dual-stack/#windows-support).
K3s does **not** support Windows ([FAQ](https://docs.k3s.io/faq#does-k3s-support-windows)).
RKE2 supports Windows, but requires using either `Calico` or `Flannel` as the CNI.
Note that Windows installations of RKE2 do not support dual-stack clusters using BGP.
For more details, see [RKE2 Network Options](https://docs.rke2.io/networking/basic_network_options).
### Provisioning Node Driver Clusters
Rancher currently supports assigning IPv6 addresses in **node driver** clusters with:
- [Amazon EC2](../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md)
- [DigitalOcean](../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-digitalocean-cluster.md)
Support for additional providers will be introduced in future releases.
:::note DigitalOcean Limitation
Creating an **IPv6-only cluster** using the DigitalOcean node driver is currently **not supported**.
For more details, please see [rancher/rancher#52523](https://github.com/rancher/rancher/issues/52523#issuecomment-3457803572).
:::
#### Infrastructure Requirements
Cluster nodes must meet the requirements listed in the [Node Requirements for Rancher Managed Clusters](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md).
Machine pool configuration guides:
- [Amazon EC2 Configuration](cluster-configuration/downstream-cluster-configuration/machine-configuration/amazon-ec2.md)
- [DigitalOcean Configuration](cluster-configuration/downstream-cluster-configuration/machine-configuration/digitalocean.md)
### Provisioning Custom Clusters
To provision on your own nodes, follow the instructions in [Provision Kubernetes on Existing Nodes](cluster-configuration/rancher-server-configuration/use-existing-nodes/use-existing-nodes.md).
:::note
- **Node Public IP** and **Node Private IP** fields accept IPv4, IPv6, or both (comma-separated).
> Example: `10.0.0.5,2001:db8::1`
- In **IPv6-only** and **dual-stack** clusters, specify the node's **IPv6 address** as the **Private IP**.
:::
#### Infrastructure Requirements
Infrastructure requirements are the same as above for node-driver clusters.
## Other Limitations
### GitHub.com
GitHub.com does **not** support IPv6. As a result:
- Any application repositories (`ClusterRepo.catalog.cattle.io/v1` CR) hosted on GitHub.com will **not be reachable** from IPv6-only clusters.
- Similarly, any **non-builtin node drivers** hosted on GitHub.com will also **not be accessible** in IPv6-only environments.


@@ -20,10 +20,7 @@ Each Rancher version is designed to be compatible with a single version of the w
| Rancher Version | Webhook Version | Availability in Prime | Availability in Community |
|-----------------|-----------------|-----------------------|---------------------------|
| v2.12.3 | v0.8.3 | &check; | &check; |
| v2.12.2 | v0.8.2 | &check; | &check; |
| v2.12.1 | v0.8.1 | &check; | &check; |
| v2.12.0 | v0.8.0 | &cross; | &check; |
| v2.13.0 | v0.9.0 | &cross; | &check; |
## Why Do We Need It?


@@ -184,12 +184,12 @@ module.exports = {
current: {
label: "Latest",
},
'2.13': {
label: 'v2.13 (Preview)',
path: 'v2.13',
banner: 'unreleased'
"2.13": {
label: "v2.13",
path: "v2.13",
banner: 'none'
},
'2.12': {
"2.12": {
label: "v2.12",
path: "v2.12",
banner: "none"
@@ -256,6 +256,11 @@ module.exports = {
{
// Plugin Options for loading OpenAPI files
specs: [
{
id: "rancher-api-v2-13",
spec: "openapi/swagger-v2.13.json",
// route: '/api/',
},
{
id: "rancher-api-v2-12",
spec: "openapi/swagger-v2.12.json",


@@ -14,4 +14,4 @@ title: API Reference
import ApiDocMdx from '@theme/ApiDocMdx';
<ApiDocMdx id="rancher-api-v2-12" />
<ApiDocMdx id="rancher-api-v2-13" />


@@ -63,6 +63,7 @@ title: API Tokens
| 设置 | 描述 |
| ------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes) | 用户认证会话令牌的 TTL单位分钟。 |
| [`auth-user-session-idle-ttl-minutes`](#auth-user-session-idle-ttl-minutes) | TTL in minutes on a user auth session token, without user activity. |
| [`kubeconfig-default-token-TTL-minutes`](#kubeconfig-default-token-ttl-minutes) | 默认 TTL应用于所有 kubeconfig 令牌(除了[由 Rancher CLI 生成的令牌](#在生成的-kubeconfig-中禁用令牌))。**此设置从 2.6.6 版本开始引入。** |
| [`kubeconfig-token-ttl-minutes`](#kubeconfig-token-ttl-minutes) | 在 CLI 中生成的令牌 TTL。**自 2.6.6 起已弃用,并将在 2.8.0 中删除**。请知悉,`kubeconfig-default-token-TTL-minutes` 将用于所有 kubeconfig 令牌。 |
| [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes) | 除了由 [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes) 控制的令牌外,所有令牌的最大 TTL。 |
@@ -71,6 +72,11 @@ title: API 令牌
### auth-user-session-ttl-minutes
存活时间TTL单位分钟用于确定用户身份验证会话令牌的到期时间。过期后用户将需要登录并获取新令牌。此设置不受 [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes) 的影响。会话令牌是在用户登录 Rancher 时创建的。
### auth-user-session-idle-ttl-minutes
Time to live (TTL), in minutes, for login session tokens without user activity.
By default, [`auth-user-session-idle-ttl-minutes`](#auth-user-session-idle-ttl-minutes) is set to the same value as [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes) (for backward compatibility). It must never exceed the value of `auth-user-session-ttl-minutes`.
### kubeconfig-default-token-TTL-minutes
存活时间TTL单位分钟用于确定 kubeconfig 令牌的到期时间。令牌过期后API 将拒绝令牌。此设置的值不能大于 [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes) 的值。此设置适用于在请求的 kubeconfig 文件中生成的令牌,不包括[由 Rancher CLI 生成的](#在生成的-kubeconfig-中禁用令牌)令牌。
**此设置从 2.6.6 版本开始引入**

View File

@@ -16,10 +16,7 @@ Rancher 将在 GitHub 上发布的 Rancher 的[发版说明](https://github.com/
| Patch 版本 | 发布时间 |
| ----------------------------------------------------------------- | ------------------ |
| [2.12.3](https://github.com/rancher/rancher/releases/tag/v2.12.3) | 2025 年 10 月 23 日 |
| [2.12.2](https://github.com/rancher/rancher/releases/tag/v2.12.2) | 2025 年 9 月 25 日 |
| [2.12.1](https://github.com/rancher/rancher/releases/tag/v2.12.1) | 2025 年 8 月 28 日 |
| [2.12.0](https://github.com/rancher/rancher/releases/tag/v2.12.0) | 2025 年 7 月 30 日 |
| [2.13.0](https://github.com/rancher/rancher/releases/tag/v2.13.0) | 2025 年 11 月 25 日 |
## 当一个功能被标记为弃用我可以得到什么样的预期?

View File

@@ -14,11 +14,8 @@ title: 安装 Adapter
:::
| Rancher 版本 | Adapter 版本 |
|-----------------|:----------------:|
| v2.12.3 | 107.0.0+up7.0.0 |
| v2.12.2 | 107.0.0+up7.0.0 |
| v2.12.1 | 107.0.0+up7.0.0 |
| v2.12.0 | 107.0.0+up7.0.0 |
| v2.13.0 | 108.0.0+up8.0.0 |
## 1. 获取对 Local 集群的访问权限

View File

@@ -20,10 +20,7 @@ Rancher 将 Rancher-Webhook 作为单独的 deployment 和服务部署在 local
| Rancher Version | Webhook Version | Availability in Prime | Availability in Community |
|-----------------|-----------------|-----------------------|---------------------------|
| v2.12.3 | v0.8.3 | &check; | &check; |
| v2.12.2 | v0.8.2 | &check; | &check; |
| v2.12.1 | v0.8.1 | &check; | &check; |
| v2.12.0 | v0.8.0 | &cross; | &check; |
| v2.13.0 | v0.9.0 | &cross; | &check; |
## 为什么我们需要它?

View File

@@ -14,4 +14,4 @@ title: API 参考
import ApiDocMdx from '@theme/ApiDocMdx';
<ApiDocMdx id="rancher-api-v2-12" />
<ApiDocMdx id="rancher-api-v2-13" />

View File

@@ -63,6 +63,7 @@ title: API 令牌
| 设置 | 描述 |
| ------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes) | 用户认证会话令牌的 TTL单位分钟。 |
| [`auth-user-session-idle-ttl-minutes`](#auth-user-session-idle-ttl-minutes) | TTL in minutes on a user auth session token, without user activity. |
| [`kubeconfig-default-token-TTL-minutes`](#kubeconfig-default-token-ttl-minutes) | 默认 TTL应用于所有 kubeconfig 令牌(除了[由 Rancher CLI 生成的令牌](#在生成的-kubeconfig-中禁用令牌))。**此设置从 2.6.6 版本开始引入。** |
| [`kubeconfig-token-ttl-minutes`](#kubeconfig-token-ttl-minutes) | 在 CLI 中生成的令牌 TTL。**自 2.6.6 起已弃用,并将在 2.8.0 中删除**。请知悉,`kubeconfig-default-token-TTL-minutes` 将用于所有 kubeconfig 令牌。 |
| [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes) | 除了由 [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes) 控制的令牌外,所有令牌的最大 TTL。 |
@@ -71,6 +72,11 @@ title: API 令牌
### auth-user-session-ttl-minutes
存活时间TTL单位分钟用于确定用户身份验证会话令牌的到期时间。过期后用户将需要登录并获取新令牌。此设置不受 [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes) 的影响。会话令牌是在用户登录 Rancher 时创建的。
### auth-user-session-idle-ttl-minutes
Time to live (TTL), in minutes, for login session tokens without user activity.
By default, [`auth-user-session-idle-ttl-minutes`](#auth-user-session-idle-ttl-minutes) is set to the same value as [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes) (for backward compatibility). It must never exceed the value of `auth-user-session-ttl-minutes`.
### kubeconfig-default-token-TTL-minutes
存活时间TTL单位分钟用于确定 kubeconfig 令牌的到期时间。令牌过期后API 将拒绝令牌。此设置的值不能大于 [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes) 的值。此设置适用于在请求的 kubeconfig 文件中生成的令牌,不包括[由 Rancher CLI 生成的](#在生成的-kubeconfig-中禁用令牌)令牌。
**此设置从 2.6.6 版本开始引入**

View File

@@ -16,10 +16,7 @@ Rancher 将在 GitHub 上发布的 Rancher 的[发版说明](https://github.com/
| Patch 版本 | 发布时间 |
| ----------------------------------------------------------------- | ------------------ |
| [2.12.3](https://github.com/rancher/rancher/releases/tag/v2.12.3) | 2025 年 10 月 23 日 |
| [2.12.2](https://github.com/rancher/rancher/releases/tag/v2.12.2) | 2025 年 9 月 25 日 |
| [2.12.1](https://github.com/rancher/rancher/releases/tag/v2.12.1) | 2025 年 8 月 28 日 |
| [2.12.0](https://github.com/rancher/rancher/releases/tag/v2.12.0) | 2025 年 7 月 30 日 |
| [2.13.0](https://github.com/rancher/rancher/releases/tag/v2.13.0) | 2025 年 11 月 25 日 |
## 当一个功能被标记为弃用我可以得到什么样的预期?

View File

@@ -14,11 +14,8 @@ title: 安装 Adapter
:::
| Rancher 版本 | Adapter 版本 |
|-----------------|:----------------:|
| v2.12.3 | 107.0.0+up7.0.0 |
| v2.12.2 | 107.0.0+up7.0.0 |
| v2.12.1 | 107.0.0+up7.0.0 |
| v2.12.0 | 107.0.0+up7.0.0 |
| v2.13.0 | 108.0.0+up8.0.0 |
## 1. 获取对 Local 集群的访问权限

View File

@@ -20,10 +20,7 @@ Rancher 将 Rancher-Webhook 作为单独的 deployment 和服务部署在 local
| Rancher Version | Webhook Version | Availability in Prime | Availability in Community |
|-----------------|-----------------|-----------------------|---------------------------|
| v2.12.3 | v0.8.3 | &check; | &check; |
| v2.12.2 | v0.8.2 | &check; | &check; |
| v2.12.1 | v0.8.1 | &check; | &check; |
| v2.12.0 | v0.8.0 | &cross; | &check; |
| v2.13.0 | v0.9.0 | &cross; | &check; |
## 为什么我们需要它?

openapi/swagger-v2.13.json (new file, 9569 lines)

File diff suppressed because it is too large

View File

@@ -4,7 +4,7 @@ The following table summarizes different GitHub metrics to give you an idea of e
| Provider | Project | Stars | Forks | Contributors |
| ---- | ---- | ---- | ---- | ---- |
| Canal | https://github.com/projectcalico/canal | 721 | 98 | 20 |
| Flannel | https://github.com/flannel-io/flannel | 9.3k | 2.9k | 244 |
| Calico | https://github.com/projectcalico/calico | 6.9k | 1.5k | 392 |
| Weave | https://github.com/weaveworks/weave | 6.6k | 680 | 84 |
| Cilium | https://github.com/cilium/cilium | 22.9k | 3.4k | 1,012 |

View File

@@ -0,0 +1,39 @@
Rancher supports OIDC Single Logout (SLO). Options include logging out of the Rancher application only, logging out of Rancher and all applications registered with the external authentication provider, or prompting the user to choose between these options.
### Prerequisites
Before configuring OIDC SLO, ensure the following is set up on your IdP:
- **SLO Support**: The **Log Out behavior** configuration section only appears if your OIDC IdP supports OIDC SLO.
- **Post-Logout Redirect URI**: Your Rancher Server URL must be configured as an authorized post-logout redirect URI in your IdP's OIDC client settings. This URL is used by the IdP to redirect a user back to Rancher after a successful external logout.
### OIDC SLO Configuration
Configure the SLO settings when setting up or editing your OIDC authentication provider.
1. Sign in to Rancher as a standard user or an administrator.
1. In the top left corner, select **☰** > **Users & Authentication**.
1. In the left navigation menu, select **Auth Provider**.
1. Under the section **Log Out behavior**, choose the appropriate SLO setting as described below:
| Setting | Description |
| ------------------------- | ----------------------------------------------------------------------------- |
| Log out of Rancher and not authentication provider | Logs out of the Rancher application only. The session with the external authentication provider remains active. |
| Log out of Rancher and authentication provider (includes all other applications registered with authentication provider) | Logs out of Rancher, the external authentication provider, and any registered applications linked to the provider. |
| Allow the user to choose one of the above in an additional log out step | Presents the user with a choice between the two logout behaviors at logout time. |
1. If you choose to log out of your IdP, provide an [**End Session Endpoint**](#how-to-get-the-end-session-endpoint). Rancher uses this URL to initiate the external logout.
#### How to get the End Session Endpoint
The `end_session_endpoint` is one of the URLs published in the IdP's OIDC discovery document, a standardized JSON object containing the IdP's metadata, which is retrieved from the OIDC Discovery URL. To get the `end_session_endpoint` from the OIDC Discovery URL, follow these steps:
1. Obtain the Discovery URL by appending the IdP Issuer URL with the well-known path (`.well-known/openid-configuration`).
1. Send an HTTP `GET` request to the Discovery URL.
1. In the JSON object, look for the key named `end_session_endpoint` and retrieve the URL.
You can also use a `curl` command to retrieve `end_session_endpoint`:
```sh
curl -s <ISSUER_URL>/.well-known/openid-configuration | jq -r '.end_session_endpoint'
```
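The Discovery URL construction in step 1 can also be scripted. A small sketch (the issuer URL below is a hypothetical example):

```shell
# Build the OIDC Discovery URL by appending the well-known path to the issuer,
# stripping any trailing slash first so the path isn't doubled.
discovery_url() {
  printf '%s/.well-known/openid-configuration\n' "${1%/}"
}

discovery_url "https://idp.example.com/"
```

Fetching the resulting URL and filtering with `jq`, as shown above, then yields the `end_session_endpoint`.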

View File

@@ -308,6 +308,12 @@
<p>
<b>Related terms:</b> <i>Downstream cluster, Hosted cluster, Imported cluster, Managed cluster, Registered cluster</i>
</p>
<dt>
User
</dt>
<dd>
A Rancher resource <code>users.management.cattle.io</code> that defines a user within Rancher.
</dd>
</dl>
## W
@@ -319,4 +325,4 @@
<dd>
Objects that set deployment rules for pods. Based on these rules, Kubernetes performs the deployment and updates the workload with the current state of the application. Workloads let you define the rules for application scheduling, scaling, and upgrade.
</dd>
</dl>

View File

@@ -228,6 +228,7 @@ const sidebars = {
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-freeipa",
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad",
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-github",
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-github-app",
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-keycloak-oidc",
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-keycloak-saml",
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-pingidentity",
@@ -771,7 +772,7 @@ const sidebars = {
"how-to-guides/advanced-user-guides/enable-experimental-features/unsupported-storage-drivers",
"how-to-guides/advanced-user-guides/enable-experimental-features/istio-traffic-management-features",
"how-to-guides/advanced-user-guides/enable-experimental-features/continuous-delivery",
"how-to-guides/advanced-user-guides/enable-experimental-features/role-template-aggregation",
],
},
"how-to-guides/advanced-user-guides/open-ports-with-firewalld",
@@ -882,9 +883,7 @@ const sidebars = {
type: "doc",
id: "reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/use-existing-nodes",
},
items: [],
},
"reference-guides/cluster-configuration/rancher-server-configuration/sync-clusters",
],
@@ -1013,6 +1012,8 @@ const sidebars = {
"reference-guides/system-tools",
"reference-guides/dual-stack",
"reference-guides/rke1-template-example-yaml",
"reference-guides/rancher-webhook",
{
@@ -1273,7 +1274,8 @@ const sidebars = {
label: "Example Workflows",
items: ["api/workflows/projects",
"api/workflows/kubeconfigs",
"api/workflows/tokens",
"api/workflows/users"],
},
"api/api-reference",
"api/api-tokens",

View File

@@ -5,6 +5,27 @@ title: Rancher Documentation Versions
<!-- releaseTask -->
### Current Versions
Here you can find links to supporting documentation for the currently released version of Rancher, v2.13, and its availability for [Rancher Prime](/v2.13/getting-started/quick-start-guides/deploy-rancher-manager/prime) and the Community version of Rancher:
<table>
<tr>
<th>Version</th>
<th>Documentation</th>
<th>Release Notes</th>
<th>Support Matrix</th>
<th>Prime</th>
<th>Community</th>
</tr>
<tr>
<td><b>v2.13.0</b></td>
<td><a href="https://ranchermanager.docs.rancher.com/v2.13">Documentation</a></td>
<td><a href="https://github.com/rancher/rancher/releases/tag/v2.13.0">Release Notes</a></td>
<td><center>N/A</center></td>
<td><center>N/A</center></td>
<td><center>&#10003;</center></td>
</tr>
</table>
Here you can find links to supporting documentation for the currently released version of Rancher, v2.12, and its availability for [Rancher Prime](/v2.12/getting-started/quick-start-guides/deploy-rancher-manager/prime) and the Community version of Rancher:
<table>

View File

@@ -12,6 +12,7 @@ import DeprecationWeave from '/shared-files/_deprecation-weave.md';
import DeprecationHelm2 from '/shared-files/_deprecation-helm2.md';
import DockerSupportWarning from '/shared-files/_docker-support-warning.md';
import ConfigureSLO from '/shared-files/_configure-slo.md';
import ConfigureSLOOidc from '/shared-files/_configure-slo-oidc.md';
import EOLRKE1Warning from '/shared-files/_eol-rke1-warning.md';
import PermissionsWarning from '/shared-files/_permissions-warning.md';
@@ -27,6 +28,7 @@ export default {
CNIPopularityTable,
ConfigureSLO,
ConfigureSLOOidc,
DeprecationOPAGatekeeper,
DeprecationWeave,
DeprecationHelm2,

View File

@@ -15,4 +15,4 @@ At this time, not all Rancher resources are available through the Rancher Kubern
import ApiDocMdx from '@theme/ApiDocMdx';
<ApiDocMdx id="rancher-api-v2-12" />
<ApiDocMdx id="rancher-api-v2-13" />

View File

@@ -60,17 +60,23 @@ This feature affects all tokens which include, but are not limited to, the follo
These global settings affect Rancher token behavior.
| Setting | Description |
| ------- | ----------- |
| [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes) | TTL in minutes on a user auth session token. |
| [`auth-user-session-idle-ttl-minutes`](#auth-user-session-idle-ttl-minutes) | TTL in minutes on a user auth session token, without user activity. |
| [`kubeconfig-default-token-ttl-minutes`](#kubeconfig-default-token-ttl-minutes) | Default TTL applied to all kubeconfig tokens except for tokens [generated by Rancher CLI](#disable-tokens-in-generated-kubeconfigs). |
| [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes) | Max TTL for all tokens except those controlled by [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes). |
| [`kubeconfig-generate-token`](#kubeconfig-generate-token) | If true, automatically generate tokens when a user downloads a kubeconfig. |
### auth-user-session-ttl-minutes
Time to live (TTL) duration in minutes, used to determine when a user auth session token expires. When expired, the user must log in and obtain a new token. This setting is not affected by [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes). Session tokens are created when a user logs into Rancher.
### auth-user-session-idle-ttl-minutes
Time to live (TTL), in minutes, for login session tokens without user activity.
By default, [`auth-user-session-idle-ttl-minutes`](#auth-user-session-idle-ttl-minutes) is set to the same value as [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes) (for backward compatibility). It must never exceed the value of `auth-user-session-ttl-minutes`.
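The fallback and upper-bound rules can be expressed as a small helper. This is only a sketch of the documented behavior, not Rancher's implementation; in particular, clamping an over-large value (rather than rejecting it) is an assumption:

```shell
# effective_idle_ttl IDLE SESSION
# An empty IDLE falls back to SESSION; IDLE is never allowed to exceed SESSION.
effective_idle_ttl() {
  idle=$1
  session=$2
  if [ -z "$idle" ]; then idle=$session; fi
  if [ "$idle" -gt "$session" ]; then idle=$session; fi
  echo "$idle"
}
```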
### kubeconfig-default-token-ttl-minutes
Time to live (TTL) duration in minutes, used to determine when a kubeconfig token expires. When the token is expired, the API rejects the token. This setting can't be larger than [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes). This setting applies to tokens generated in a requested kubeconfig file, except for tokens [generated by Rancher CLI](#disable-tokens-in-generated-kubeconfigs). As of Rancher v2.8, the default duration is `43200`, which means that tokens expire in 30 days.
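As a quick sanity check of that default, 43200 minutes works out to 30 days:

```shell
# Convert a TTL in minutes to whole days: 43200 / 60 / 24 = 30.
minutes_to_days() {
  echo $(( $1 / 60 / 24 ))
}

minutes_to_days 43200   # 30
```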

View File

@@ -20,14 +20,6 @@ To get a description of the fields and structure of the Kubeconfig resource, run
kubectl explain kubeconfigs.ext.cattle.io
```
## Feature Flag
The Kubeconfigs Public API is available since Rancher v2.12.0 and is enabled by default. It can be disabled by setting the `ext-kubeconfigs` feature flag to `false`.
```sh
kubectl patch feature ext-kubeconfigs -p '{"spec":{"value":false}}'
```
## Creating a Kubeconfig
Only a **valid and active** Rancher user can create a Kubeconfig. For example, trying to create a Kubeconfig using a `system:admin` service account will lead to an error:

View File

@@ -20,20 +20,14 @@ To get a description of the fields and structure of the Token resource, run:
kubectl explain tokens.ext.cattle.io
```
## Feature Flag
The Tokens Public API is available for Rancher v2.12.0 and later, and is enabled by default. You can disable the Tokens Public API by setting the `ext-tokens` feature flag to `false` as shown in the example `kubectl` command below:
```sh
kubectl patch feature ext-tokens -p '{"spec":{"value":false}}'
```
## Creating a Token
:::caution
The Token value is only returned once in the `status.value` field.
:::
Since Rancher v2.13.0, the `status.bearerToken` field contains a fully formed, ready-to-use Bearer token that can be used to authenticate to the [Rancher API](../v3-rancher-api-guide.md).
Only a **valid and active** Rancher user can create a Token. Otherwise, you will get an error displayed (`Error from server (Forbidden)...`) when attempting to create a Token.
```bash

View File

@@ -0,0 +1,186 @@
---
title: Users
---
## User Resource
The `User` resource (`users.management.cattle.io`) represents a user account in Rancher.
To get a description of the fields and structure of the `User` resource, run:
```sh
kubectl explain users.management.cattle.io
```
## Creating a User
Creating a local user is a two-step process: you must create the `User` resource, then provide a password via a Kubernetes `Secret`.
Only a user with sufficient permissions can create a `User` resource.
```bash
kubectl create -f -<<EOF
apiVersion: management.cattle.io/v3
kind: User
metadata:
  name: testuser
displayName: "Test User"
username: "testuser"
EOF
```
The user's password must be provided in a `Secret` object within the `cattle-local-user-passwords` namespace. The Rancher webhook will automatically hash the password and update the `Secret`.
:::important
The `Secret` must have the same name as the `metadata.name` (and username) of the `User` resource.
:::
```bash
kubectl create -f -<<EOF
apiVersion: v1
kind: Secret
metadata:
  name: testuser
  namespace: cattle-local-user-passwords
type: Opaque
stringData:
  password: Pass1234567!
EOF
```
After the plaintext password is submitted, the Rancher-Webhook automatically hashes it, replacing the content of the `Secret`, ensuring that the plaintext password is never stored:
```yaml
apiVersion: v1
data:
  password: 1c1Y4CdjlehGWFz26F414x2qoj4gch5L5OXsx35MAa8=
  salt: m8Co+CfMDo5XwVl0FqYzGcRIOTgRrwFSqW8yurh5DcE=
kind: Secret
metadata:
  annotations:
    cattle.io/password-hash: pbkdf2sha3512
  name: testuser
  namespace: cattle-local-user-passwords
  ownerReferences:
  - apiVersion: management.cattle.io/v3
    kind: User
    name: testuser
    uid: 663ffb4f-8178-46c8-85a3-337f4d5cbc2e
  uid: bade9f0a-b06f-4a77-9a39-4284dc2349c5
type: Opaque
```
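The `password` and `salt` values in the hashed `Secret` are plain base64. As a quick check, each of the sample values above decodes to 32 raw bytes:

```shell
# Decode a stored base64 value and count the raw bytes;
# both the sample hash and salt above are 32 bytes.
b64_len() {
  printf '%s' "$1" | base64 -d | wc -c | tr -d ' '
}

b64_len '1c1Y4CdjlehGWFz26F414x2qoj4gch5L5OXsx35MAa8='   # 32
```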
## Updating User's Password
To change a user's password, use the `PasswordChangeRequest` resource, which handles secure password updates.
```bash
kubectl create -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: PasswordChangeRequest
spec:
  userID: "testuser"
  currentPassword: "Pass1234567!"
  newPassword: "NewPass1234567!"
EOF
```
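Rancher enforces a minimum password length for local users. A hypothetical client-side pre-check before submitting a `PasswordChangeRequest` (the 12-character default and the `password-min-length` setting name are assumptions to verify against your Rancher version):

```shell
# Reject candidate passwords shorter than the minimum length
# (12 is assumed as the default; adjust if your installation overrides it).
valid_password() {
  [ "${#1}" -ge 12 ]
}

valid_password "NewPass1234567!" && echo ok
```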
## Listing Users
List all `User` resources in the cluster:
```sh
kubectl get users
NAME AGE
testuser 3m54s
user-4n5ws 12m
```
## Viewing a User
View a specific `User` resource by name:
```sh
kubectl get user testuser
NAME AGE
testuser 3m54s
```
## Deleting a User
Deleting a user will automatically delete the corresponding password `Secret`.
```sh
kubectl delete user testuser
user.management.cattle.io "testuser" deleted
```
## Get a Current User's Information
A client uses the `SelfUser` resource to retrieve information about the currently authenticated user without knowing their ID. The user ID is returned in the `.status.userID` field.
```bash
kubectl create -o jsonpath='{.status.userID}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: SelfUser
EOF
testuser
```
## Refreshing a User's Group Membership
Updates to user group memberships are triggered by the `GroupMembershipRefreshRequest` resource.
:::note
Group membership is only supported for external authentication providers.
:::
### For a Single User
```bash
kubectl create -o jsonpath='{.status}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: GroupMembershipRefreshRequest
spec:
  userId: testuser
EOF
{
"conditions": [
{
"lastTransitionTime": "2025-11-10T12:01:03Z",
"message": "",
"reason": "",
"status": "True",
"type": "UserRefreshInitiated"
}
],
"summary": "Completed"
}
```
### For All Users
```bash
kubectl create -o jsonpath='{.status}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: GroupMembershipRefreshRequest
spec:
  userId: "*"
EOF
{
"conditions": [
{
"lastTransitionTime": "2025-11-10T12:01:59Z",
"message": "",
"reason": "",
"status": "True",
"type": "UserRefreshInitiated"
}
],
"summary": "Completed"
}
```

View File

@@ -16,10 +16,7 @@ Rancher will publish deprecated features as part of the [release notes](https://
| Patch Version | Release Date |
|---------------|---------------|
| [2.12.3](https://github.com/rancher/rancher/releases/tag/v2.12.3) | October 23, 2025 |
| [2.12.2](https://github.com/rancher/rancher/releases/tag/v2.12.2) | September 25, 2025 |
| [2.12.1](https://github.com/rancher/rancher/releases/tag/v2.12.1) | August 28, 2025 |
| [2.12.0](https://github.com/rancher/rancher/releases/tag/v2.12.0) | July 30, 2025 |
| [2.13.0](https://github.com/rancher/rancher/releases/tag/v2.13.0) | November 25, 2025 |
## What can I expect when a feature is marked for deprecation?

View File

@@ -18,7 +18,7 @@ Some feature flags require a restart of the Rancher container. Features that req
The following is a list of feature flags available in Rancher. If you've upgraded from a previous Rancher version, you may see additional flags in the Rancher UI, such as `proxy` or `dashboard` (both [discontinued](https://github.com/rancher/rancher-docs/tree/main/archived_docs/en/version-2.5/reference-guides/installation-references/feature-flags.md)):
- `aggregated-roletemplates`: Use cluster role aggregation architecture for RoleTemplates, ProjectRoleTemplateBindings, and ClusterRoleTemplateBindings. See [RoleTemplate Aggregation](../../../how-to-guides/advanced-user-guides/enable-experimental-features/role-template-aggregation.md) for more information.
- `clean-stale-secrets`: Removes stale secrets from the `cattle-impersonation-system` namespace. This slowly cleans up old secrets which are no longer being used by the impersonation system.
- `continuous-delivery`: Allows Fleet GitOps to be disabled separately from Fleet. See [Continuous Delivery](../../../how-to-guides/advanced-user-guides/enable-experimental-features/continuous-delivery.md) for more information.
- `fleet`: The Rancher provisioning framework in v2.6 and later requires Fleet. The flag will be automatically enabled when you upgrade, even if you disabled this flag in an earlier version of Rancher. See [Continuous Delivery with Fleet](../../../integrations-in-rancher/fleet/fleet.md) for more information.
@@ -61,7 +61,7 @@ The following table shows the availability and default values for some feature f
| Feature Flag Name | Default Value | Status | Available As Of | Additional Information |
| ----------------------------- | ------------- | ------------ | --------------- | ---------------------- |
| `aggregated-roletemplates` | `Disabled` | Experimental | v2.11.0 | This flag value is locked on install and can't be changed. |
| `clean-stale-secrets` | `Active` | GA | v2.10.2 | |
| `continuous-delivery` | `Active` | GA | v2.6.0 | |
| `external-rules` | v2.7.14: `Disabled`, v2.8.5: `Active` | Removed | v2.7.14, v2.8.5 | This flag affected [external `RoleTemplate` behavior](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#external-roletemplate-behavior). It is removed in Rancher v2.9.0 and later as the behavior is enabled by default. |

View File

@@ -6,4 +6,4 @@ title: Installation References
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/installation-references"/>
</head>
Please see the following reference guides for other installation resources: [Rancher Helm chart options](helm-chart-options.md), [TLS settings](tls-settings.md), and [feature flags](feature-flags.md).

View File

@@ -25,10 +25,16 @@ Rancher needs to be installed on a supported Kubernetes version. Consult the [Ra
Regardless of version and distribution, the Kubernetes cluster must have the aggregation API layer properly configured to support the [extension API](../../../api/extension-apiserver.md) used by Rancher.
### Install Rancher on a Hardened Kubernetes Cluster
If you install Rancher on a hardened Kubernetes cluster, check the [Exempting Required Rancher Namespaces](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md#exempting-required-rancher-namespaces) section for detailed requirements.
### Install Rancher on an IPv6-only or Dual-stack Kubernetes Cluster
You can deploy Rancher on an IPv6-only or dual-stack Kubernetes cluster.
For details on Rancher's IPv6-only and dual-stack support, see the [IPv4/IPv6 Dual-stack](../../../reference-guides/dual-stack.md) page.
## Operating Systems and Container Runtime Requirements
All supported operating systems are 64-bit x86. Rancher should work with any modern Linux distribution.

View File

@@ -238,21 +238,23 @@ In these cases, you have to explicitly allow this traffic in your host firewall,
When using the [AWS EC2 node driver](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md) to provision cluster nodes in Rancher, you can choose to let Rancher create a security group called `rancher-nodes`. The following rules are automatically added to this security group.
| Type | Protocol | Port Range | Source/Destination | Rule Type |
|-----------------|:--------:|:-----------:|------------------------|:---------:|
| SSH | TCP | 22 | 0.0.0.0/0 and ::/0 | Inbound |
| HTTP | TCP | 80 | 0.0.0.0/0 and ::/0 | Inbound |
| Custom TCP Rule | TCP | 443 | 0.0.0.0/0 and ::/0 | Inbound |
| Custom TCP Rule | TCP | 2376 | 0.0.0.0/0 and ::/0 | Inbound |
| Custom TCP Rule | TCP | 6443 | 0.0.0.0/0 and ::/0 | Inbound |
| Custom TCP Rule | TCP | 179 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 9345 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 2379-2380 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 10250-10252 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 10256 | sg-xxx (rancher-nodes) | Inbound |
| Custom UDP Rule | UDP | 4789 | sg-xxx (rancher-nodes) | Inbound |
| Custom UDP Rule | UDP | 8472 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 30000-32767 | 0.0.0.0/0 and ::/0 | Inbound |
| Custom UDP Rule | UDP | 30000-32767 | 0.0.0.0/0 and ::/0 | Inbound |
| All traffic | All | All | 0.0.0.0/0 and ::/0 | Outbound |
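If you manage your own security group instead of letting Rancher create one, the inbound rules above can be replayed with the AWS CLI. The sketch below only *prints* the `aws ec2 authorize-security-group-ingress` calls rather than running them; it covers the IPv4 rules only, `SG_ID` is a placeholder, and `self` marks node-to-node rules that reference the group itself:

```shell
SG_ID="sg-0123456789abcdef0"   # placeholder: your security group ID
count=0
while read -r proto port src; do
  if [ "$src" = "self" ]; then
    src_arg="--source-group $SG_ID"   # node-to-node rule references the group itself
  else
    src_arg="--cidr $src"
  fi
  # Print the command instead of executing it, so the list can be reviewed first.
  echo "aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol $proto --port $port $src_arg"
  count=$((count + 1))
done <<'EOF'
tcp 22 0.0.0.0/0
tcp 80 0.0.0.0/0
tcp 443 0.0.0.0/0
tcp 2376 0.0.0.0/0
tcp 6443 0.0.0.0/0
tcp 2379-2380 self
tcp 10250-10252 self
tcp 10256 self
tcp 30000-32767 0.0.0.0/0
udp 4789 self
udp 8472 self
udp 30000-32767 0.0.0.0/0
EOF
```

Pipe the output to `sh` only after reviewing it; the AWS CLI accepts port ranges such as `2379-2380` directly in `--port`.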
### Opening SUSE Linux Ports


@@ -1,19 +0,0 @@
---
title: ClusterRole Aggregation
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features/cluster-role-aggregation"/>
</head>
:::caution
ClusterRole aggregation is a highly experimental feature that changes the RBAC architecture used for RoleTemplates, ClusterRoleTemplateBindings and ProjectRoleTemplateBindings. **It is not supported for production environments**. This feature is meant exclusively for internal testing in v2.11 and v2.12. It is expected to be available as a beta for users in v2.13.
:::
ClusterRole aggregation implements RoleTemplates, ClusterRoleTemplateBindings and ProjectRoleTemplateBindings using the Kubernetes feature [Aggregated ClusterRoles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles). The new architecture results in a net reduction in RBAC objects (Roles, RoleBindings, ClusterRoles and ClusterRoleBindings) both in the Rancher cluster and the downstream clusters.
| Environment Variable Key | Default Value | Description |
| --- | --- | --- |
| `aggregated-roletemplates` | `false` | [Experimental] Make RoleTemplates use aggregation for generated RBAC roles. |
The value of this feature flag is locked on installation, which shows up in the UI as a lock symbol beside the feature flag. That means the feature can only be set on the first ever installation of Rancher. After that, attempting to modify the value will be denied.


@@ -0,0 +1,21 @@
---
title: RoleTemplate Aggregation
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features/role-template-aggregation"/>
</head>
:::caution
RoleTemplate aggregation is an experimental feature in v2.13 that changes the RBAC architecture used for RoleTemplates, ClusterRoleTemplateBindings and ProjectRoleTemplateBindings. **It is not supported for production environments**. Breaking changes may occur between v2.13 and v2.14.
:::
RoleTemplate aggregation implements RoleTemplates, ClusterRoleTemplateBindings and ProjectRoleTemplateBindings using the Kubernetes feature [Aggregated ClusterRoles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles). The new architecture results in a net reduction in RBAC objects (Roles, RoleBindings, ClusterRoles and ClusterRoleBindings) both in the Rancher cluster and the downstream clusters.
For more information on how the feature can improve scalability and performance, please see the [Rancher Blog post](https://www.suse.com/c/rancher_blog/fewer-bindings-more-power-ranchers-rbac-boost-for-enhanced-performance-and-scalability/).
| Environment Variable Key | Default Value | Description |
| --- | --- | --- |
| `aggregated-roletemplates` | `false` | [Beta] Make RoleTemplates use aggregation for generated RBAC roles. |
The value of this feature flag is locked on installation, which shows up in the UI as a lock symbol beside the feature flag. That means the feature can only be set on the first ever installation of Rancher. After that, attempting to modify the value will be denied.
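Because the flag is locked after the first installation, it must be set when Rancher is first installed. A minimal sketch using the Rancher Helm chart's `features` value (the hostname is a placeholder; verify the exact syntax against the feature-flags documentation for your Rancher version):

```shell
# Hypothetical first-time install with the beta flag enabled from the start.
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set 'features=aggregated-roletemplates=true'
```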


@@ -58,3 +58,7 @@ if the user has not yet logged in to Rancher. However, if the user has previousl
### You are not redirected to your authentication provider
If you fill out the **Configure an Amazon Cognito account** form and click on **Enable**, and you are not redirected to Amazon Cognito, verify your Amazon Cognito configuration.
## Configuring OIDC Single Logout (SLO)
<ConfigureSLOOidc />


@@ -363,3 +363,22 @@ Since the filter prevents Rancher from seeing that the user belongs to an exclud
>- If you don't wish to upgrade to v2.7.0+ after the Azure AD Graph API is retired, you'll need to either:
- Use the built-in Rancher auth or
- Use another third-party auth system and set that up in Rancher. Please see the [authentication docs](authentication-config.md) to learn how to configure other open authentication providers.
## Azure AD Roles Claims
Rancher supports the Roles claim provided by the Azure AD OIDC provider token, allowing for complete delegation of Role-Based Access Control (RBAC) to Azure AD. Previously, Rancher only processed the `Groups` claim to determine a user's `group` membership. This enhancement extends the logic to also include the Roles claim within the user's OIDC token.
By including the Roles claim, administrators can:
- Define specific high-level roles in Azure AD.
- Bind these Azure AD Roles directly to ProjectRoles or ClusterRoles within Rancher.
- Centralize and fully delegate access control decisions to the external OIDC provider.
For example, consider the following role structure in Azure AD:
| Azure AD Role Name | Members |
|--------------------|----------------|
| project-alpha-dev | User A, User C |
User A logs into Rancher via Azure AD. The OIDC token includes a Roles claim, [`project-alpha-dev`]. Rancher processes the token and adds `project-alpha-dev` to User A's internal list of `groups`/roles. An administrator has created a Project Role Binding that maps the Azure AD Role `project-alpha-dev` to the Project Role `Dev Member` for Project Alpha. User A is automatically granted the `Dev Member` role in Project Alpha.
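The flow above hinges on the `roles` array in the token payload. As a purely illustrative sketch (not Rancher's implementation, and the payload shown is fabricated for the example), here is how the Roles claim could be pulled out of a decoded token payload:

```shell
# Fake decoded OIDC token payload matching the example above.
payload='{"sub":"user-a","groups":["devs"],"roles":["project-alpha-dev"]}'
# Crude extraction of the roles array without jq: capture everything
# between "roles":[ and ], then strip the quotes.
roles=$(printf '%s' "$payload" | sed -n 's/.*"roles":\[\([^]]*\)\].*/\1/p' | tr -d '"')
echo "$roles"   # prints: project-alpha-dev
```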


@@ -7,60 +7,69 @@ description: Create an OpenID Connect (OIDC) client and configure Rancher to wor
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-generic-oidc"/>
</head>
Generic OpenID Connect (OIDC) allows users to sign in to Rancher using their credentials from their existing account at an OIDC Identity Provider (IdP). Rancher supports integration with the OIDC protocol and the SAML protocol. Both implementations are functionally equivalent when used with Rancher. The following instructions describe how to create an OIDC client and configure Rancher to work with your authentication provider. Users can then sign into Rancher using their login from the OIDC IdP.
## Prerequisites
In Rancher, Generic OIDC is disabled.
### Identity Provider
:::note
Consult the documentation for your specific IdP to complete the listed prerequisites.
:::
#### OIDC Client
In your IdP, create a new client with the settings below:

Setting | Value
------------|------------
`Client ID` | <CLIENT_ID> (e.g. `rancher`)
`Name` | <CLIENT_NAME> (e.g. `rancher`)
`Client Protocol` | `openid-connect`
`Access Type` | `confidential`
`Valid Redirect URI` | `https://yourRancherHostURL/verify-auth`

In the new OIDC client, create mappers to expose the user's fields.
1. Create a new `Groups Mapper` with the settings below:

   Setting | Value
   ------------|------------
   `Name` | `Groups Mapper`
   `Mapper Type` | `Group Membership`
   `Token Claim Name` | `groups`
   `Add to ID token` | `OFF`
   `Add to access token` | `OFF`
   `Add to user info` | `ON`

1. Create a new `Client Audience` with the settings below:

   Setting | Value
   ------------|------------
   `Name` | `Client Audience`
   `Mapper Type` | `Audience`
   `Included Client Audience` | <CLIENT_NAME>
   `Add to access token` | `ON`

1. Create a new `Groups Path` with the settings below.

   Setting | Value
   ------------|------------
   `Name` | `Group Path`
   `Mapper Type` | `Group Membership`
   `Token Claim Name` | `full_group_path`
   `Full group path` | `ON`
   `Add to user info` | `ON`

:::warning
Rancher uses the value received in the "sub" claim to form the PrincipalID, which is the unique identifier in Rancher. It is important to make this a value that is unique and immutable.
:::
## Configuring Generic OIDC in Rancher
@@ -80,7 +89,31 @@ Consult the documentation for your specific IdP to complete the listed prerequis
**Result:** Rancher is configured to work with your provider using the OIDC protocol. Your users can now sign into Rancher using their IdP logins.
## Configuration Reference
### Custom Claim Mapping
Custom claim mapping within the Generic OIDC configuration is supported for `name`, `email` and `groups` claims. This allows you to manually map these OIDC claims when your IdP doesn't use standard names in tokens.
#### How a Custom Groups Claim Works
A custom groups claim influences how user groups work:
- If both the standard OIDC `groups` claim and the custom groups claim are present in the user's token, the custom claim supplements the list of groups provided by the standard claim.
- If there is no standard groups claim in the token, the groups listed in the custom claim will form the user's only groups.
:::note
There is no search functionality available for groups sourced from a custom claim. To assign a role to one of these groups, you must manually enter the group's exact name into the RBAC field.
:::
#### Configuring Custom Claims
When on the **Configure an OIDC account** form:
1. Select **Add custom claims**.
1. Add your custom `name`, `email` or `groups` claims to the appropriate **Custom Claims** field.
For example, if your IdP sends `groups` in a claim called `custom_roles`, enter `custom_roles` into the **Custom Groups Claim** field. Rancher then supplements the standard OIDC `groups` claim or looks for that specific claim when processing the user's token.
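The two merge rules above can be sketched as follows. This is illustrative only: the claim values are invented, and the real merging happens inside Rancher when the token is processed.

```shell
# Claims from a (hypothetical) decoded token.
standard_groups="team-a team-b"     # standard OIDC `groups` claim
custom_groups="project-alpha-dev"   # custom claim, e.g. `custom_roles`

if [ -n "$standard_groups" ]; then
  # Standard claim present: the custom claim supplements it.
  effective="$standard_groups $custom_groups"
else
  # No standard claim: the custom claim forms the user's only groups.
  effective="$custom_groups"
fi
echo "$effective"   # prints: team-a team-b project-alpha-dev
```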
### Configuration Reference
| Field | Description |
| ------------------------- |----------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -91,6 +124,15 @@ Consult the documentation for your specific IdP to complete the listed prerequis
| Rancher URL | The URL for your Rancher Server. |
| Issuer | The URL of your IdP. If your provider has discovery enabled, Rancher uses the Issuer URL to fetch all of the required URLs. |
| Auth Endpoint | The URL where users are redirected to authenticate. |
#### Custom Claims
| Custom Claim Field | Default OIDC Claim | Custom Claim Description |
| ------------- | ------------------ | ------------------------ |
| Custom Name Claim | `name` | The name of the claim in the OIDC token that contains the user's full name or display name. |
| Custom Email Claim | `email` | The name of the claim in the OIDC token that contains the user's email address. |
| Custom Groups Claim | `groups` | The name of the claim in the OIDC token that contains the user's group memberships (used for RBAC). |
## Troubleshooting
If you are experiencing issues while testing the connection to the OIDC server, first double-check the configuration options of your OIDC client. You can also inspect the Rancher logs to help pinpoint what's causing issues. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging](../../../../faq/technical-items.md#how-can-i-enable-debug-logging) in this documentation.
@@ -108,3 +150,7 @@ If the `Issuer` and `Auth Endpoint` are generated incorrectly, open the **Config
### Error: "Invalid grant_type"
In some cases, the "Invalid grant_type" error message may be misleading and is actually caused by setting the `Valid Redirect URI` incorrectly.
## Configuring OIDC Single Logout (SLO)
<ConfigureSLOOidc />


@@ -0,0 +1,84 @@
---
title: Configure GitHub App
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-github-app"/>
</head>
In environments using GitHub, you can configure the new GitHub App authentication provider in Rancher, which allows users to authenticate against a GitHub Organization account using a dedicated [GitHub App](https://docs.github.com/en/apps/overview). This new provider runs alongside the existing standard GitHub authentication provider, offering increased security and better management of permissions based on GitHub Organization teams.
## Prerequisites
:::warning
The GitHub App authentication provider only works with [GitHub Organization accounts](https://docs.github.com/en/get-started/learning-about-github/types-of-github-accounts#organization-accounts). It does not function with individual [GitHub User accounts](https://docs.github.com/en/get-started/learning-about-github/types-of-github-accounts#user-accounts).
:::
Before configuring the provider in Rancher, you must first create a GitHub App for your organization, then generate a client secret and a private key for it. Refer to [Registering a GitHub App](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app) for details.
### Create GitHub App
1. Open your [GitHub organization settings](https://github.com/settings/organizations).
1. To the right of the organization, select **Settings**.
1. In the left sidebar, click **Developer settings** > **GitHub Apps**.
1. Click **New GitHub App**.
1. Fill in the GitHub App configuration form with these values:
- **GitHub App name**: Anything you like, e.g. `My Rancher`.
- **Application description**: Optional, can be left blank.
- **Homepage URL**: `https://localhost:8443`.
- **Callback URL**: `https://localhost:8443/verify-auth`.
1. Select **Create GitHub App**.
### Generate a Client Secret
Generate a [client secret](https://docs.github.com/en/rest/authentication/authenticating-to-the-rest-api#using-basic-authentication) on the settings page for your app.
1. Go to your GitHub App.
1. Next to **Client Secrets**, select **Generate a new client secret**.
### Generate a Private Key
Generate a [private key](https://docs.github.com/en/enterprise-server/apps/creating-github-apps/authenticating-with-a-github-app/managing-private-keys-for-github-apps#generating-private-keys) on the settings page for your app.
1. Go to your GitHub App.
1. Next to **Private Keys**, click **Generate a private key**.
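Before pasting the key into Rancher, you can optionally sanity-check the downloaded PEM file with OpenSSL. The filename below is a placeholder for whatever GitHub downloaded; this check is a convenience, not a required step:

```shell
# Path to the private key GitHub generated (hypothetical filename).
KEY_FILE="${KEY_FILE:-my-rancher.private-key.pem}"
if [ -f "$KEY_FILE" ]; then
  # Prints "RSA key ok" for a structurally valid RSA private key.
  openssl rsa -in "$KEY_FILE" -check -noout
else
  echo "key file not found: $KEY_FILE"
fi
```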
## GitHub App Auth Provider Configuration
To set up the GitHub App Auth Provider in Rancher, follow these steps:
1. Navigate to the **Users & Authentication** section in the Rancher UI.
1. Select **Auth Providers**.
1. Select the **GitHub App** tile.
1. Gather and enter the details of your GitHub App into the configuration form fields.
| Field Name | Description |
| ---------- | ----------- |
| **Client ID** (Required) | The client ID of your GitHub App. |
| **Client Secret** (Required) | The client secret of your GitHub App. |
| **GitHub App ID** (Required) | The numeric ID associated with your GitHub App. |
| **Installation ID** (Optional) | If you want to restrict authentication to a single installation of the App, provide its specific numeric Installation ID. |
| **Private Key** (Required) | The contents of the Private Key file (in PEM format) generated by GitHub for your App. |
:::note
A GitHub App can be installed across multiple Organizations, and each installation has a unique Installation ID. If you want to restrict authentication to a single App installation and GitHub Organization, provide the Installation ID during configuration. If you do not provide an Installation ID, the user's permissions are aggregated across all installations.
:::
1. Select **Enable**. Rancher attempts to validate the credentials and, upon success, activates the GitHub App provider.
After it is enabled, users logging in via the GitHub App provider are automatically identified and you can leverage your GitHub Organization's teams and users to configure Role-Based Access Control (RBAC) and to assign permissions to projects and clusters.
:::note
Ensure that the users and teams you intend to use for authorization exist within the GitHub organization managed by the App.
:::
- **Users**: Individual GitHub users who are members of the GitHub Organization where the App is installed can log in.
- **Groups**: GitHub Organization teams are mapped to Rancher Groups, allowing you to assign entire teams permissions within Rancher projects and clusters.


@@ -203,3 +203,7 @@ To resolve this, you can either:
3. Save your changes.
2. Reconfigure your Keycloak OIDC setup using a user that is assigned to at least one group in Keycloak.
## Configuring OIDC Single Logout (SLO)
<ConfigureSLOOidc />


@@ -120,6 +120,18 @@ For a breakdown of the port requirements for etcd nodes, controlplane nodes, and
Details on which ports are used in each situation are found under [Downstream Cluster Port Requirements](../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#downstream-kubernetes-cluster-nodes).
### IPv6 Address Requirements
Rancher supports clusters configured with IPv4-only, IPv6-only, or dual-stack networking.
You must provision each node with at least one valid IPv4 address, one IPv6 address, or both, according to the cluster networking configuration.
For IPv6-only environments, ensure you correctly configure the operating system and that the `/etc/hosts` file includes a valid localhost entry, for example:
```
::1 localhost
```
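A quick way to verify the entry is present (a sketch; point `HOSTS_FILE` at a staged copy to test a file other than the live one):

```shell
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"
# Look for a line beginning with ::1 followed by whitespace.
if grep -qE '^::1[[:space:]]' "$HOSTS_FILE"; then
  echo "IPv6 localhost entry present in $HOSTS_FILE"
else
  echo "missing ::1 entry in $HOSTS_FILE"
fi
```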
:::caution
You should never register a node with the same hostname or IP address as an existing node. Doing so causes RKE to prevent the node from joining, and provisioning to hang. This can occur for both node driver and custom clusters. If a node must reuse a hostname or IP of an existing node, you must set the `hostname_override` [RKE option](https://rke.docs.rancher.com/config-options/nodes#overriding-the-hostname) before registering the node, so that it can join correctly.
:::


@@ -299,7 +299,7 @@ rancher_kubernetes_engine_config:
useInstanceMetadataHostname: true
```
You must not enable `useInstanceMetadataHostname` when setting custom values for `hostname-override` for custom clusters. When you create a [custom cluster](../../../../reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/use-existing-nodes.md), add `--node-name` to the `docker run` node registration command to set `hostname-override` — for example, `"$(hostname -f)"`. This can be done manually or by using **Show Advanced Options** in the Rancher UI to add **Node Name**.
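As a hedged illustration, a registration command with the node name overridden might look like the sketch below. The actual command, image tag, and token come from the Rancher UI; only the trailing `--node-name` flag is the addition being described:

```shell
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<TAG> --server https://<RANCHER_URL> \
  --token <REGISTRATION_TOKEN> --worker \
  --node-name "$(hostname -f)"
```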
2. Select the cloud provider.


@@ -103,11 +103,11 @@ The `worker` nodes, which is where your workloads will be deployed on, will typi
We recommend the minimum three-node architecture listed in the table below, but you can always add more Linux and Windows workers to scale up your cluster for redundancy:
| Node | Operating System | Kubernetes Cluster Role(s) | Purpose |
|--------|----------------------------------------------------------------------------------------|-----------------------------|-------------------------------------------------------------------------------------|
| Node 1 | Linux (Ubuntu Server 18.04 recommended) | Control plane, etcd, worker | Manage the Kubernetes cluster |
| Node 2 | Linux (Ubuntu Server 18.04 recommended) | Worker | Support the Rancher Cluster agent, Metrics server, DNS, and Ingress for the cluster |
| Node 3 | Windows (Windows Server core version 1809 or above required, version 2022 recommended) | Worker | Run your Windows containers |
### Container Requirements
@@ -126,8 +126,6 @@ If you are using the GCE (Google Compute Engine) cloud provider, you must do the
This tutorial describes how to create a Rancher-provisioned cluster with the three nodes in the [recommended architecture.](#recommended-architecture)
When you provision a cluster with Rancher on existing nodes, you add nodes to the cluster by installing the [Rancher agent](../../../../reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/rancher-agent-options.md) on each one. To create or edit your cluster from the Rancher UI, run the **Registration Command** on each server to add it to your cluster.
To set up a cluster with support for Windows nodes and containers, you will need to complete the tasks below.
### 1. Provision Hosts
@@ -142,15 +140,15 @@ Your hosts can be:
You will provision three nodes:
- One Linux node, which manages the Kubernetes control plane, stores your `etcd`, and can optionally be a worker node
- A second Linux node, which will be another worker node
- The Windows node, which will run your Windows containers as a worker node
| Node | Operating System |
|--------|----------------------------------------------------------------------------------------|
| Node 1 | Linux (Ubuntu Server 18.04 recommended) |
| Node 2 | Linux (Ubuntu Server 18.04 recommended) |
| Node 3 | Windows (Windows Server core version 1809 or above required, version 2022 recommended) |
If your nodes are hosted by a **Cloud Provider** and you want automation support such as load balancers or persistent storage devices, your nodes have additional configuration requirements. For details, see [Selecting Cloud Providers.](../set-up-cloud-providers/set-up-cloud-providers.md)
@@ -164,11 +162,11 @@ The instructions for creating a Windows cluster on existing nodes are very simil
1. Enter a name for your cluster in the **Cluster Name** field.
1. In the **Kubernetes Version** dropdown menu, select a supported Kubernetes version.
1. In the **Container Network** field, select either **Calico** or **Flannel**.
1. Click **Create**.
### 3. Add Nodes to the Cluster
This section describes how to register your Linux and Windows nodes to your cluster. You will run a command on each node, which installs the Rancher System Agent and allows Rancher to manage each node.
#### Add Linux Master Node
@@ -177,23 +175,18 @@ In this section, we fill out a form on the Rancher UI to get a custom command to
The first node in your cluster should be a Linux host that has both the **Control Plane** and **etcd** roles. At a minimum, both of these roles must be enabled for this node, and this node must be added to your cluster before you can add Windows hosts.
1. After cluster creation, navigate to the **Registration** tab.
1. In **Step 1** under the **Node Role** section, select all three roles. Although you can choose only the **etcd** and **Control Plane** roles, we recommend selecting all three.
1. Optional: If you click **Show Advanced**, you can configure additional settings such as specifying the IP address(es), overriding the node hostname, or adding [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to the node.
1. In **Step 2**, under the **Registration** section, copy the command displayed on the screen to your clipboard.
1. SSH into your Linux host and run the command that you copied to your clipboard.
**Results:**
Your cluster is created and assigned a state of **Updating**. Rancher is standing up your cluster.
You can access your cluster after its state is updated to **Active**.
It may take a few minutes for the node to register and appear under the **Machines** tab.
**Active** clusters are assigned two Projects:
- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
#### Add Linux Worker Node
@@ -203,11 +196,13 @@ After the initial provisioning of your cluster, your cluster only has a single L
1. After cluster creation, navigate to the **Registration** tab.
1. In **Step 1** under the **Node Role** section, select **Worker**.
1. Optional: If you click **Show Advanced**, you can configure additional settings such as specifying the IP address(es), overriding the node hostname, or adding [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to the node.
1. In **Step 2**, under the **Registration** section, copy the command displayed on the screen to your clipboard.
1. SSH into your Linux host and run the command that you copied to your clipboard.
**Results:**
The **Worker** role is installed on your Linux host, and the node registers with Rancher. It may take a few minutes for the node to be registered in your cluster.
:::note
@@ -216,7 +211,7 @@ Taints on Linux Worker Nodes
For each Linux worker node added to the cluster, the following taint is added. This taint causes any workloads added to the Windows cluster to be automatically scheduled to the Windows worker node. If you want to schedule workloads specifically onto the Linux worker node, you must add tolerations to those workloads.
| Taint Key | Taint Value | Taint Effect |
|----------------|-------------|--------------|
| `cattle.io/os` | `linux` | `NoSchedule` |
:::
@@ -231,12 +226,16 @@ The registration command to add the Windows workers only appears after the clust
1. After cluster creation, navigate to the **Registration** tab.
1. In **Step 1** under the **Node Role** section, select **Worker**.
1. Optional: If you click **Show Advanced**, you can configure additional settings such as specifying the IP address(es), overriding the node hostname, or adding [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to the node.
1. In **Step 2**, under the **Registration** section, copy the command for Windows workers displayed on the screen to your clipboard.
1. Log in to your Windows host using your preferred tool, such as [Microsoft Remote Desktop](https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients). Run the command copied to your clipboard in the **Command Prompt (CMD)**.
1. Log in to your Windows host using your preferred tool, such as [Microsoft Remote Desktop](https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients). Run the command copied to your clipboard in the **PowerShell Console** as an Administrator.
1. Optional: Repeat these instructions if you want to add more Windows nodes to your cluster.
**Results:**
The **Worker** role is installed on your Windows host, and the node registers with Rancher. It may take a few minutes for the node to be registered in your cluster.
You now have a Windows Kubernetes cluster.
### Optional Next Steps


@@ -20,7 +20,8 @@ Then you will create an EC2 cluster in Rancher, and when configuring the new clu
- [Example IAM Policy](#example-iam-policy)
- [Example IAM Policy with PassRole](#example-iam-policy-with-passrole) (needed if you want to use [Kubernetes Cloud Provider](../../kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md) or want to pass an IAM Profile to an instance)
- [Example IAM Policy to allow encrypted EBS volumes](#example-iam-policy-to-allow-encrypted-ebs-volumes)
- **IAM Policy added as Permission** to the user. See [Amazon Documentation: Adding Permissions to a User (Console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) how to attach it to a user.
- **IPv4-only or IPv6-only or dual-stack subnet and/or VPC** where nodes can be provisioned and assigned IPv4 and/or IPv6 addresses. See [Amazon Documentation: IPv6 support for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html).
## Creating an EC2 Cluster


@@ -19,10 +19,7 @@ In order to deploy and run the adapter successfully, you need to ensure its vers
| Rancher Version | Adapter Version |
|-----------------|------------------|
| v2.13.0 | 108.0.0+up8.0.0 |
### 1. Gain Access to the Local Cluster


@@ -80,3 +80,18 @@ Use [Instance Metadata Service Version 2 (IMDSv2)](https://docs.aws.amazon.com/A
Add metadata using [tags](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) to categorize resources.
### IPv6 Address Count
Specify how many IPv6 addresses to assign to the instance's network interface.
### IPv6 Address Only
Enable this option if the instance should use IPv6 exclusively. IPv6-only VPCs or subnets require this. When enabled, the instance will have IPv6 as its sole address, and the IPv6 Address Count must be greater than zero.
### HTTP Protocol IPv6
Enable or disable IPv6 endpoints for the instance metadata service.
### Enable Primary IPv6
Enable this option to designate the first assigned IPv6 address as the primary address. This ensures a consistent, non-changing IPv6 address for the instance. It does not control whether IPv6 addresses are assigned.
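The constraints above can be sketched as a small validation helper (illustrative only; the function name and shape are assumptions, not part of the Rancher or AWS APIs):

```python
def validate_ipv6_options(ipv6_address_count: int, ipv6_address_only: bool) -> list:
    """Return a list of validation errors (empty when the combination is valid)."""
    errors = []
    if ipv6_address_count < 0:
        errors.append("IPv6 Address Count cannot be negative")
    # IPv6-only instances must be assigned at least one IPv6 address.
    if ipv6_address_only and ipv6_address_count == 0:
        errors.append("IPv6 Address Only requires an IPv6 Address Count greater than zero")
    return errors
```

For example, enabling **IPv6 Address Only** with an address count of zero yields an error, while a count of one or more passes.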


@@ -28,6 +28,8 @@ Enable the DigitalOcean agent for additional [monitoring](https://docs.digitaloc
Enable IPv6 for Droplets.
For more information, refer to the [DigitalOcean IPv6 documentation](https://docs.digitalocean.com/products/networking/ipv6).
### Private Networking
Enable private networking for Droplets.


@@ -71,7 +71,7 @@ Tags is a list of _network tags_, which can be used to associate preexisting Fir
### Labels
A comma separated list of custom labels to be attached to all VMs within a given machine pool. Unlike Tags, Labels do not influence networking behavior and only serve to organize cloud resources.
## Advanced Options


@@ -6,4 +6,4 @@ title: Machine Configuration
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration"/>
</head>
Machine configuration is the arrangement of resources assigned to a virtual machine. Please see the docs for [Amazon EC2](amazon-ec2.md), [DigitalOcean](digitalocean.md), [Google GCE](google-gce.md), and [Azure](azure.md) to learn more.


@@ -6,4 +6,6 @@ title: Node Template Configuration
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration"/>
</head>
<EOLRKE1Warning />
To learn about node template config, refer to [EC2 Node Template Configuration](amazon-ec2.md), [DigitalOcean Node Template Configuration](digitalocean.md), [Azure Node Template Configuration](azure.md), [vSphere Node Template Configuration](vsphere.md), and [Nutanix Node Template Configuration](nutanix.md).


@@ -63,7 +63,15 @@ Enable network policy enforcement on the cluster. A network policy defines the l
_Mutable: yes_
Choose whether to enable or disable inter-project communication.
#### Imported Clusters
For imported clusters, Project Network Isolation (PNI) requires Kubernetes Network Policy to be enabled on the cluster beforehand.
For clusters created by Rancher, Rancher enables Kubernetes Network Policy automatically.
1. In GKE, enable Network Policy at the cluster level. Refer to the [official GKE guide](https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy) for instructions.
1. After enabling Network Policy, import the cluster into Rancher and enable PNI for project-level isolation.
### Node Ipv4 CIDR Block


@@ -13,7 +13,7 @@ This section covers the configuration options that are available in Rancher for
You can configure the Kubernetes options one of two ways:
- [Rancher UI](#configuration-options-in-the-rancher-ui): Use the Rancher UI to select options that are commonly customized when setting up a Kubernetes cluster.
- [Cluster Config File](#cluster-config-file-reference): Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create a K3s config file. Using a config file lets you set any of the [options](https://rancher.com/docs/k3s/latest/en/installation/install-options/) available during a K3s installation.
## Editing Clusters in the Rancher UI
@@ -32,7 +32,7 @@ To edit your cluster,
### Editing Clusters in YAML
For a complete reference of configurable options for K3s clusters in YAML, see the [K3s documentation](https://docs.k3s.io/installation/configuration).
To edit your cluster with YAML:
@@ -48,7 +48,8 @@ This subsection covers generic machine pool configurations. For specific infrast
- [Azure](../downstream-cluster-configuration/machine-configuration/azure.md)
- [DigitalOcean](../downstream-cluster-configuration/machine-configuration/digitalocean.md)
- [Amazon EC2](../downstream-cluster-configuration/machine-configuration/amazon-ec2.md)
- [Google GCE](../downstream-cluster-configuration/machine-configuration/google-gce.md)
##### Pool Name
@@ -86,9 +87,9 @@ Add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-tolerat
#### Basics
##### Kubernetes Version
The version of Kubernetes installed on your cluster nodes.
For details on upgrading or rolling back Kubernetes, refer to [this guide](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
##### Pod Security Admission Configuration Template
@@ -108,7 +109,7 @@ Option to enable or disable [SELinux](https://rancher.com/docs/k3s/latest/en/adv
##### CoreDNS
By default, [CoreDNS](https://coredns.io/) is installed as the DNS provider. If CoreDNS is not installed, you must install an alternate DNS provider yourself. Refer to the [K3s documentation](https://rancher.com/docs/k3s/latest/en/networking/#coredns) for details.
##### Klipper Service LB
@@ -148,15 +149,49 @@ Option to choose whether to expose etcd metrics to the public or only within the
##### Cluster CIDR
IPv4/IPv6 network CIDRs to use for pod IPs (default: `10.42.0.0/16`).
Example values:
- IPv4-only: `10.42.0.0/16`
- IPv6-only: `2001:cafe:42::/56`
- Dual-stack: `10.42.0.0/16,2001:cafe:42::/56`
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [K3s documentation: Dual-stack (IPv4 + IPv6) Networking](https://docs.k3s.io/networking/basic-network-options#dual-stack-ipv4--ipv6-networking)
- [K3s documentation: Single-stack IPv6 Networking](https://docs.k3s.io/networking/basic-network-options#single-stack-ipv6-networking)
:::caution
You must configure the Cluster CIDR when you first create the cluster. You cannot change it on an existing cluster after it starts.
:::
##### Service CIDR
IPv4/IPv6 network CIDRs to use for service IPs (default: `10.43.0.0/16`).
Example values:
- IPv4-only: `10.43.0.0/16`
- IPv6-only: `2001:cafe:43::/112`
- Dual-stack: `10.43.0.0/16,2001:cafe:43::/112`
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [K3s documentation: Dual-stack (IPv4 + IPv6) Networking](https://docs.k3s.io/networking/basic-network-options#dual-stack-ipv4--ipv6-networking)
- [K3s documentation: Single-stack IPv6 Networking](https://docs.k3s.io/networking/basic-network-options#single-stack-ipv6-networking)
:::caution
You must configure the Service CIDR when you first create the cluster. You cannot change it on an existing cluster after it starts.
:::
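As a sanity check before cluster creation, each comma-separated CIDR in the Cluster CIDR and Service CIDR fields can be parsed; a dual-stack value must contain exactly one IPv4 and one IPv6 network. A minimal sketch using Python's standard `ipaddress` module (the helper name is illustrative, not a Rancher or K3s API):

```python
import ipaddress

def check_stack(cidrs: str) -> str:
    """Classify a cluster-cidr/service-cidr value as ipv4, ipv6, or dual."""
    # ip_network() raises ValueError for malformed CIDRs or nonzero host bits.
    networks = [ipaddress.ip_network(c.strip()) for c in cidrs.split(",")]
    versions = sorted(n.version for n in networks)
    if versions == [4]:
        return "ipv4"
    if versions == [6]:
        return "ipv6"
    # Dual-stack: exactly one IPv4 and one IPv6 network, in either order.
    if versions == [4, 6]:
        return "dual"
    raise ValueError(f"unsupported CIDR combination: {cidrs}")
```

For example, `check_stack("10.42.0.0/16,2001:cafe:42::/56")` returns `"dual"`, matching the dual-stack example value above.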
##### Cluster DNS
IPv4 cluster IP for the CoreDNS service. It should be within your Service CIDR range (default: `10.43.0.10`).
##### Cluster Domain
@@ -168,11 +203,11 @@ Option to change the range of ports that can be used for [NodePort services](htt
##### Truncate Hostnames
Option to truncate hostnames to 15 characters or fewer. You can only set this field during the initial creation of the cluster. You can't enable or disable the 15-character limit after cluster creation.
This setting only affects machine-provisioned clusters. Since custom clusters set hostnames during their own node creation process, which occurs outside of Rancher, this field doesn't restrict custom cluster hostname length.
Truncating hostnames in a cluster improves compatibility with Windows-based systems. Although Kubernetes allows hostnames up to 63 characters in length, systems that use NetBIOS restrict hostnames to 15 characters or fewer.
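The two limits can be illustrated as follows (a sketch of the behavior described above; Rancher's actual implementation may differ, for example in how it avoids name collisions after truncation):

```python
KUBERNETES_MAX = 63  # Kubernetes hostname (DNS label) limit
NETBIOS_MAX = 15     # NetBIOS hostname limit

def effective_hostname(hostname: str, truncate: bool) -> str:
    """Apply the 15-character NetBIOS truncation when the option is enabled."""
    limit = NETBIOS_MAX if truncate else KUBERNETES_MAX
    return hostname[:limit]
```

With truncation enabled, a machine-pool hostname like `very-long-machine-pool-node-0001` becomes `very-long-machi`.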
##### TLS Alternate Names
@@ -186,6 +221,33 @@ For more detail on how an authorized cluster endpoint works and why it is used,
We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the [recommended architecture section.](../../rancher-manager-architecture/architecture-recommendations.md#architecture-for-an-authorized-cluster-endpoint-ace)
##### Stack Preference
Choose the networking stack for the cluster. This option affects:
- The address used for health and readiness probes of components such as Calico, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, and kubelet.
- The server URL in the `authentication-token-webhook-config-file` for the Authorized Cluster Endpoint.
- The `advertise-client-urls` setting for etcd during snapshot restoration.
Options are `ipv4`, `ipv6`, `dual`:
- When set to `ipv4`, the cluster uses `127.0.0.1`
- When set to `ipv6`, the cluster uses `[::1]`
- When set to `dual`, the cluster uses `localhost`
The stack preference must match the cluster's networking configuration:
- Set to `ipv4` for IPv4-only clusters
- Set to `ipv6` for IPv6-only clusters
- Set to `dual` for dual-stack clusters
:::caution
Ensuring the loopback address configuration is correct is critical for successful cluster provisioning.
For more information, refer to the [Node Requirements](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md) page.
:::
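The loopback mapping above can be expressed as a small lookup (a sketch only; the probe URL format and `/healthz` path are illustrative assumptions, not Rancher source code):

```python
# Loopback address used for local component probes, per the documented
# Stack Preference values.
LOOPBACK_BY_STACK = {
    "ipv4": "127.0.0.1",
    "ipv6": "[::1]",
    "dual": "localhost",
}

def probe_url(stack_preference: str, port: int) -> str:
    """Build a local health-probe URL for the given stack preference."""
    host = LOOPBACK_BY_STACK[stack_preference]
    return f"https://{host}:{port}/healthz"
```

For an IPv6-only cluster, for instance, probes would target `https://[::1]:<port>/healthz`, which is why a mismatched stack preference can break provisioning.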
#### Registries
Select the image repository to pull Rancher images from. For more details and configuration options, see the [K3s documentation](https://rancher.com/docs/k3s/latest/en/installation/private-registry/).


@@ -32,7 +32,7 @@ To edit your cluster,
### Editing Clusters in YAML
For a complete reference of configurable options for RKE2 clusters in YAML, see the [RKE2 documentation](https://docs.rke2.io/install/configuration).
To edit your cluster in YAML:
@@ -48,7 +48,8 @@ This subsection covers generic machine pool configurations. For specific infrast
- [Azure](../downstream-cluster-configuration/machine-configuration/azure.md)
- [DigitalOcean](../downstream-cluster-configuration/machine-configuration/digitalocean.md)
- [Amazon EC2](../downstream-cluster-configuration/machine-configuration/amazon-ec2.md)
- [Google GCE](../downstream-cluster-configuration/machine-configuration/google-gce.md)
##### Pool Name
@@ -86,9 +87,9 @@ Add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-tolerat
#### Basics
##### Kubernetes Version
The version of Kubernetes installed on your cluster nodes.
For details on upgrading or rolling back Kubernetes, refer to [this guide](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
##### Container Network Provider
@@ -105,20 +106,19 @@ Out of the box, Rancher is compatible with the following network providers:
- [Canal](https://github.com/projectcalico/canal)
- [Cilium](https://cilium.io/)*
- [Calico](https://docs.projectcalico.org/v3.11/introduction/)
- [Flannel](https://github.com/flannel-io/flannel)
- [Multus](https://github.com/k8snetworkplumbingwg/multus-cni)
\* When using [project network isolation](#project-network-isolation) in the [Cilium CNI](../../../faq/container-network-interface-providers.md#cilium), it is possible to enable cross-node ingress routing. Click the [CNI provider docs](../../../faq/container-network-interface-providers.md#ingress-routing-across-nodes-in-cilium) to learn more.
For more details on the different networking providers and how to configure them, please view our [RKE2 documentation](https://docs.rke2.io/networking/basic_network_options).
###### Dual-stack Networking
[Dual-stack](https://docs.rke2.io/networking/basic_network_options#dual-stack-configuration) networking is supported for all CNI providers. To configure RKE2 in dual-stack mode, set valid IPv4/IPv6 CIDRs for your [Cluster CIDR](#cluster-cidr) and/or [Service CIDR](#service-cidr).
###### Dual-stack Additional Configuration
:::caution
When using `cilium` or `multus,cilium` as your container network interface provider, ensure the **Enable IPv6 Support** option is also enabled.
:::
##### Cloud Provider
You can configure a [Kubernetes cloud provider](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md). If you want to use dynamically provisioned [volumes and storage](../../../how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md) in Kubernetes, typically you must select the specific cloud provider in order to use it. For example, if you want to use Amazon EBS, you would need to select the `aws` cloud provider.
@@ -181,27 +181,62 @@ Option to choose whether to expose etcd metrics to the public or only within the
##### Cluster CIDR
IPv4 and/or IPv6 network CIDRs to use for pod IPs (default: `10.42.0.0/16`).
Example values:
- IPv4-only: `10.42.0.0/16`
- IPv6-only: `2001:cafe:42::/56`
- Dual-stack: `10.42.0.0/16,2001:cafe:42::/56`
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [RKE2 documentation: Dual-stack configuration](https://docs.rke2.io/networking/basic_network_options#dual-stack-configuration)
- [RKE2 documentation: IPv6-only setup](https://docs.rke2.io/networking/basic_network_options#ipv6-setup)
:::caution
You must configure the Cluster CIDR when you first create the cluster. You cannot change it on an existing cluster after it starts.
:::
:::caution
When using `cilium` or `multus,cilium` as your container network interface provider, ensure the **Enable IPv6 Support** option is also enabled.
:::
##### Service CIDR
IPv4/IPv6 network CIDRs to use for service IPs (default: `10.43.0.0/16`).
Example values:
- IPv4-only: `10.43.0.0/16`
- IPv6-only: `2001:cafe:43::/112`
- Dual-stack: `10.43.0.0/16,2001:cafe:43::/112`
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [RKE2 documentation: Dual-stack configuration](https://docs.rke2.io/networking/basic_network_options#dual-stack-configuration)
- [RKE2 documentation: IPv6-only setup](https://docs.rke2.io/networking/basic_network_options#ipv6-setup)
:::caution
You must configure the Service CIDR when you first create the cluster. You cannot change it on an existing cluster after it starts.
:::
:::caution
When using `cilium` or `multus,cilium` as your container network interface provider, ensure the **Enable IPv6 Support** option is also enabled.
:::
##### Cluster DNS
IPv4 cluster IP for the CoreDNS service. It should be within your Service CIDR range (default: `10.43.0.10`).
##### Cluster Domain
@@ -213,11 +248,11 @@ Option to change the range of ports that can be used for [NodePort services](htt
##### Truncate Hostnames
Option to truncate hostnames to 15 characters or fewer. You can only set this field during the initial creation of the cluster. You can't enable or disable the 15-character limit after cluster creation.
This setting only affects machine-provisioned clusters. Since custom clusters set hostnames during their own node creation process, which occurs outside of Rancher, this field doesn't restrict custom cluster hostname length.
Truncating hostnames in a cluster improves compatibility with Windows-based systems. Although Kubernetes allows hostnames up to 63 characters in length, systems that use NetBIOS restrict hostnames to 15 characters or fewer.
##### TLS Alternate Names
@@ -233,6 +268,33 @@ For more detail on how an authorized cluster endpoint works and why it is used,
We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the [recommended architecture section.](../../rancher-manager-architecture/architecture-recommendations.md#architecture-for-an-authorized-cluster-endpoint-ace)
##### Stack Preference
Choose the networking stack for the cluster. This option affects:
- The address used for health and readiness probes of components such as Calico, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, and kubelet.
- The server URL in the `authentication-token-webhook-config-file` for the Authorized Cluster Endpoint.
- The `advertise-client-urls` setting for etcd during snapshot restoration.
Options are `ipv4`, `ipv6`, `dual`:
- When set to `ipv4`, the cluster uses `127.0.0.1`
- When set to `ipv6`, the cluster uses `[::1]`
- When set to `dual`, the cluster uses `localhost`
The stack preference must match the cluster's networking configuration:
- Set to `ipv4` for IPv4-only clusters
- Set to `ipv6` for IPv6-only clusters
- Set to `dual` for dual-stack clusters
:::caution
Ensuring the loopback address configuration is correct is critical for successful cluster provisioning.
For more information, refer to the [Node Requirements](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md) page.
:::
#### Registries
Select the image repository to pull Rancher images from. For more details and configuration options, see the [RKE2 documentation](https://docs.rke2.io/install/private_registry).


@@ -1,57 +0,0 @@
---
title: Rancher Agent Options
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/rancher-agent-options"/>
</head>
Rancher deploys an agent on each node to communicate with the node. This page describes the options that can be passed to the agent. To use these options, you will need to [create a cluster with custom nodes](use-existing-nodes.md) and add the options to the generated `docker run` command when adding a node.
For an overview of how Rancher communicates with downstream clusters using node agents, refer to the [architecture section.](../../../rancher-manager-architecture/communicating-with-downstream-user-clusters.md#3-node-agents)
## General options
| Parameter | Environment variable | Description |
| ---------- | -------------------- | ----------- |
| `--server` | `CATTLE_SERVER` | The configured Rancher `server-url` setting which the agent connects to |
| `--token` | `CATTLE_TOKEN` | Token that is needed to register the node in Rancher |
| `--ca-checksum` | `CATTLE_CA_CHECKSUM` | The SHA256 checksum of the configured Rancher `cacerts` setting to validate |
| `--node-name` | `CATTLE_NODE_NAME` | Override the hostname that is used to register the node (defaults to `hostname -s`) |
| `--label` | `CATTLE_NODE_LABEL` | Add node labels to the node. For multiple labels, pass additional `--label` options. (`--label key=value`) |
| `--taints` | `CATTLE_NODE_TAINTS` | Add node taints to the node. For multiple taints, pass additional `--taints` options. (`--taints key=value:effect`) |
## Role options
| Parameter | Environment variable | Description |
| ---------- | -------------------- | ----------- |
| `--all-roles` | `ALL=true` | Apply all roles (`etcd`,`controlplane`,`worker`) to the node |
| `--etcd` | `ETCD=true` | Apply the role `etcd` to the node |
| `--controlplane` | `CONTROL=true` | Apply the role `controlplane` to the node |
| `--worker` | `WORKER=true` | Apply the role `worker` to the node |
## IP address options
| Parameter | Environment variable | Description |
| ---------- | -------------------- | ----------- |
| `--address` | `CATTLE_ADDRESS` | The IP address the node will be registered with (defaults to the IP used to reach `8.8.8.8`) |
| `--internal-address` | `CATTLE_INTERNAL_ADDRESS` | The IP address used for inter-host communication on a private network |
### Dynamic IP address options
For automation purposes, the registration command can't contain a node-specific IP address, because the same command must work on every node. Dynamic IP address options solve this: they are passed as the value of an existing IP address option. This is supported for `--address` and `--internal-address`.
| Value | Example | Description |
| ---------- | -------------------- | ----------- |
| Interface name | `--address eth0` | The first configured IP address will be retrieved from the given interface |
| `ipify` | `--address ipify` | Value retrieved from `https://api.ipify.org` will be used |
| `awslocal` | `--address awslocal` | Value retrieved from `http://169.254.169.254/latest/meta-data/local-ipv4` will be used |
| `awspublic` | `--address awspublic` | Value retrieved from `http://169.254.169.254/latest/meta-data/public-ipv4` will be used |
| `doprivate` | `--address doprivate` | Value retrieved from `http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address` will be used |
| `dopublic` | `--address dopublic` | Value retrieved from `http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/address` will be used |
| `azprivate` | `--address azprivate` | Value retrieved from `http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/privateIpAddress?api-version=2017-08-01&format=text` will be used |
| `azpublic` | `--address azpublic` | Value retrieved from `http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/publicIpAddress?api-version=2017-08-01&format=text` will be used |
| `gceinternal` | `--address gceinternal` | Value retrieved from `http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip` will be used |
| `gceexternal` | `--address gceexternal` | Value retrieved from `http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip` will be used |
| `packetlocal` | `--address packetlocal` | Value retrieved from `https://metadata.packet.net/2009-04-04/meta-data/local-ipv4` will be used |
| `packetpublic` | `--address packetpublic` | Value retrieved from `https://metadata.packet.net/2009-04-04/meta-data/public-ipv4` will be used |
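The general, role, and IP address options above combine into a single registration command. A hypothetical helper (not part of Rancher; the agent image tag is a placeholder) sketching how the flags are assembled:

```python
def agent_command(server, token, roles, address=None, labels=None):
    """Assemble a `docker run` node-registration command from agent options."""
    parts = [
        "docker run -d --privileged --restart=unless-stopped --net=host",
        "rancher/rancher-agent:<tag>",  # placeholder image tag
        f"--server {server}",
        f"--token {token}",
    ]
    parts += [f"--{role}" for role in roles]  # e.g. --etcd, --controlplane, --worker
    if address:
        # Static IP, interface name, or a dynamic value such as `awslocal`.
        parts.append(f"--address {address}")
    for key, value in (labels or {}).items():
        parts.append(f"--label {key}={value}")  # repeat --label for each label
    return " ".join(parts)
```

For example, `agent_command("https://rancher.example.com", "<token>", ["worker"], address="awslocal")` produces a worker-only registration command whose address is resolved from the EC2 metadata service at run time.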


@@ -9,7 +9,7 @@ description: To create a cluster with custom nodes, you'll need to access serv
When you create a custom cluster, Rancher can use RKE2/K3s to create a Kubernetes cluster in on-prem bare-metal servers, on-prem virtual machines, or in any node hosted by an infrastructure provider.
To use this option, you need access to the servers that will be part of your Kubernetes cluster. Provision each server according to the [requirements](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md). Then, run the command provided in the Rancher UI on each server to convert it into a Kubernetes node.
This section describes how to set up a custom cluster.
@@ -33,7 +33,15 @@ If you want to reuse a node from a previous custom cluster, [clean the node](../
Provision the host according to the [installation requirements](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md) and the [checklist for production-ready clusters.](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/checklist-for-production-ready-clusters.md)
If you're using Amazon EC2 as your host and want to use the [dual-stack](https://kubernetes.io/docs/concepts/services-networking/dual-stack/) feature, there are additional [requirements](https://rancher.com/docs/rke//latest/en/config-options/dual-stack#requirements) when provisioning the host.
:::note IPv6-only cluster
For an IPv6-only cluster, ensure that your operating system correctly configures the `/etc/hosts` file.
```
::1 localhost
```
:::
### 2. Create the Custom Cluster
@@ -41,39 +49,43 @@ If you're using Amazon EC2 as your host and want to use the [dual-stack](https:/
1. On the **Clusters** page, click **Create**.
1. Click **Custom**.
1. Enter a **Cluster Name**.
1. Use the **Cluster Configuration** section to set up the cluster. For more information, see [RKE2 Cluster Configuration Reference](../rke2-cluster-configuration.md) and [K3s Cluster Configuration Reference](../k3s-cluster-configuration.md).
:::note Windows nodes
To learn more about using Windows nodes as Kubernetes workers, see [Launching Kubernetes on Windows Clusters](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/use-windows-clusters.md).
:::
:::note Dual-stack on Amazon EC2
If you're using Amazon EC2 as your host and want to use the [dual-stack](https://kubernetes.io/docs/concepts/services-networking/dual-stack/) feature, there are additional [requirements](https://rancher.com/docs/rke//latest/en/config-options/dual-stack#requirements) when configuring RKE.
:::
1. Click **Create**.
**Result:** The UI redirects to the **Registration** page, where you can generate the registration command for your nodes.
1. From **Node Role**, select the roles you want a cluster node to fill. You must provision at least one node for each role: etcd, worker, and control plane. A custom cluster requires all three roles to finish provisioning. For more information on roles, see [Roles for Nodes in Kubernetes Clusters](../../../kubernetes-concepts.md#roles-for-nodes-in-kubernetes-clusters).
:::note Bare-Metal Server
If you plan to dedicate bare-metal servers to each role, you must provision a bare-metal server for each role (i.e., provision multiple bare-metal servers).
:::
1. **Optional**: Click **Show Advanced** to configure additional settings such as specifying the IP address(es), overriding the node hostname, or adding [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to the node.
:::note
The **Node Public IP** and **Node Private IP** fields can accept either a single address or a comma-separated list of addresses (for example: `10.0.0.5,2001:db8::1`).
:::
:::note IPv6-only or Dual-stack Cluster
In both IPv6-only and dual-stack clusters, specify the node's **IPv6 address** as the **Node Private IP**.
:::
1. Copy the command displayed on screen to your clipboard.
1. Log in to your Linux host using your preferred shell, such as PuTTy or a remote Terminal connection. Run the command copied to your clipboard.
:::note
Repeat steps 7-10 if you want to dedicate specific hosts to specific node roles.
:::
1. When you finish running the command(s) on your Linux host(s), click **Done**.
**Result:**
The cluster is created and transitions to the **Updating** state while Rancher initializes and provisions cluster components.
You can access your cluster after its state is updated to **Active**.
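A note on the registration command itself: it is generated by the UI and is specific to your installation. The following is only a hypothetical sketch (the URL, file name, and flags are placeholders) of the pattern of saving the installer script to a file for review, rather than piping it directly from `curl` to `sh`:

```shell
# Hypothetical sketch only: the real command comes from the Rancher UI and
# embeds your server URL and registration token.
INSTALLER=./system-agent-install.sh

# Download to a file and review it before executing (placeholder commands):
# curl -fL -o "$INSTALLER" "https://rancher.example.com/system-agent-install.sh"
# less "$INSTALLER"
# sudo sh "$INSTALLER" --server https://rancher.example.com --token <token> --worker

echo "review $INSTALLER before running it with elevated privileges"
```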
@@ -0,0 +1,122 @@
---
title: IPv4/IPv6 Dual-stack
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/dual-stack/"/>
</head>
Kubernetes supports IPv4-only, IPv6-only, and dual-stack networking configurations.
For more details, refer to the official [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/dual-stack/).
## Installing Rancher on IPv6-Only or Dual-Stack Clusters
Rancher can run on clusters using:
- IPv4-only
- IPv6-only
- Dual-stack (IPv4 + IPv6)
When you install Rancher on an **IPv6-only cluster**, it can communicate externally **only over IPv6**. This means it can provision:
- IPv6-only clusters
- Dual-stack clusters
_(IPv4-only downstream clusters are not possible in this case)_
When you install Rancher on a **dual-stack cluster**, it can communicate over both IPv4 and IPv6, and can therefore provision:
- IPv4-only clusters
- IPv6-only clusters
- Dual-stack clusters
For installation steps, see the guide: **[Installing and Upgrading Rancher](../getting-started/installation-and-upgrade/installation-and-upgrade.md)**.
### Requirement for the Rancher Server URL
When provisioning IPv6-only downstream clusters, the **Rancher Server URL must be reachable over IPv6** because downstream nodes connect back to the Rancher server using IPv6.
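One way to verify this from a prospective node is to query the Rancher server's `/ping` health endpoint (which returns `pong`) while forcing IPv6; `rancher.example.com` below is a placeholder for your Rancher Server URL:

```shell
# Placeholder URL; replace with your Rancher Server URL. curl -6 forces IPv6.
RANCHER_URL="${RANCHER_URL:-https://rancher.example.com}"
if curl -6 -fsk --max-time 10 "$RANCHER_URL/ping"; then
  echo "reachable over IPv6"
else
  echo "not reachable over IPv6 from this host" >&2
fi
```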
## Provisioning IPv6-Only or Dual-Stack Clusters
You can provision RKE2 and K3s clusters on **node driver** machine pools or as **custom clusters** on existing hosts, using IPv4-only, IPv6-only, or dual-stack networking.
### Network Configuration
To enable IPv6-only or dual-stack networking, you must configure:
- Cluster CIDR
- Service CIDR
- Stack Preference
Configuration references:
- [K3s Cluster Configuration Reference](cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md)
- [RKE2 Cluster Configuration Reference](cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md)
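For instance, a dual-stack RKE2 or K3s cluster is typically configured with paired IPv4/IPv6 CIDRs like the following (these example ranges follow the upstream RKE2/K3s dual-stack documentation; they are illustrative, not required values):

```yaml
# Example dual-stack CIDRs: an IPv4 range followed by an IPv6 range.
cluster-cidr: 10.42.0.0/16,2001:cafe:42::/56
service-cidr: 10.43.0.0/16,2001:cafe:43::/112
```

For IPv6-only networking, each value would instead contain a single IPv6 CIDR.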
### Support for Windows
Kubernetes on Windows:
| Feature | Support Status |
|---------------------|-------------------------------|
| IPv6-only clusters | Not supported |
| Dual-stack clusters | Supported |
| Services | Limited to a single IP family |
For more information, see the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/services-networking/dual-stack/#windows-support).
K3s does **not** support Windows ([FAQ](https://docs.k3s.io/faq#does-k3s-support-windows)).
RKE2 supports Windows, but requires using either `Calico` or `Flannel` as the CNI.
Note that Windows installations of RKE2 do not support dual-stack clusters using BGP.
For more details, see [RKE2 Network Options](https://docs.rke2.io/networking/basic_network_options).
### Provisioning Node Driver Clusters
Rancher currently supports assigning IPv6 addresses in **node driver** clusters with:
- [Amazon EC2](../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md)
- [DigitalOcean](../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-a-digitalocean-cluster.md)
Support for additional providers will be introduced in future releases.
:::note DigitalOcean Limitation
Creating an **IPv6-only cluster** using the DigitalOcean node driver is currently **not supported**.
For more details, please see [rancher/rancher#52523](https://github.com/rancher/rancher/issues/52523#issuecomment-3457803572).
:::
#### Infrastructure Requirements
Cluster nodes must meet the requirements listed in the [Node Requirements for Rancher Managed Clusters](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md).
Machine pool configuration guides:
- [Amazon EC2 Configuration](cluster-configuration/downstream-cluster-configuration/machine-configuration/amazon-ec2.md)
- [DigitalOcean Configuration](cluster-configuration/downstream-cluster-configuration/machine-configuration/digitalocean.md)
### Provisioning Custom Clusters
To provision on your own nodes, follow the instructions in [Provision Kubernetes on Existing Nodes](cluster-configuration/rancher-server-configuration/use-existing-nodes/use-existing-nodes.md).
:::note
- The **Node Public IP** and **Node Private IP** fields accept IPv4, IPv6, or both (comma-separated), for example: `10.0.0.5,2001:db8::1`.
- In **IPv6-only** and **dual-stack** clusters, specify the node's **IPv6 address** as the **Private IP**.
:::
#### Infrastructure Requirements
Infrastructure requirements are the same as above for node-driver clusters.
## Other Limitations
### GitHub.com
GitHub.com does **not** support IPv6. As a result:
- Application repositories (`ClusterRepo.catalog.cattle.io/v1` CRs) hosted on GitHub.com are **not reachable** from IPv6-only clusters.
- Similarly, non-builtin node drivers hosted on GitHub.com are **not accessible** in IPv6-only environments.
@@ -20,10 +20,7 @@ Each Rancher version is designed to be compatible with a single version of the w
| Rancher Version | Webhook Version | Availability in Prime | Availability in Community |
|-----------------|-----------------|-----------------------|---------------------------|
| v2.13.0         | v0.9.0          | &cross;               | &check;                   |
| v2.12.3         | v0.8.3          | &check;               | &check;                   |
| v2.12.2         | v0.8.2          | &check;               | &check;                   |
| v2.12.1         | v0.8.1          | &check;               | &check;                   |
| v2.12.0         | v0.8.0          | &cross;               | &check;                   |
## Why Do We Need It?

View File

@@ -209,6 +209,7 @@
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-freeipa",
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-azure-ad",
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-github",
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-github-app",
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-keycloak-oidc",
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-keycloak-saml",
"how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-pingidentity",
@@ -738,7 +739,7 @@
"how-to-guides/advanced-user-guides/enable-experimental-features/unsupported-storage-drivers",
"how-to-guides/advanced-user-guides/enable-experimental-features/istio-traffic-management-features",
"how-to-guides/advanced-user-guides/enable-experimental-features/continuous-delivery",
"how-to-guides/advanced-user-guides/enable-experimental-features/role-template-aggregation"
]
},
"how-to-guides/advanced-user-guides/open-ports-with-firewalld",
@@ -849,9 +850,7 @@
"type": "doc",
"id": "reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/use-existing-nodes"
},
"items": []
},
"reference-guides/cluster-configuration/rancher-server-configuration/sync-clusters"
]
@@ -979,6 +978,7 @@
"reference-guides/rancher-cluster-tools",
"reference-guides/rancher-project-tools",
"reference-guides/system-tools",
"reference-guides/dual-stack",
"reference-guides/rke1-template-example-yaml",
"reference-guides/rancher-webhook",
{
@@ -1249,7 +1249,8 @@
"items": [
"api/workflows/projects",
"api/workflows/kubeconfigs",
"api/workflows/tokens",
"api/workflows/users"
]
},
"api/api-reference",