Merge release v2.13.0 to main (#2091)

* Sync main to v2.13.0 (#2065)

* It's bad form to ask users to pass something they just curled from the internet directly to sh

Updated the instructions for uninstalling the rancher-system-agent to use a temporary script file instead of piping directly to sh.

* doc(rancher-security): improve structure and content to latest, v2.13-preview and v2.12 (#2024)

- add Rancher Kubernetes Distributions (K3s/RKE2) Self-Assessment and Hardening Guide section
- add kubernetes cluster security best practices link to rancher-security section
- add k3s-selinux and update selinux-rpm details
- remove rhel/centos 7 support

Signed-off-by: Andy Pitcher <andy.pitcher@suse.com>

* Updating across supported versions and translations.

Signed-off-by: Sunil Singh <sunil.singh@suse.com>

---------

Signed-off-by: Andy Pitcher <andy.pitcher@suse.com>
Signed-off-by: Sunil Singh <sunil.singh@suse.com>
Co-authored-by: Tejeev <tj@rancher.com>
Co-authored-by: Andy Pitcher <andy.pitcher@suse.com>
Co-authored-by: Sunil Singh <sunil.singh@suse.com>

* Update roletemplate aggregation doc and version information

* Add versioned docs

* Remove ext token and kubeconfig feature flag sections and document bearer Token

* Update corresponding v2.13 pages

* update doc for pni in gke

* Adding reverted session idle information from PR 1653

Signed-off-by: Sunil Singh <sunil.singh@suse.com>

* [2.13.0] Add versions table entry

* [2.13.0] Add webhook version

* [2.13.0] Add CSP Adapter version

* [2.13.0] Add deprecated feature table entry

* [2.13.0] Update CNI popularity stats

* Update GKE Cluster Configuration for Project Network Isolation instructions

* Fix link and port to 2.13

* [2.13.0] Add Swagger JSON

* [v2.13.0] Add info about Azure AD Roles claims (#2079)

* Add info about Azure AD roles claims compatibility

* Apply suggestions from code review

Co-authored-by: Sunil Singh <sunil.singh@suse.com>

* Add suggestions to v2.13

---------

Co-authored-by: Sunil Singh <sunil.singh@suse.com>

* [2.13.0] Remove preview designation

* user public api docs (#2069)

* user public api docs

* Apply suggestions from code review

Co-authored-by: Andreas Kupries <akupries@suse.com>

* Apply suggestions from code review

Co-authored-by: Peter Matseykanets <pmatseykanets@gmail.com>

* explain plaintext is never stored

* add users 2.13 versioned docs

* remove extra ```

* Apply suggestions from code review

Co-authored-by: Lucas Saintarbor <lucas.saintarbor@suse.com>

* add space before code block

---------

Co-authored-by: Andreas Kupries <akupries@suse.com>
Co-authored-by: Peter Matseykanets <pmatseykanets@gmail.com>
Co-authored-by: Lucas Saintarbor <lucas.saintarbor@suse.com>

* support IPv6 (#2041)

* [v2.13.0] Add Configure GitHub App page (#2081)

* Add Configure GitHub App page

* Apply suggestions from code review

Co-authored-by: Billy Tat <btat@suse.com>

* Fix header/GH URL & add suggestions to v2.13

* Apply suggestions from code review

Co-authored-by: Petr Kovar <pknbe@volny.cz>

* Apply suggestions from code review to v2.13

* Add note describing why to use Installation ID

* Apply suggestions from code review

Co-authored-by: Billy Tat <btat@suse.com>

---------

Co-authored-by: Billy Tat <btat@suse.com>
Co-authored-by: Petr Kovar <pknbe@volny.cz>

* [v2.13.0] Add info about Generic OIDC Custom Mapping (#2080)

* Add info about Generic OIDC Custom Mapping

* Apply suggestions from code review

Co-authored-by: Sunil Singh <sunil.singh@suse.com>
Co-authored-by: Billy Tat <btat@suse.com>

* Apply suggestions from code review

Co-authored-by: Sunil Singh <sunil.singh@suse.com>
Co-authored-by: Billy Tat <btat@suse.com>

* Add suggestions to v2.13

* Remove repetitive statement in intro

* Move Prereq intro/note to appropriate section

* Fix formatting, UI typo, add Custom Claims section under Configuration Reference section

* Add section about how a custom groups claim works / note about search limitations for groups in RBAC

---------

Co-authored-by: Sunil Singh <sunil.singh@suse.com>
Co-authored-by: Billy Tat <btat@suse.com>

* [v2.13.0] Add info about OIDC SLO support (#2086)

* Add shared file covering OIDC SLO support to OIDC auth pages

* Add How to get the End Session Endpoint steps

* Add generic curl example to retrieve end_session_endpoint
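The `end_session_endpoint` referenced above is published in the provider's OIDC discovery document (`/.well-known/openid-configuration`). As an illustrative sketch only (the issuer and endpoint URLs below are placeholders, not a real provider):

```python
import json

# Hypothetical discovery document, as returned by
# GET https://<issuer>/.well-known/openid-configuration
discovery = json.loads("""
{
  "issuer": "https://idp.example.com",
  "authorization_endpoint": "https://idp.example.com/authorize",
  "token_endpoint": "https://idp.example.com/token",
  "end_session_endpoint": "https://idp.example.com/logout"
}
""")

# The logout URL used for OIDC Single Logout is the
# end_session_endpoint claim of the discovery document.
print(discovery["end_session_endpoint"])
```

The same field can be extracted on the command line by piping a `curl` of the discovery URL through a JSON filter.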

* [2.13.0] Bump release date

---------

Signed-off-by: Andy Pitcher <andy.pitcher@suse.com>
Signed-off-by: Sunil Singh <sunil.singh@suse.com>
Co-authored-by: Lucas Saintarbor <lucas.saintarbor@suse.com>
Co-authored-by: Tejeev <tj@rancher.com>
Co-authored-by: Andy Pitcher <andy.pitcher@suse.com>
Co-authored-by: Sunil Singh <sunil.singh@suse.com>
Co-authored-by: Jonathan Crowther <jonathan.crowther@suse.com>
Co-authored-by: Peter Matseykanets <peter.matseykanets@suse.com>
Co-authored-by: Petr Kovar <petr.kovar@suse.com>
Co-authored-by: Krunal Hingu <krunal.hingu222@gmail.com>
Co-authored-by: Raul Cabello Martin <raul.cabello@suse.com>
Co-authored-by: Andreas Kupries <akupries@suse.com>
Co-authored-by: Peter Matseykanets <pmatseykanets@gmail.com>
Co-authored-by: Jack Luo <jiaqi.luo@suse.com>
Co-authored-by: Petr Kovar <pknbe@volny.cz>
Commit 24fc5a657c (parent 94197793cb), authored by Billy Tat on 2025-11-25 10:51:39 -08:00, committed by GitHub.
87 changed files with 11352 additions and 564 deletions
@@ -63,7 +63,15 @@ Enable network policy enforcement on the cluster. A network policy defines the l
_Mutable: yes_
choose whether to enable or disable inter-project communication. Note that enabling Project Network Isolation will automatically enable Network Policy and Network Policy Config, but not vice versa.
Choose whether to enable or disable inter-project communication.
#### Imported Clusters
For imported clusters, Project Network Isolation (PNI) requires Kubernetes Network Policy to be enabled on the cluster beforehand.
For clusters created by Rancher, Rancher enables Kubernetes Network Policy automatically.
1. In GKE, enable Network Policy at the cluster level. Refer to the [official GKE guide](https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy) for instructions.
1. After enabling Network Policy, import the cluster into Rancher and enable PNI for project-level isolation.
### Node Ipv4 CIDR Block
@@ -13,7 +13,7 @@ This section covers the configuration options that are available in Rancher for
You can configure the Kubernetes options one of two ways:
- [Rancher UI](#configuration-options-in-the-rancher-ui): Use the Rancher UI to select options that are commonly customized when setting up a Kubernetes cluster.
- [Cluster Config File](#cluster-config-file-reference): Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create a K3s config file. Using a config file allows you to set any of the [options](https://rancher.com/docs/k3s/latest/en/installation/install-options/) available in an K3s installation.
- [Cluster Config File](#cluster-config-file-reference): Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create a K3s config file. Using a config file lets you set any of the [options](https://rancher.com/docs/k3s/latest/en/installation/install-options/) available during a K3s installation.
## Editing Clusters in the Rancher UI
@@ -32,7 +32,7 @@ To edit your cluster,
### Editing Clusters in YAML
For a complete reference of configurable options for K3s clusters in YAML, see the [K3s documentation.](https://rancher.com/docs/k3s/latest/en/installation/install-options/)
For a complete reference of configurable options for K3s clusters in YAML, see the [K3s documentation](https://docs.k3s.io/installation/configuration).
To edit your cluster with YAML:
@@ -48,7 +48,8 @@ This subsection covers generic machine pool configurations. For specific infrast
- [Azure](../downstream-cluster-configuration/machine-configuration/azure.md)
- [DigitalOcean](../downstream-cluster-configuration/machine-configuration/digitalocean.md)
- [EC2](../downstream-cluster-configuration/machine-configuration/amazon-ec2.md)
- [Amazon EC2](../downstream-cluster-configuration/machine-configuration/amazon-ec2.md)
- [Google GCE](../downstream-cluster-configuration/machine-configuration/google-gce.md)
##### Pool Name
@@ -86,9 +87,9 @@ Add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-tolerat
#### Basics
##### Kubernetes Version
The version of Kubernetes installed on your cluster nodes. Rancher packages its own version of Kubernetes based on [hyperkube](https://github.com/rancher/hyperkube).
The version of Kubernetes installed on your cluster nodes.
For more detail, see [Upgrading Kubernetes](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
For details on upgrading or rolling back Kubernetes, refer to [this guide](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
##### Pod Security Admission Configuration Template
@@ -108,7 +109,7 @@ Option to enable or disable [SELinux](https://rancher.com/docs/k3s/latest/en/adv
##### CoreDNS
By default, [CoreDNS](https://coredns.io/) is installed as the default DNS provider. If CoreDNS is not installed, an alternate DNS provider must be installed yourself. Refer to the [K3s documentation](https://rancher.com/docs/k3s/latest/en/networking/#coredns) for details..
By default, [CoreDNS](https://coredns.io/) is installed as the default DNS provider. If CoreDNS is not installed, you must install an alternate DNS provider yourself. Refer to the [K3s documentation](https://rancher.com/docs/k3s/latest/en/networking/#coredns) for details.
##### Klipper Service LB
@@ -148,15 +149,49 @@ Option to choose whether to expose etcd metrics to the public or only within the
##### Cluster CIDR
IPv4/IPv6 network CIDRs to use for pod IPs (default: 10.42.0.0/16).
IPv4/IPv6 network CIDRs to use for pod IPs (default: `10.42.0.0/16`).
Example values:
- IPv4-only: `10.42.0.0/16`
- IPv6-only: `2001:cafe:42::/56`
- Dual-stack: `10.42.0.0/16,2001:cafe:42::/56`
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [K3s documentation: Dual-stack (IPv4 + IPv6) Networking](https://docs.k3s.io/networking/basic-network-options#dual-stack-ipv4--ipv6-networking)
- [K3s documentation: Single-stack IPv6 Networking](https://docs.k3s.io/networking/basic-network-options#single-stack-ipv6-networking)
:::caution
You must configure the Cluster CIDR when you first create the cluster. You cannot change it on an existing cluster after it starts.
:::
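The example values above can be sanity-checked with Python's standard `ipaddress` module. This is an illustrative sketch, not part of K3s itself; K3s accepts the same comma-separated form via its `--cluster-cidr` and `--service-cidr` flags:

```python
import ipaddress

def parse_cidrs(value: str):
    """Split a comma-separated CIDR list, as the cluster/service CIDR fields expect."""
    return [ipaddress.ip_network(c.strip()) for c in value.split(",")]

# Dual-stack: exactly one IPv4 and one IPv6 network.
dual = parse_cidrs("10.42.0.0/16,2001:cafe:42::/56")
assert {n.version for n in dual} == {4, 6}

# IPv6-only: a single IPv6 network.
ipv6_only = parse_cidrs("2001:cafe:42::/56")
assert all(n.version == 6 for n in ipv6_only)
```

The same check applies to the Service CIDR examples (`10.43.0.0/16,2001:cafe:43::/112`).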
##### Service CIDR
IPv4/IPv6 network CIDRs to use for service IPs (default: 10.43.0.0/16).
IPv4/IPv6 network CIDRs to use for service IPs (default: `10.43.0.0/16`).
Example values:
- IPv4-only: `10.43.0.0/16`
- IPv6-only: `2001:cafe:43::/112`
- Dual-stack: `10.43.0.0/16,2001:cafe:43::/112`
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [K3s documentation: Dual-stack (IPv4 + IPv6) Networking](https://docs.k3s.io/networking/basic-network-options#dual-stack-ipv4--ipv6-networking)
- [K3s documentation: Single-stack IPv6 Networking](https://docs.k3s.io/networking/basic-network-options#single-stack-ipv6-networking)
:::caution
You must configure the Service CIDR when you first create the cluster. You cannot change it on an existing cluster after it starts.
:::
##### Cluster DNS
IPv4 Cluster IP for coredns service. Should be in your service-cidr range (default: 10.43.0.10).
IPv4 Cluster IP for coredns service. Should be in your service-cidr range (default: `10.43.0.10`).
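The constraint that the Cluster DNS IP must fall inside the Service CIDR can be verified directly; a sketch using the default values quoted above:

```python
import ipaddress

service_cidr = ipaddress.ip_network("10.43.0.0/16")
cluster_dns = ipaddress.ip_address("10.43.0.10")

# The CoreDNS service IP must sit inside the Service CIDR range.
assert cluster_dns in service_cidr
```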
##### Cluster Domain
@@ -168,11 +203,11 @@ Option to change the range of ports that can be used for [NodePort services](htt
##### Truncate Hostnames
Option to truncate hostnames to 15 characters or less. You can only set this field during the initial creation of the cluster. You can't enable or disable the 15 character limit after cluster creation.
Option to truncate hostnames to 15 characters or fewer. You can only set this field during the initial creation of the cluster. You can't enable or disable the 15-character limit after cluster creation.
This setting only affects machine-provisioned clusters. Since custom clusters set hostnames during their own node creation process, which occurs outside of Rancher, this field doesn't restrict custom cluster hostname length.
Truncating hostnames in a cluster improves compatibility with Windows-based systems. Although Kubernetes allows hostnames up to 63 characters in length, systems that use NetBIOS restrict hostnames to 15 characters or less.
Truncating hostnames in a cluster improves compatibility with Windows-based systems. Although Kubernetes allows hostnames up to 63 characters in length, systems that use NetBIOS restrict hostnames to 15 characters or fewer.
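A minimal illustration of the 15-character NetBIOS limit this option works around (the exact truncation scheme Rancher applies is not shown here; this only demonstrates the length constraint):

```python
NETBIOS_MAX = 15  # NetBIOS restricts names to 15 characters

def truncate_hostname(name: str) -> str:
    # Illustrative only: clip a Kubernetes hostname to the NetBIOS limit.
    return name[:NETBIOS_MAX]

# 30 characters: fine for Kubernetes (limit 63), too long for NetBIOS.
full = "worker-pool-a-7f9c4d6b8e-xk2lq"
short = truncate_hostname(full)
assert len(short) <= NETBIOS_MAX
```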
##### TLS Alternate Names
@@ -186,6 +221,33 @@ For more detail on how an authorized cluster endpoint works and why it is used,
We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the [recommended architecture section.](../../rancher-manager-architecture/architecture-recommendations.md#architecture-for-an-authorized-cluster-endpoint-ace)
##### Stack Preference
Choose the networking stack for the cluster. This option affects:
- The address used for health and readiness probes of components such as Calico, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, and kubelet.
- The server URL in the `authentication-token-webhook-config-file` for the Authorized Cluster Endpoint.
- The `advertise-client-urls` setting for etcd during snapshot restoration.
Options are `ipv4`, `ipv6`, `dual`:
- When set to `ipv4`, the cluster uses `127.0.0.1`
- When set to `ipv6`, the cluster uses `[::1]`
- When set to `dual`, the cluster uses `localhost`
The stack preference must match the cluster's networking configuration:
- Set to `ipv4` for IPv4-only clusters
- Set to `ipv6` for IPv6-only clusters
- Set to `dual` for dual-stack clusters
:::caution
Ensuring the loopback address configuration is correct is critical for successful cluster provisioning.
For more information, refer to the [Node Requirements](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md) page.
:::
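The loopback mapping described above can be summarized in a small lookup; a sketch (the function and table names are illustrative, not a Rancher API):

```python
# Loopback address used for local probes and URLs, per stack preference,
# as described in the Stack Preference section above.
LOOPBACK_BY_STACK = {
    "ipv4": "127.0.0.1",
    "ipv6": "[::1]",
    "dual": "localhost",
}

def probe_host(stack_preference: str) -> str:
    """Return the loopback address a given stack preference implies."""
    return LOOPBACK_BY_STACK[stack_preference]

assert probe_host("ipv6") == "[::1]"
```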
#### Registries
Select the image repository to pull Rancher images from. For more details and configuration options, see the [K3s documentation](https://rancher.com/docs/k3s/latest/en/installation/private-registry/).
@@ -32,7 +32,7 @@ To edit your cluster,
### Editing Clusters in YAML
For a complete reference of configurable options for K3s clusters in YAML, see the [K3s documentation.](https://rancher.com/docs/k3s/latest/en/installation/install-options/)
For a complete reference of configurable options for RKE2 clusters in YAML, see the [RKE2 documentation](https://docs.rke2.io/install/configuration).
To edit your cluster in YAML:
@@ -48,7 +48,8 @@ This subsection covers generic machine pool configurations. For specific infrast
- [Azure](../downstream-cluster-configuration/machine-configuration/azure.md)
- [DigitalOcean](../downstream-cluster-configuration/machine-configuration/digitalocean.md)
- [EC2](../downstream-cluster-configuration/machine-configuration/amazon-ec2.md)
- [Amazon EC2](../downstream-cluster-configuration/machine-configuration/amazon-ec2.md)
- [Google GCE](../downstream-cluster-configuration/machine-configuration/google-gce.md)
##### Pool Name
@@ -86,9 +87,9 @@ Add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-tolerat
#### Basics
##### Kubernetes Version
The version of Kubernetes installed on your cluster nodes. Rancher packages its own version of Kubernetes based on [hyperkube](https://github.com/rancher/hyperkube).
The version of Kubernetes installed on your cluster nodes.
For more detail, see [Upgrading Kubernetes](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
For details on upgrading or rolling back Kubernetes, refer to [this guide](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
##### Container Network Provider
@@ -105,20 +106,19 @@ Out of the box, Rancher is compatible with the following network providers:
- [Canal](https://github.com/projectcalico/canal)
- [Cilium](https://cilium.io/)*
- [Calico](https://docs.projectcalico.org/v3.11/introduction/)
- [Flannel](https://github.com/flannel-io/flannel)
- [Multus](https://github.com/k8snetworkplumbingwg/multus-cni)
\* When using [project network isolation](#project-network-isolation) in the [Cilium CNI](../../../faq/container-network-interface-providers.md#cilium), it is possible to enable cross-node ingress routing. Click the [CNI provider docs](../../../faq/container-network-interface-providers.md#ingress-routing-across-nodes-in-cilium) to learn more.
For more details on the different networking providers and how to configure them, please view our [RKE2 documentation](https://docs.rke2.io/install/network_options).
For more details on the different networking providers and how to configure them, please view our [RKE2 documentation](https://docs.rke2.io/networking/basic_network_options).
###### Dual-stack Networking
[Dual-stack](https://docs.rke2.io/install/network_options#dual-stack-configuration) networking is supported for all CNI providers. To configure RKE2 in dual-stack mode, set valid IPv4/IPv6 CIDRs for your [Cluster CIDR](#cluster-cidr) and/or [Service CIDR](#service-cidr).
###### Dual-stack Additional Configuration
:::caution
When using `cilium` or `multus,cilium` as your container network interface provider, ensure the **Enable IPv6 Support** option is also enabled.
:::
##### Cloud Provider
You can configure a [Kubernetes cloud provider](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md). If you want to use dynamically provisioned [volumes and storage](../../../how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md) in Kubernetes, typically you must select the specific cloud provider in order to use it. For example, if you want to use Amazon EBS, you would need to select the `aws` cloud provider.
@@ -181,27 +181,62 @@ Option to choose whether to expose etcd metrics to the public or only within the
##### Cluster CIDR
IPv4 and/or IPv6 network CIDRs to use for pod IPs (default: 10.42.0.0/16).
IPv4 and/or IPv6 network CIDRs to use for pod IPs (default: `10.42.0.0/16`).
###### Dual-stack Networking
Example values:
To configure [dual-stack](https://docs.rke2.io/install/network_options#dual-stack-configuration) mode, enter a valid IPv4/IPv6 CIDR. For example `10.42.0.0/16,2001:cafe:42:0::/56`.
- IPv4-only: `10.42.0.0/16`
- IPv6-only: `2001:cafe:42::/56`
- Dual-stack: `10.42.0.0/16,2001:cafe:42::/56`
[Additional configuration](#dual-stack-additional-configuration) is required when using `cilium` or `multus,cilium` as your [container network](#container-network-provider) interface provider.
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [RKE2 documentation: Dual-stack configuration](https://docs.rke2.io/networking/basic_network_options#dual-stack-configuration)
- [RKE2 documentation: IPv6-only setup](https://docs.rke2.io/networking/basic_network_options#ipv6-setup)
:::caution
You must configure the Cluster CIDR when you first create the cluster. You cannot change it on an existing cluster after it starts.
:::
:::caution
When using `cilium` or `multus,cilium` as your container network interface provider, ensure the **Enable IPv6 Support** option is also enabled.
:::
##### Service CIDR
IPv4/IPv6 network CIDRs to use for service IPs (default: 10.43.0.0/16).
IPv4/IPv6 network CIDRs to use for service IPs (default: `10.43.0.0/16`).
###### Dual-stack Networking
Example values:
To configure [dual-stack](https://docs.rke2.io/install/network_options#dual-stack-configuration) mode, enter a valid IPv4/IPv6 CIDR. For example `10.42.0.0/16,2001:cafe:42:0::/56`.
- IPv4-only: `10.43.0.0/16`
- IPv6-only: `2001:cafe:43::/112`
- Dual-stack: `10.43.0.0/16,2001:cafe:43::/112`
[Additional configuration](#dual-stack-additional-configuration) is required when using `cilium` or `multus,cilium` as your [container network](#container-network-provider) interface provider.
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [RKE2 documentation: Dual-stack configuration](https://docs.rke2.io/networking/basic_network_options#dual-stack-configuration)
- [RKE2 documentation: IPv6-only setup](https://docs.rke2.io/networking/basic_network_options#ipv6-setup)
:::caution
You must configure the Service CIDR when you first create the cluster. You cannot change it on an existing cluster after it starts.
:::
:::caution
When using `cilium` or `multus,cilium` as your container network interface provider, ensure the **Enable IPv6 Support** option is also enabled.
:::
##### Cluster DNS
IPv4 Cluster IP for coredns service. Should be in your service-cidr range (default: 10.43.0.10).
IPv4 Cluster IP for coredns service. Should be in your service-cidr range (default: `10.43.0.10`).
##### Cluster Domain
@@ -213,11 +248,11 @@ Option to change the range of ports that can be used for [NodePort services](htt
##### Truncate Hostnames
Option to truncate hostnames to 15 characters or less. You can only set this field during the initial creation of the cluster. You can't enable or disable the 15 character limit after cluster creation.
Option to truncate hostnames to 15 characters or fewer. You can only set this field during the initial creation of the cluster. You can't enable or disable the 15-character limit after cluster creation.
This setting only affects machine-provisioned clusters. Since custom clusters set hostnames during their own node creation process, which occurs outside of Rancher, this field doesn't restrict custom cluster hostname length.
Truncating hostnames in a cluster improves compatibility with Windows-based systems. Although Kubernetes allows hostnames up to 63 characters in length, systems that use NetBIOS restrict hostnames to 15 characters or less.
Truncating hostnames in a cluster improves compatibility with Windows-based systems. Although Kubernetes allows hostnames up to 63 characters in length, systems that use NetBIOS restrict hostnames to 15 characters or fewer.
##### TLS Alternate Names
@@ -233,6 +268,33 @@ For more detail on how an authorized cluster endpoint works and why it is used,
We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the [recommended architecture section.](../../rancher-manager-architecture/architecture-recommendations.md#architecture-for-an-authorized-cluster-endpoint-ace)
##### Stack Preference
Choose the networking stack for the cluster. This option affects:
- The address used for health and readiness probes of components such as Calico, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, and kubelet.
- The server URL in the `authentication-token-webhook-config-file` for the Authorized Cluster Endpoint.
- The `advertise-client-urls` setting for etcd during snapshot restoration.
Options are `ipv4`, `ipv6`, `dual`:
- When set to `ipv4`, the cluster uses `127.0.0.1`
- When set to `ipv6`, the cluster uses `[::1]`
- When set to `dual`, the cluster uses `localhost`
The stack preference must match the cluster's networking configuration:
- Set to `ipv4` for IPv4-only clusters
- Set to `ipv6` for IPv6-only clusters
- Set to `dual` for dual-stack clusters
:::caution
Ensuring the loopback address configuration is correct is critical for successful cluster provisioning.
For more information, refer to the [Node Requirements](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md) page.
:::
#### Registries
Select the image repository to pull Rancher images from. For more details and configuration options, see the [RKE2 documentation](https://docs.rke2.io/install/private_registry).
@@ -1,57 +0,0 @@
---
title: Rancher Agent Options
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/rancher-agent-options"/>
</head>
Rancher deploys an agent on each node to communicate with the node. This page describes the options that can be passed to the agent. To use these options, you will need to [create a cluster with custom nodes](use-existing-nodes.md) and add the options to the generated `docker run` command when adding a node.
For an overview of how Rancher communicates with downstream clusters using node agents, refer to the [architecture section.](../../../rancher-manager-architecture/communicating-with-downstream-user-clusters.md#3-node-agents)
## General options
| Parameter | Environment variable | Description |
| ---------- | -------------------- | ----------- |
| `--server` | `CATTLE_SERVER` | The configured Rancher `server-url` setting which the agent connects to |
| `--token` | `CATTLE_TOKEN` | Token that is needed to register the node in Rancher |
| `--ca-checksum` | `CATTLE_CA_CHECKSUM` | The SHA256 checksum of the configured Rancher `cacerts` setting to validate |
| `--node-name` | `CATTLE_NODE_NAME` | Override the hostname that is used to register the node (defaults to `hostname -s`) |
| `--label` | `CATTLE_NODE_LABEL` | Add node labels to the node. For multiple labels, pass additional `--label` options. (`--label key=value`) |
| `--taints` | `CATTLE_NODE_TAINTS` | Add node taints to the node. For multiple taints, pass additional `--taints` options. (`--taints key=value:effect`) |
## Role options
| Parameter | Environment variable | Description |
| ---------- | -------------------- | ----------- |
| `--all-roles` | `ALL=true` | Apply all roles (`etcd`,`controlplane`,`worker`) to the node |
| `--etcd` | `ETCD=true` | Apply the role `etcd` to the node |
| `--controlplane` | `CONTROL=true` | Apply the role `controlplane` to the node |
| `--worker` | `WORKER=true` | Apply the role `worker` to the node |
## IP address options
| Parameter | Environment variable | Description |
| ---------- | -------------------- | ----------- |
| `--address` | `CATTLE_ADDRESS` | The IP address the node will be registered with (defaults to the IP used to reach `8.8.8.8`) |
| `--internal-address` | `CATTLE_INTERNAL_ADDRESS` | The IP address used for inter-host communication on a private network |
### Dynamic IP address options
For automation purposes, you can't have a specific IP address in a command as it has to be generic to be used for every node. For this, we have dynamic IP address options. They are used as a value to the existing IP address options. This is supported for `--address` and `--internal-address`.
| Value | Example | Description |
| ---------- | -------------------- | ----------- |
| Interface name | `--address eth0` | The first configured IP address will be retrieved from the given interface |
| `ipify` | `--address ipify` | Value retrieved from `https://api.ipify.org` will be used |
| `awslocal` | `--address awslocal` | Value retrieved from `http://169.254.169.254/latest/meta-data/local-ipv4` will be used |
| `awspublic` | `--address awspublic` | Value retrieved from `http://169.254.169.254/latest/meta-data/public-ipv4` will be used |
| `doprivate` | `--address doprivate` | Value retrieved from `http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address` will be used |
| `dopublic` | `--address dopublic` | Value retrieved from `http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/address` will be used |
| `azprivate` | `--address azprivate` | Value retrieved from `http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/privateIpAddress?api-version=2017-08-01&format=text` will be used |
| `azpublic` | `--address azpublic` | Value retrieved from `http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/publicIpAddress?api-version=2017-08-01&format=text` will be used |
| `gceinternal` | `--address gceinternal` | Value retrieved from `http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip` will be used |
| `gceexternal` | `--address gceexternal` | Value retrieved from `http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip` will be used |
| `packetlocal` | `--address packetlocal` | Value retrieved from `https://metadata.packet.net/2009-04-04/meta-data/local-ipv4` will be used |
| `packetpublic` | `--address packetpublic` | Value retrieved from `https://metadata.packet.net/2009-04-04/meta-data/public-ipv4` will be used |
@@ -9,7 +9,7 @@ description: To create a cluster with custom nodes, youll need to access serv
When you create a custom cluster, Rancher can use RKE2/K3s to create a Kubernetes cluster in on-prem bare-metal servers, on-prem virtual machines, or in any node hosted by an infrastructure provider.
To use this option you'll need access to servers you intend to use in your Kubernetes cluster. Provision each server according to the [requirements](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md), which includes some hardware specifications and Docker. After you install Docker on each server, you willl also run the command provided in the Rancher UI on each server to turn each one into a Kubernetes node.
To use this option, you need access to the servers that will be part of your Kubernetes cluster. Provision each server according to the [requirements](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md). Then, run the command provided in the Rancher UI on each server to convert it into a Kubernetes node.
This section describes how to set up a custom cluster.
@@ -33,7 +33,15 @@ If you want to reuse a node from a previous custom cluster, [clean the node](../
Provision the host according to the [installation requirements](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md) and the [checklist for production-ready clusters.](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/checklist-for-production-ready-clusters.md)
If you're using Amazon EC2 as your host and want to use the [dual-stack](https://kubernetes.io/docs/concepts/services-networking/dual-stack/) feature, there are additional [requirements](https://rancher.com/docs/rke//latest/en/config-options/dual-stack#requirements) when provisioning the host.
:::note IPv6-only cluster
For an IPv6-only cluster, ensure that your operating system correctly configures the `/etc/hosts` file.
```
::1 localhost
```
:::
### 2. Create the Custom Cluster
@@ -41,39 +49,43 @@ If you're using Amazon EC2 as your host and want to use the [dual-stack](https:/
1. On the **Clusters** page, click **Create**.
1. Click **Custom**.
1. Enter a **Cluster Name**.
1. Use **Cluster Configuration** section to choose the version of Kubernetes, what network provider will be used and if you want to enable project network isolation. To see more cluster options, click on **Show advanced options**.
1. Use the **Cluster Configuration** section to set up the cluster. For more information, see [RKE2 Cluster Configuration Reference](../rke2-cluster-configuration.md) and [K3s Cluster Configuration Reference](../k3s-cluster-configuration.md).
:::note Windows nodes

To learn more about using Windows nodes as Kubernetes workers, see [Launching Kubernetes on Windows Clusters](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/use-windows-clusters.md).

:::
1. Click **Create**.

**Result:** The UI redirects to the **Registration** page, where you can generate the registration command for your nodes.
1. From **Node Role**, select the roles you want a cluster node to fill. You must provision at least one node for each role: etcd, worker, and control plane. A custom cluster requires all three roles to finish provisioning. For more information on roles, see [Roles for Nodes in Kubernetes Clusters](../../../kubernetes-concepts.md#roles-for-nodes-in-kubernetes-clusters).
:::note Bare-Metal Server

If you plan to dedicate bare-metal servers to each role, you must provision a bare-metal server for each role (i.e., provision multiple bare-metal servers).

:::

1. **Optional**: Click **Show Advanced** to configure additional settings, such as specifying the IP address(es) to use when registering the node, overriding the node hostname, or adding [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to the node.

:::note

The **Node Public IP** and **Node Private IP** fields can accept either a single address or a comma-separated list of addresses (for example: `10.0.0.5,2001:db8::1`).

:::

:::note IPv6-only or Dual-stack Cluster

In both IPv6-only and dual-stack clusters, specify the node's **IPv6 address** as the **Node Private IP**.

:::
1. Copy the command displayed on screen to your clipboard.
1. Log in to your Linux host using your preferred shell, such as PuTTY or a remote terminal connection. Run the command copied to your clipboard.
:::note

Repeat steps 7-10 if you want to dedicate specific hosts to specific node roles.

:::
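For reference, the registration command generated by the UI generally has the following shape. This is an illustrative sketch only: `<RANCHER_SERVER_URL>` and `<REGISTRATION_TOKEN>` are placeholders, the role flags depend on the **Node Role** boxes you checked, and the exact command for your cluster must always be copied from the **Registration** page:

```sh
# Illustrative only — copy the real command from the Registration page.
# <RANCHER_SERVER_URL> and <REGISTRATION_TOKEN> are placeholders.
curl -fL https://<RANCHER_SERVER_URL>/system-agent-install.sh | sudo sh -s - \
  --server https://<RANCHER_SERVER_URL> \
  --token <REGISTRATION_TOKEN> \
  --etcd --controlplane --worker
```

The script installs and starts the `rancher-system-agent` service, which then registers the node with Rancher and provisions the selected roles.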
1. When you finish running the command(s) on your Linux host(s), click **Done**.
**Result:**
The cluster is created and transitions to the **Updating** state while Rancher initializes and provisions cluster components.
You can access your cluster after its state is updated to **Active**.
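Once the cluster is **Active**, you can confirm from the command line that each node registered with the roles you assigned. The command below assumes `kubectl` is configured against the new cluster (for example, via the kubeconfig downloaded from the Rancher UI):

```sh
# Assumes kubectl points at the new cluster's kubeconfig.
# Each node should report the roles you selected during registration.
kubectl get nodes -o wide
```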