Add v2.14 preview docs (#2212)

This commit is contained in:
Lucas Saintarbor
2026-03-05 12:30:57 -08:00
committed by GitHub
parent 4a0d71b3f3
commit 2dcfa6f6b8
874 changed files with 92618 additions and 0 deletions

View File

@@ -0,0 +1,32 @@
---
title: Cluster Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration"/>
</head>
After you provision a Kubernetes cluster using Rancher, you can still edit options and settings for the cluster.
For information on editing cluster membership, go to [this page.](../../how-to-guides/new-user-guides/manage-clusters/access-clusters/add-users-to-clusters.md)
## Cluster Configuration References
The cluster configuration options depend on the type of Kubernetes cluster:
- [RKE2 Cluster Configuration](rancher-server-configuration/rke2-cluster-configuration.md)
- [K3s Cluster Configuration](rancher-server-configuration/k3s-cluster-configuration.md)
- [EKS Cluster Configuration](rancher-server-configuration/eks-cluster-configuration.md)
- [GKE Cluster Configuration](rancher-server-configuration/gke-cluster-configuration/gke-cluster-configuration.md)
- [AKS Cluster Configuration](rancher-server-configuration/aks-cluster-configuration.md)
## Cluster Management Capabilities by Cluster Type
The options and settings available for an existing cluster change based on the method that you used to provision it.
The following table summarizes the options and settings available for each cluster type:
import ClusterCapabilitiesTable from '../../shared-files/_cluster-capabilities-table.md';
<ClusterCapabilitiesTable />

View File

@@ -0,0 +1,9 @@
---
title: Downstream Cluster Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration"/>
</head>
The following docs describe [machine configuration](machine-configuration/machine-configuration.md).

View File

@@ -0,0 +1,97 @@
---
title: EC2 Machine Configuration Reference
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/amazon-ec2"/>
</head>
For more details about EC2 nodes, refer to the official documentation for the [EC2 Management Console](https://aws.amazon.com/ec2).
### Region
The geographical [region](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) in which to build your cluster.
### Zone
The [zone](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-availability-zones), an isolated location within a region, in which to build your cluster.
### Instance Type
The [instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) used to provision your cluster, which determines its hardware characteristics.
### Root Disk Size
Configure the size (in GB) for your [root device](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html).
### VPC/Subnet
The [VPC](https://docs.aws.amazon.com/vpc/latest/userguide/configure-your-vpc.html) or specific [subnet](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html), an IP range in your VPC, to add your resources to.
### IAM Instance Profile Name
The name of the [instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) used to pass an IAM role to an EC2 instance.
## Advanced Options
### AMI ID
The [Amazon Machine Image](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) used for the nodes in your cluster.
### SSH Username for AMI
The username for connecting to your launched instances. Refer to the [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connection-prereqs.html) for the default usernames for selected AMIs. For AMIs not listed, check with the AMI provider.
### Security Group
Choose the default security group or configure a security group.
Please refer to [Amazon EC2 security group when using Node Driver](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-aws-ec2-security-group) to see what rules are created in the `rancher-nodes` Security Group.
### EBS Root Volume Type
The [EBS volume type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html) to use for the root device.
### Encrypt EBS Volume
Enable [Amazon EBS Encryption](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html).
### Request Spot Instance
Enable this option to [request spot instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html) and specify the maximum price per instance hour you're willing to pay.
### Use only private address
Enable this option to use only [private addresses](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html).
### EBS-Optimized Instance
Use an [EBS-optimized instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html).
### Allow access to EC2 metadata
Enable access to [EC2 metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html).
### Use tokens for metadata
Use [Instance Metadata Service Version 2 (IMDSv2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html), a token-based method to access metadata.
### Add Tag
Add metadata using [tags](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) to categorize resources.
### IPv6 Address Count
Specify how many IPv6 addresses to assign to the instance's network interface.
### IPv6 Address Only
Enable this option if the instance should use IPv6 exclusively. IPv6-only VPCs or subnets require this. When enabled, the instance will have IPv6 as its sole address, and the IPv6 Address Count must be greater than zero.
### HTTP Protocol IPv6
Enable or disable IPv6 endpoints for the instance metadata service.
### Enable Primary IPv6
Enable this option to designate the first assigned IPv6 address as the primary address. This ensures a consistent, non-changing IPv6 address for the instance. It does not control whether IPv6 addresses are assigned.
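When you provision RKE2 or K3s node pools with the EC2 node driver, these settings are stored in a machine configuration object. The sketch below is illustrative only: the kind and field names are assumed to mirror the options described on this page (for example `region`, `instanceType`, `rootSize`), and every ID and name is a placeholder, so verify them against the machine configuration objects your own Rancher installation generates.

```yaml
# Hedged sketch of an EC2 machine configuration object.
# Field names are assumed to mirror the options described on this page;
# all IDs and names are placeholders.
apiVersion: rke-machine-config.cattle.io/v1
kind: Amazonec2Config
metadata:
  name: example-ec2-pool              # hypothetical name
  namespace: fleet-default
region: us-east-1                     # Region
zone: a                               # Zone (letter appended to the region)
instanceType: t3a.medium              # Instance Type
rootSize: "40"                        # Root Disk Size (GB)
vpcId: vpc-0123456789abcdef0          # VPC
subnetId: subnet-0123456789abcdef0    # Subnet
iamInstanceProfile: example-profile   # IAM Instance Profile Name
ami: ami-0123456789abcdef0            # AMI ID (Advanced Options)
sshUser: ubuntu                       # SSH Username for AMI
```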

View File

@@ -0,0 +1,124 @@
---
title: Azure Machine Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/azure"/>
</head>
For more information about Azure, refer to the official [Azure documentation.](https://docs.microsoft.com/en-us/azure/?product=featured)
### Environment
Microsoft provides multiple [clouds](https://docs.microsoft.com/en-us/cli/azure/cloud?view=azure-cli-latest) for compliance with regional laws, which are available for your use:
- AzurePublicCloud
- AzureGermanCloud
- AzureChinaCloud
- AzureUSGovernmentCloud
### Location
Configure the cluster and node [location](https://docs.microsoft.com/en-us/azure/virtual-machines/regions).
### Resource Group
A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to allocate resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.
Use an existing resource group or enter a resource group name and one will be created for you.
For information on managing resource groups, see the [Azure documentation.](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal)
### Availability Set (unmanaged)
Name or ID of an existing [availability set](https://docs.microsoft.com/en-us/azure/virtual-machines/availability-set-overview) to add the VM to.
### Image
The name of the operating system image provided as an ARM resource identifier. Requires using managed disk.
### VM Size
Choose a size for each VM in the node pool. For details about each VM size, see [this page.](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/)
## Advanced Options
### Fault Domain Count
Fault domains define the group of virtual machines that share a common power source and network switch. If the availability set has already been created, the fault domain count will be ignored.
For more information on fault domains, refer to the [Azure documentation](https://docs.microsoft.com/en-us/azure/virtual-machines/availability-set-overview#how-do-availability-sets-work).
### Update Domain Count
Update domains indicate groups of virtual machines and underlying physical hardware that can be rebooted at the same time. If the availability set has already been created, the update domain count will be ignored.
For more information on update domains, refer to the [Azure documentation](https://docs.microsoft.com/en-us/azure/virtual-machines/availability-set-overview#how-do-availability-sets-work).
### Purchase Plan
Some VM images in the Azure Marketplace require a plan. If applicable, select a purchase plan, formatted as `publisher:product:plan`, to use with your chosen image.
### Subnet
The name of the subnet when creating a new VNet or referencing an existing one.
Default: `docker-machine`
### Subnet Prefix
The subnet IP address prefix to use when creating a new VNet in CIDR format.
Default: `192.168.0.0/16`
### Virtual Network
The [virtual network](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview) to use or create if one does not exist. Formatted as `[resourcegroup:]name`.
### Public IP Options
#### No Public IP
Do not allocate a public IP address.
#### Static Public IP
Allocate a static public IP address.
### Use Private IP
Use a static private IP address.
### Private IP Address
Configure a static private IP address to use.
### Network Security Group
The [network security group](https://docs.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview) to use. All nodes using this template will use the supplied network security group. If no network security group is provided, a new one will be created for each node.
### DNS Label
A unique DNS name label for the public IP address.
### Storage Type
The [storage account](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview) type to use with your VMs. Options include Standard LRS, Standard ZRS, Standard GRS, Standard RAGRS, and Premium LRS.
### Use Managed Disks
[Azure managed disks](https://docs.microsoft.com/en-us/azure/virtual-machines/managed-disks-overview) are block-level storage volumes that are managed by Azure and used with Azure Virtual Machines. Managed disks are designed for 99.999% availability. Managed disks achieve this by providing you with three replicas of your data, allowing for high durability.
### Managed Disk Size
The size in GB for the disk for each node.
### SSH Username
The username used to create an SSH connection to your nodes.
### Open Port
Opens inbound traffic on specified ports. When using an existing Network Security Group, Open Ports are ignored.
Default: `2379/tcp, 2380/tcp, 6443/tcp, 9796/tcp, 10250/tcp, 10251/tcp, 10252/tcp, 10256/tcp` and `8472/udp, 4789/udp`
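Taken together, these options end up in an Azure machine configuration object when you provision RKE2 or K3s node pools. The following is a hedged sketch: the kind and field names are assumed to mirror the Azure node driver options above, and all values are placeholders, so verify them against the objects your Rancher installation generates.

```yaml
# Hedged sketch of an Azure machine configuration object.
# Field names are assumed to mirror the options described on this page;
# all names are placeholders.
apiVersion: rke-machine-config.cattle.io/v1
kind: AzureConfig
metadata:
  name: example-azure-pool        # hypothetical name
  namespace: fleet-default
environment: AzurePublicCloud     # Environment
location: westus2                 # Location
resourceGroup: example-rg         # Resource Group
size: Standard_D2s_v3             # VM Size
vnet: example-rg:example-vnet     # Virtual Network ([resourcegroup:]name)
subnet: example-subnet            # Subnet
subnetPrefix: 192.168.0.0/16      # Subnet Prefix
managedDisks: true                # Use Managed Disks
diskSize: "60"                    # Managed Disk Size (GB)
sshUser: azureuser                # SSH Username
openPort:                         # Open Port
  - 6443/tcp
  - 9796/tcp
```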

View File

@@ -0,0 +1,39 @@
---
title: DigitalOcean Machine Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/digitalocean"/>
</head>
For more details about DigitalOcean Droplets, refer to the [official documentation](https://docs.digitalocean.com/products/compute/).
### Region
Configure the [region](https://docs.digitalocean.com/glossary/region/) where Droplets are created.
### Size
Configure the [size](https://docs.digitalocean.com/products/droplets/resources/choose-plan/) of Droplets.
### OS Image
Configure the operating system [image](https://docs.digitalocean.com/products/images/) Droplets are created from.
### Monitoring
Enable the DigitalOcean agent for additional [monitoring](https://docs.digitalocean.com/products/monitoring/).
### IPv6
Enable IPv6 for Droplets.
For more information, refer to the [DigitalOcean IPv6 documentation](https://docs.digitalocean.com/products/networking/ipv6).
### Private Networking
Enable private networking for Droplets.
### Droplet Tags
Apply a tag (label) to a Droplet. Tags may only contain letters, numbers, colons, dashes, and underscores. For example, `my_server`.
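Taken together, these options map onto a DigitalOcean machine configuration object when you provision RKE2 or K3s node pools. The sketch below is an assumption-based illustration: the kind and field names are presumed to mirror the DigitalOcean node driver options above, and every value is a placeholder.

```yaml
# Hedged sketch of a DigitalOcean machine configuration object.
# Field names are assumed to mirror the options described on this page.
apiVersion: rke-machine-config.cattle.io/v1
kind: DigitaloceanConfig
metadata:
  name: example-do-pool           # hypothetical name
  namespace: fleet-default
region: nyc3                      # Region
size: s-4vcpu-8gb                 # Size
image: ubuntu-22-04-x64           # OS Image
monitoring: true                  # Monitoring
ipv6: false                       # IPv6
privateNetworking: true           # Private Networking
tags: my_server,worker            # Droplet Tags
```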

View File

@@ -0,0 +1,87 @@
---
title: GCE Machine Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration/google-gce"/>
</head>
For more information about Google Cloud Platform (GCP) and the Google Compute Engine (GCE), refer to the official [GCP documentation](https://cloud.google.com/docs).
### Zone
The GCP Region and Zone that the VM will be deployed to. For example, `us-east1-b`.
### Machine Image Project
The image project that the desired image families belong to.
### Machine Image Family
The image family that the desired machine operating system belongs to.
### Machine Image
The operating system that will be installed onto the VM.
### Disk Type
The type of the disk attached to the VM. The available types may differ between regions.
### Disk Size
The size of the disk attached to the VM, in Gigabytes.
### Machine Type
The type of VM that will be deployed. Machine types determine the resources (vCPU, RAM, etc.) allocated to each node.
### Network
The VPC network that the VM will be created in. This value cannot be changed once the machine pool has been provisioned.
### Subnet
The VPC subnetwork that the VM will be created in. This value cannot be changed once the machine pool has been provisioned.
### Username
A custom username set as the default user of the GCE VM.
### External Address
The desired external IP address for the GCE VM.
### Scopes
A list of OAuth2 scopes which allow the VM to access other GCP APIs.
### Allow Internal Communication
By default, a VPC firewall rule is automatically created to expose a fixed set of ports within the VPC to facilitate communication between cluster nodes. This behavior can be disabled per machine pool by clicking the `Show Advanced` option and disabling the `Allow Internal Communication` checkbox.
### Expose External ports
A list of ports to be opened _externally_ to the wider internet. Open ports are defined at the machine pool level. Enabling this option will result in the automatic creation of a VPC firewall rule. This rule will be automatically deleted when the cluster or machine pool is deleted.
### Network Tags
A list of _network tags_, which can be used to associate preexisting firewall rules with all VMs within a machine pool.
### Labels
A comma-separated list of custom labels to be attached to all VMs within a given machine pool. Unlike tags, labels do not influence networking behavior and only serve to organize cloud resources.
## Advanced Options
When creating clusters via the Rancher UI, some options are automatically configured for you. However, when creating machine config objects manually, you must ensure that you properly configure the fields below.
### external-firewall-rule-prefix
A prefix that will be used when creating the firewall rule to expose ports publicly. Ideally, this should be a concatenation of the machine pool name and the cluster name. This field must be set if the machine pool is configured to expose ports publicly, otherwise it can be omitted.
### internal-firewall-rule-prefix
A prefix that will be used when creating the internal firewall rule which allows for communication between nodes within the cluster. If this field is omitted, no internal firewall rule will be created.
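As a hedged illustration, a manually created GCE machine configuration that sets both prefixes might look roughly like the sketch below. The kind name and the camelCase spellings are assumptions modeled on how the other node drivers translate their flags, so confirm them against the objects produced by your Rancher installation.

```yaml
# Hedged sketch of a manually created GCE machine configuration object.
# The kind and field spellings below are assumptions; verify them against
# the objects your Rancher installation generates.
apiVersion: rke-machine-config.cattle.io/v1
kind: GoogleConfig                # assumed kind name
metadata:
  name: example-gce-pool          # hypothetical name
  namespace: fleet-default
zone: us-east1-b                  # Zone
machineType: n2-standard-2        # Machine Type
network: example-vpc              # Network
subnetwork: example-subnet        # Subnet
# Assumed camelCase forms of external-firewall-rule-prefix and
# internal-firewall-rule-prefix described above:
externalFirewallRulePrefix: example-pool-example-cluster
internalFirewallRulePrefix: example-cluster
```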

View File

@@ -0,0 +1,9 @@
---
title: Machine Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/machine-configuration"/>
</head>
Machine configuration is the arrangement of resources assigned to a virtual machine. Please see the docs for [Amazon EC2](amazon-ec2.md), [DigitalOcean](digitalocean.md), [Google GCE](google-gce.md), and [Azure](azure.md) to learn more.

View File

@@ -0,0 +1,51 @@
---
title: EC2 Node Template Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/amazon-ec2"/>
</head>
For more details about EC2 nodes, refer to the official documentation for the [EC2 Management Console](https://aws.amazon.com/ec2).
### Region
In the **Region** field, select the same region that you used when creating your cloud credentials.
### Cloud Credentials
Your AWS account access information, stored in a [cloud credential.](../../../user-settings/manage-cloud-credentials.md)
See [Amazon Documentation: Creating Access Keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) for how to create an Access Key and Secret Key.
See [Amazon Documentation: Creating IAM Policies (Console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html#access_policies_create-start) for how to create an IAM policy.
See [Amazon Documentation: Adding Permissions to a User (Console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) for how to attach an IAM policy to a user.
See our three example JSON policies:
- [Example IAM Policy](../../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md#example-iam-policy)
- [Example IAM Policy with PassRole](../../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md#example-iam-policy-with-passrole) (needed if you want to use [Kubernetes Cloud Provider](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md) or want to pass an IAM Profile to an instance)
- [Example IAM Policy to allow encrypted EBS volumes](../../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md#example-iam-policy-to-allow-encrypted-ebs-volumes)
### Authenticate & Configure Nodes
Choose an availability zone and network settings for your cluster.
### Security Group
Choose the default security group or configure a security group.
Please refer to [Amazon EC2 security group when using Node Driver](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-aws-ec2-security-group) to see what rules are created in the `rancher-nodes` Security Group.
If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group is set to allow the [necessary ports for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#ports-for-rancher-server-nodes-on-rke2). For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups).
### Instance Options
Configure the instances that will be created. Make sure you configure the correct **SSH User** for the configured AMI. A selected region may not support the default instance type. In this scenario, you must select an instance type that exists in that region; otherwise an error will occur stating that the requested configuration is not supported.
If you need to pass an **IAM Instance Profile Name** (not ARN), for example, when you want to use a [Kubernetes Cloud Provider](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md), you will need an additional permission in your policy. See [Example IAM policy with PassRole](../../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md#example-iam-policy-with-passrole) for an example policy.
### Engine Options
In the **Engine Options** section of the node template, you can configure the container daemon. You may want to specify the container version or a container image registry mirror.

View File

@@ -0,0 +1,37 @@
---
title: Azure Node Template Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/azure"/>
</head>
For more information about Azure, refer to the official [Azure documentation.](https://docs.microsoft.com/en-us/azure/?product=featured)
Account access information is stored as a cloud credential. Cloud credentials are stored as Kubernetes secrets. Multiple node templates can use the same cloud credential. You can use an existing cloud credential or create a new one.
- **Placement** sets the geographical region where your cluster is hosted and other location metadata.
- **Network** configures the networking used in your cluster.
- **Instance** customizes your VM configuration.
:::note
If using a VNet in a different Resource Group than the VMs, the VNet name should be prefixed with the Resource Group name. For example, `<resource group>:<vnet>`.
:::
If you use Docker, the [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-daemon) configuration options include:
- **Labels:** For information on labels, refer to the [Docker object label documentation.](https://docs.docker.com/config/labels-custom-metadata/)
- **Docker Engine Install URL:** Determines what Docker version will be installed on the instance.
:::note
If you're provisioning Red Hat Enterprise Linux (RHEL) or CentOS nodes, leave the **Docker Install URL** field as the default value, or select **none**. This will bypass a check for Docker installation, as Docker is already installed on these node types.
If you set **Docker Install URL** to a value other than the default or **none**, you might see an error message such as the following: `Error creating machine: RHEL ssh command error: command: sudo -E yum install -y curl err: exit status 1 output: Updating Subscription Management repositories.`
:::
- **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon.
- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/).

View File

@@ -0,0 +1,22 @@
---
title: DigitalOcean Node Template Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/digitalocean"/>
</head>
Account access information is stored as a cloud credential. Cloud credentials are stored as Kubernetes secrets. Multiple node templates can use the same cloud credential. You can use an existing cloud credential or create a new one.
## Droplet Options
The **Droplet Options** configure your cluster's geographical region and specifications.
## Docker Daemon
If you use Docker, the [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-daemon) configuration options include:
- **Labels:** For information on labels, refer to the [Docker object label documentation.](https://docs.docker.com/config/labels-custom-metadata/)
- **Docker Engine Install URL:** Determines what Docker version will be installed on the instance.
- **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon.
- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/).

View File

@@ -0,0 +1,11 @@
---
title: Node Template Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration"/>
</head>
<EOLRKE1Warning />
To learn about node template config, refer to [EC2 Node Template Configuration](amazon-ec2.md), [DigitalOcean Node Template Configuration](digitalocean.md), [Azure Node Template Configuration](azure.md), [vSphere Node Template Configuration](vsphere.md), and [Nutanix Node Template Configuration](nutanix.md).

View File

@@ -0,0 +1,70 @@
---
title: Nutanix Node Template Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/nutanix"/>
</head>
## Account Access
| Parameter | Required | Description | Default
|:-----------------------------|:--------:|:-----------------------------------------------------------------|:-----
| Management Endpoint | ✓ | Hostname/IP address of Prism Central |
| Username | ✓ | Username of the Prism Central user |
| Password | ✓ | Password of the Prism Central user |
| Allow insecure communication | | Set to true to allow insecure SSL communication to Prism Central | False
## Scheduling
Choose what Nutanix cluster the virtual machine will be scheduled to.
| Parameter | Required | Description
|:----------|:--------:|:----------------------------------------------------------------------------
| Cluster | ✓ | Name of the Nutanix cluster where the VM should be deployed (case sensitive)
## Instance Options
In the **Instance Options** section, configure the number of vCPUs, memory, and disk size for the VMs created by this template.
| Parameter | Required | Description | Default
|:---------------------|:--------:|:--------------------------------------------------------------------------------------------|:-------
| CPUs | | Number of vCPUs allocated to the VM (cores) | 2
| Memory | | Amount of RAM allocated to the VM (MB) | 2 GB
| Template Image | ✓ | Name of the Disk Image template to clone as the VM's primary disk (must support cloud-init) |
| VM Disk Size | | New size of the VM's primary disk (in GiB) |
| Additional Disk Size | | Size of an additional disk to add to the VM (in GiB) |
| Storage Container | | Storage container _UUID_ in which to provision an additional disk |
| Cloud Config YAML | | Cloud-init to provide to the VM (will be patched with Rancher root user) |
| Network | ✓ | Name(s) of the network(s) to attach to the VM |
| VM Categories | | Name(s) of any categories to be applied to the VM |
The VM may use any modern Linux operating system that is configured with support for [cloud-init](https://cloudinit.readthedocs.io/en/latest/) using the [Config Drive v2 datasource](https://cloudinit.readthedocs.io/en/latest/reference/datasources/configdrive.html).
## Networks
The node template allows a VM to be provisioned with multiple networks. In the **Network** field, you can click **Add** to add any networks available to you in AOS.
## VM Categories
A category is a grouping of entities into a key-value pair. Typically, VMs are assigned to a category based on some criteria. Policies can then be tied to those entities that are assigned (grouped by) a specific category value.
## cloud-init
[Cloud-init](https://cloudinit.readthedocs.io/en/latest/) allows you to initialize your nodes by applying configuration on the first boot. This may involve things such as creating users or authorizing SSH keys.
To make use of cloud-init initialization, paste a cloud config using valid YAML syntax into the **Cloud Config YAML** field. Refer to the [cloud-init documentation](https://cloudinit.readthedocs.io/en/latest/topics/examples.html) for a commented set of examples of supported cloud config directives.
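For example, a minimal cloud config that creates a user and authorizes an SSH key on first boot could look like the following. The username and key are placeholders; Rancher still patches in its own root user, as noted in the table above.

```yaml
#cloud-config
# Minimal example: create a user and authorize an SSH key on first boot.
users:
  - name: example-user                          # placeholder username
    groups: [sudo]
    sudo: "ALL=(ALL) NOPASSWD:ALL"
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3Nza...example-key     # placeholder public key
```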
Note that cloud-init based network configuration is not recommended and only supported via user data `runcmd` rather than by NoCloud or other network configuration datasources.
Nutanix IP Address Management (IPAM) or another DHCP service is recommended.
## Engine Options
In the **Engine Options** section of the node template, you can configure the container daemon. You may want to specify the container version or a container image registry mirror.
:::note
If you're provisioning Red Hat Enterprise Linux (RHEL) or CentOS nodes, leave the **Docker Install URL** field as the default value, or select **none**. This will bypass a check for Docker installation, as Docker is already installed on these node types.
If you set **Docker Install URL** to a value other than the default or **none**, you might see an error message such as the following: `Error creating machine: RHEL ssh command error: command: sudo -E yum install -y curl err: exit status 1 output: Updating Subscription Management repositories.`
:::

View File

@@ -0,0 +1,100 @@
---
title: VMware vSphere Node Template Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/downstream-cluster-configuration/node-template-configuration/vsphere"/>
</head>
## Account Access
| Parameter | Required | Description |
|:----------------------|:--------:|:-----|
| Cloud Credentials | * | Your vSphere account access information, stored in a [cloud credential.](../../../user-settings/manage-cloud-credentials.md) |
Your cloud credential has these fields:
| Credential Field | Description |
|-----------------|--------------|
| vCenter or ESXi Server | Enter the vCenter or ESXi hostname/IP. ESXi is the virtualization platform where you create and run virtual machines and virtual appliances. vCenter Server is the service through which you manage multiple hosts connected in a network and pool host resources. |
| Port | Configure the port of the vCenter or ESXi server. |
| Username and password | Enter your vSphere login username and password. |
## Scheduling
Choose what hypervisor the virtual machine will be scheduled to.
The fields in the **Scheduling** section should auto-populate with the data center and other scheduling options that are available to you in vSphere.
| Field | Required | Explanation |
|---------|---------------|-----------|
| Data Center | * | Choose the name/path of the data center where the VM will be scheduled. |
| Resource Pool | | Name of the resource pool to schedule the VMs in. Resource pools can be used to partition available CPU and memory resources of a standalone host or cluster, and they can also be nested. Leave blank for standalone ESXi. If not specified, the default resource pool is used. |
| Data Store | * | If you have a data store cluster, you can toggle the **Data Store** field. This lets you select a data store cluster where your VM will be scheduled to. If the field is not toggled, you can select an individual disk. |
| Folder | | Name of a folder in the datacenter to create the VMs in. Must already exist. The VM folders in this dropdown menu directly correspond to your VM folders in vSphere. The folder name should be prefaced with `vm/` in your vSphere config file. |
| Host | | The IP of the host system to schedule VMs in. Leave this field blank for a standalone ESXi or for a cluster with DRS (Distributed Resource Scheduler). If specified, the host system's pool will be used and the **Resource Pool** parameter will be ignored. |
| Graceful Shutdown Timeout | | The amount of time, in seconds, that Rancher waits before deleting virtual machines on a cluster. If set to `0`, graceful shutdown is disabled. Only accepts integer values. |
## Instance Options
In the **Instance Options** section, configure the number of vCPUs, memory, and disk size for the VMs created by this template.
| Parameter | Required | Description |
|:----------------|:--------:|:-----------|
| CPUs | * | Number of vCPUS to assign to VMs. |
| Memory | * | Amount of memory to assign to VMs. |
| Disk | * | Size of the disk (in MB) to attach to the VMs. |
| Creation method | * | The method for setting up an operating system on the node. The operating system can be installed from an ISO or from a VM template. Depending on the creation method, you will also have to specify a VM template, content library, existing VM, or ISO. For more information on creation methods, refer to [About VM Creation Methods.](#about-vm-creation-methods) |
| Cloud Init | | URL of a `cloud-config.yml` file to provision VMs with. This file allows further customization of the operating system, such as network configuration, DNS servers, or system daemons. The operating system must support `cloud-init`. |
| Networks | | Name(s) of the network to attach the VM to. |
| Configuration Parameters used for guestinfo | | Additional configuration parameters for the VMs. These correspond to the [Advanced Settings](https://kb.vmware.com/s/article/1016098) in the vSphere console. Example use cases include providing RancherOS [guestinfo](https://rancher.com/docs/os/v1.x/en/installation/cloud/vmware-esxi/#vmware-guestinfo) parameters or enabling disk UUIDs for the VMs (`disk.EnableUUID=TRUE`). |
### About VM Creation Methods
In the **Creation method** field, configure the method used to provision VMs in vSphere. Available options include creating VMs that boot from a RancherOS ISO or creating VMs by cloning from an existing virtual machine or [VM template](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-F7BF0E6B-7C4F-4E46-8BBF-76229AEA7220.html).
The existing VM or template may use any modern Linux operating system that is configured with support for [cloud-init](https://cloudinit.readthedocs.io/en/latest/) using the [NoCloud datasource](https://canonical-cloud-init.readthedocs-hosted.com/en/latest/reference/datasources/nocloud.html).
Choose the way that the VM will be created:
- **Deploy from template: Data Center:** Choose a VM template that exists in the data center that you selected.
- **Deploy from template: Content Library:** First, select the [Content Library](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-254B2CE8-20A8-43F0-90E8-3F6776C2C896.html) that contains your template, then select the template from the populated list **Library templates**.
- **Clone an existing virtual machine:** In the **Virtual machine** field, choose an existing VM that the new VM will be cloned from.
- **Install from boot2docker ISO:** Ensure that the **OS ISO URL** field contains the URL of a VMware ISO release for RancherOS (`rancheros-vmware.iso`). Note that this URL must be accessible from the nodes running your Rancher server installation.
## Networks
The node template allows a VM to be provisioned with multiple networks. In the **Networks** field, you can click **Add Network** to add any networks available to you in vSphere.
## Node Tags and Custom Attributes
Tags allow you to attach metadata to objects in the vSphere inventory to make it easier to sort and search for these objects.
For tags, all your vSphere tags will show up as options to select from in your node template.
For custom attributes, Rancher lets you select any custom attributes that you have already set up in vSphere. Custom attributes are keys, and you can enter a value for each one.
:::note
Custom attributes are a legacy feature that will eventually be removed from vSphere.
:::
## cloud-init
[Cloud-init](https://cloudinit.readthedocs.io/en/latest/) allows you to initialize your nodes by applying configuration on the first boot. This may involve things such as creating users, authorizing SSH keys or setting up the network.
To make use of cloud-init initialization, create a cloud config file using valid YAML syntax and paste the file content in the **Cloud Init** field. Refer to the [cloud-init documentation](https://cloudinit.readthedocs.io/en/latest/topics/examples.html) for a commented set of examples of supported cloud config directives.
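For instance, a small cloud config that authorizes an SSH key for the default user and installs a couple of packages on first boot might look like the following; all values are placeholders.

```yaml
#cloud-config
# Placeholder example of supported cloud config directives.
ssh_authorized_keys:
  - ssh-ed25519 AAAAC3Nza...example-key   # placeholder public key
package_update: true
packages:
  - curl
  - open-vm-tools
runcmd:
  - systemctl enable --now open-vm-tools
```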
Note that cloud-init is not supported when using the ISO creation method.
## Engine Options
In the **Engine Options** section of the node template, you can configure the container daemon. You may want to specify the container version or a container image registry mirror.
:::note
If you're provisioning Red Hat Enterprise Linux (RHEL) or CentOS nodes, leave the **Docker Install URL** field as the default value, or select **none**. This will bypass a check for Docker installation, as Docker is already installed on these node types.
If you set **Docker Install URL** to a value other than the default or **none**, you might see an error message such as the following: `Error creating machine: RHEL ssh command error: command: sudo -E yum install -y curl err: exit status 1 output: Updating Subscription Management repositories.`
:::

View File

@@ -0,0 +1,221 @@
---
title: AKS Cluster Configuration Reference
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/aks-cluster-configuration"/>
</head>
## Role-based Access Control
When provisioning an AKS cluster in the Rancher UI, RBAC cannot be disabled. If RBAC is disabled in the AKS cluster, the cluster cannot be registered or imported into Rancher. In practice, this means that local accounts must be enabled in order to register or import an AKS cluster.
Rancher can configure member roles for AKS clusters in the same way as any other cluster. For more information, see the section on [role-based access control.](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/manage-role-based-access-control-rbac.md)
## Cloud Credentials
:::note
The configuration information in this section assumes you have already set up a service principal for Rancher. For step-by-step instructions for how to set up the service principal, see [this section.](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/aks.md#prerequisites-in-microsoft-azure)
:::
### Subscription ID
To get the subscription ID, click **All Services** in the left navigation bar. Then click **Subscriptions**. Go to the name of the subscription that you want to associate with your Kubernetes cluster and copy the **Subscription ID**.
### Client ID
To get the client ID, go to the Azure Portal, then click **Azure Active Directory**, then click **App registrations,** then click the name of the service principal. The client ID is listed on the app registration detail page as **Application (client) ID**.
### Client Secret
You can't retrieve the client secret value after it is created, so if you don't already have a client secret value, you will need to create a new client secret.
To get a new client secret, go to the Azure Portal, then click **Azure Active Directory**, then click **App registrations,** then click the name of the service principal.
Then click **Certificates & secrets** and click **New client secret**. Click **Add**. Then copy the **Value** of the new client secret.
### Environment
Microsoft provides multiple [clouds](https://docs.microsoft.com/en-us/cli/azure/cloud?view=azure-cli-latest) for compliance with regional laws, which are available for your use:
- AzurePublicCloud
- AzureGermanCloud
- AzureChinaCloud
- AzureUSGovernmentCloud
## Account Access
In this section you will need to select an existing Azure cloud credential or create a new one.
For help configuring your Azure cloud credential, see [this section.](#cloud-credentials)
## Cluster Location
Configure the cluster and node location. For more information on availability zones for AKS, see the [AKS documentation.](https://docs.microsoft.com/en-us/azure/aks/availability-zones)
The high availability locations include multiple availability zones.
## Cluster Options
### Kubernetes Version
The available Kubernetes versions are dynamically fetched from the Azure API.
### Cluster Resource Group
A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to allocate resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.
Use an existing resource group or enter a resource group name and one will be created for you.
If you use a resource group that contains an existing AKS cluster, a new resource group is created, since Azure AKS only allows one AKS cluster per resource group.
For information on managing resource groups, see the [Azure documentation.](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal)
### Linux Admin Username
The username used to create an SSH connection to the Linux nodes.
The default username for AKS nodes is `azureuser`.
### SSH Public Key
The key used to create an SSH connection to the Linux nodes.
### Tags
Cluster tags can be useful if your organization uses tags as a way to organize resources across multiple Azure services. These tags don't apply to resources within the cluster.
## Networking Options
### LoadBalancer SKU
Azure load balancers support both standard and basic SKUs (stock keeping units).
For a comparison of standard and basic load balancers, see the official [Azure documentation.](https://docs.microsoft.com/en-us/azure/load-balancer/skus#skus) Microsoft recommends the Standard load balancer.
The Standard load balancer is required if you have selected one or more availability zones, or if you have more than one node pool.
### Network Policy
All pods in an AKS cluster can send and receive traffic without limitations, by default. To improve security, you can define rules that control the flow of traffic. The Network Policy feature in Kubernetes lets you define rules for ingress and egress traffic between pods in a cluster.
Azure provides two ways to implement network policy. You choose a network policy option when you create an AKS cluster. The policy option can't be changed after the cluster is created:
- Azure's own implementation, called Azure Network Policies. The Azure network policy requires the Azure CNI.
- Calico Network Policies, an open-source network and network security solution founded by [Tigera](https://www.tigera.io/).
You can also choose to have no network policy.
For more information about the differences between Azure and Calico network policies and their capabilities, see the [AKS documentation.](https://docs.microsoft.com/en-us/azure/aks/use-network-policies#differences-between-azure-and-calico-policies-and-their-capabilities)
### DNS Prefix
Enter a unique DNS prefix for your cluster's Kubernetes API server FQDN.
### Network Plugin
There are two network plugins: kubenet and Azure CNI.
The [kubenet](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#kubenet) Kubernetes plugin is the default configuration for AKS cluster creation. When kubenet is used, each node in the cluster receives a routable IP address. The pods use NAT to communicate with other resources outside the AKS cluster. This approach reduces the number of IP addresses you need to reserve in your network space for pods to use.
With the Azure CNI (advanced) networking plugin, pods get full virtual network connectivity and can be directly reached via their private IP address from connected networks. This plugin requires more IP address space.
For more information on the differences between kubenet and Azure CNI, see the [AKS documentation.](https://docs.microsoft.com/en-us/azure/aks/concepts-network#compare-network-models)
### HTTP Application Routing
When enabled, the HTTP application routing add-on makes it easier to access applications deployed to the AKS cluster. It deploys two components: a [Kubernetes Ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress/) and an [External-DNS](https://github.com/kubernetes-incubator/external-dns) controller.
For more information, see the [AKS documentation.](https://docs.microsoft.com/en-us/azure/aks/http-application-routing)
### Set Authorized IP Ranges
You can secure access to the Kubernetes API server using [authorized IP address ranges.](https://docs.microsoft.com/en-us/azure/aks/api-server-authorized-ip-ranges#overview-of-api-server-authorized-ip-ranges)
The Kubernetes API server exposes the Kubernetes API. This component provides the interaction for management tools, such as kubectl. AKS provides a single-tenant cluster control plane with a dedicated API server. By default, the API server is assigned a public IP address, and you should control access to it using Kubernetes-based or Azure-based RBAC.
To secure access to the otherwise publicly accessible AKS control plane and API server, you can enable and use authorized IP ranges. These authorized IP ranges only allow defined IP address ranges to communicate with the API server.
However, even if you use authorized IP address ranges, you should still use Kubernetes RBAC or Azure RBAC to authorize users and the actions they request.
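As a hedged illustration of how this setting is represented, the AKS cluster configuration that Rancher stores tracks authorized ranges roughly as sketched below. The kind and field names are assumptions based on the AKS operator's configuration resource, not confirmed names, and the CIDRs are placeholders; verify everything against your own Rancher installation.

```yaml
# Hedged sketch: field names are assumptions; verify them against the
# AKS cluster configuration objects your Rancher installation creates.
apiVersion: aks.cattle.io/v1
kind: AKSClusterConfig            # assumed kind name
metadata:
  name: example-aks-cluster       # hypothetical name
spec:
  resourceGroup: example-rg
  resourceLocation: eastus
  authorizedIpRanges:             # only these CIDRs may reach the API server
    - 203.0.113.0/24
    - 198.51.100.10/32
```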
### Container Monitoring
Container monitoring gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring, metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are written to the metrics store and log data is written to the logs store associated with your [Log Analytics](https://docs.microsoft.com/en-us/azure/azure-monitor/logs/log-query-overview) workspace.
### Log Analytics Workspace Resource Group
The [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview#resource-groups) containing the Log Analytics Workspace. You must create at least one workspace to use Azure Monitor Logs.
### Log Analytics Workspace Name
Data collected by Azure Monitor Logs is stored in one or more [Log Analytics workspaces.](https://docs.microsoft.com/en-us/azure/azure-monitor/logs/design-logs-deployment) The workspace defines the geographic location of the data, access rights defining which users can access data, and configuration settings such as the pricing tier and data retention.
You must create at least one workspace to use Azure Monitor Logs. A single workspace may be sufficient for all of your monitoring data, or you may choose to create multiple workspaces depending on your requirements. For example, you might have one workspace for your production data and another for testing.
For more information about Azure Monitor Logs, see the [Azure documentation.](https://docs.microsoft.com/en-us/azure/azure-monitor/logs/data-platform-logs)
### Support Private Kubernetes Service
Typically, AKS worker nodes do not get public IPs, regardless of whether the cluster is private. In a private cluster, the control plane does not have a public endpoint.
Rancher can connect to a private AKS cluster in one of two ways.
The first way is to ensure that Rancher is running on the same [NAT](https://docs.microsoft.com/en-us/azure/virtual-network/nat-overview) as the AKS nodes.
The second way is to run a command to register the cluster with Rancher. Once the cluster is provisioned, you can run the displayed command anywhere you can connect to the cluster's Kubernetes API. This command is displayed in a pop-up when you provision an AKS cluster with a private API endpoint enabled.
:::note
Be aware that when registering an existing AKS cluster, the cluster might take some time, possibly hours, to appear in the `Cluster To register` dropdown list. This delay depends on the region.
:::
For more information about connecting to an AKS private cluster, see the [AKS documentation.](https://docs.microsoft.com/en-us/azure/aks/private-clusters#options-for-connecting-to-the-private-cluster)
## Node Pools
### Mode
The Azure interface allows users to specify whether a Primary Node Pool uses `system` mode (normally used for control planes) or `user` mode (most typically needed for Rancher).
For Primary Node Pools, you can specify Mode, OS, Count and Size.
System node pools always require running nodes, so they cannot be scaled below one node. At least one system node pool is required.
For subsequent node pools, the Rancher UI forces the default of User. User node pools allow you to scale to zero nodes. User node pools don't run any part of the Kubernetes control plane.
AKS doesn't expose the nodes that run the Kubernetes control plane components.
### Availability Zones
[Availability zones](https://docs.microsoft.com/en-us/azure/availability-zones/az-overview) are unique physical locations within a region. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking.
Not all regions have support for availability zones. For a list of Azure regions with availability zones, see the [Azure documentation.](https://docs.microsoft.com/en-us/azure/availability-zones/az-region#azure-regions-with-availability-zones)
### VM Size
Choose a size for each VM in the node pool. For details about each VM size, see [this page.](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/)
### OS Disk Type
The nodes in the node pool can have either managed or ephemeral disks.
[Ephemeral OS disks](https://docs.microsoft.com/en-us/azure/virtual-machines/ephemeral-os-disks) are created on the local virtual machine storage and not saved to the remote Azure Storage. Ephemeral OS disks work well for stateless workloads, where applications are tolerant of individual VM failures, but are more affected by VM deployment time or reimaging the individual VM instances. With Ephemeral OS disk, you get lower read/write latency to the OS disk and faster VM reimage.
[Azure managed disks](https://docs.microsoft.com/en-us/azure/virtual-machines/managed-disks-overview) are block-level storage volumes that are managed by Azure and used with Azure Virtual Machines. Managed disks are designed for 99.999% availability. Managed disks achieve this by providing you with three replicas of your data, allowing for high durability.
### OS Disk Size
The size in GB for the disk for each node.
### Node Count
The number of nodes in the node pool. The maximum number of nodes may be limited by your [Azure subscription.](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits)
### Max Pods Per Node
The maximum number of pods per node defaults to 110 with a maximum of 250.
### Enable Auto Scaling
When auto scaling is enabled, you will need to enter a minimum and maximum node count.
When Auto Scaling is enabled, you can't manually scale the node pool. The scale is controlled by the AKS autoscaler.

View File

@@ -0,0 +1,180 @@
---
title: EKS Cluster Configuration Reference
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/eks-cluster-configuration"/>
</head>
### Account Access
Complete each drop-down and field using the information obtained for your IAM policy.
| Setting | Description |
| ---------- | -------------------------------------------------------------------------------------------------------------------- |
| Region | From the drop-down choose the geographical region in which to build your cluster. |
| Cloud Credentials | Select the cloud credentials that you created for your IAM policy. For more information on creating cloud credentials in Rancher, refer to [this page.](../../user-settings/manage-cloud-credentials.md) |
### Service Role
Choose a [service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html).
Service Role | Description
-------------|---------------------------
Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster.
Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you've already created within AWS. For more information on creating a custom service role in AWS, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#create-service-linked-role).
### Secrets Encryption
Optional: To encrypt secrets, select or enter a key created in [AWS Key Management Service (KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html).
### API Server Endpoint Access
Configuring Public/Private API access is an advanced use case. For details, refer to the EKS cluster endpoint access control [documentation.](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html)
### Private-only API Endpoints
If you enable private and disable public API endpoint access when creating a cluster, then there is an extra step you must take in order for Rancher to connect to the cluster successfully. In this case, a pop-up will be displayed with a command that you will run on the cluster to register it with Rancher. Once the cluster is provisioned, you can run the displayed command anywhere you can connect to the cluster's Kubernetes API.
There are two ways to avoid this extra manual step:
- You can enable both private and public API endpoint access when creating the cluster. After the cluster is created and in an active state, you can disable public access, and Rancher will continue to communicate with the EKS cluster.
- You can ensure that Rancher shares a subnet with the EKS cluster. Then security groups can be used to enable Rancher to communicate with the cluster's API endpoint. In this case, the command to register the cluster is not needed, and Rancher will be able to communicate with your cluster. For more information on configuring security groups, refer to the [security groups documentation](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html).
### Public Access Endpoints
Optionally limit access to the public endpoint via explicit CIDR blocks.
If you limit access to specific CIDR blocks, it is recommended that you also enable private access to avoid losing network communication to the cluster.
One of the following is required to enable private access:
- Rancher's IP must be part of an allowed CIDR block
- Private access should be enabled, and Rancher must share a subnet with the cluster and have network access to the cluster, which can be configured with a security group
For more information about public and private access to the cluster endpoint, refer to the [Amazon EKS documentation.](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html)
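As a hedged sketch of how these options relate, the EKS cluster configuration that Rancher stores tracks endpoint access roughly as shown below. The kind and field names are assumptions based on the EKS operator's configuration resource, and the region, version, and CIDR values are placeholders; verify everything against your own Rancher installation.

```yaml
# Hedged sketch: field names are assumptions; verify them against the
# EKS cluster configuration objects your Rancher installation creates.
apiVersion: eks.cattle.io/v1
kind: EKSClusterConfig            # assumed kind name
metadata:
  name: example-eks-cluster       # hypothetical name
spec:
  region: us-west-2
  kubernetesVersion: "1.31"
  publicAccess: true              # keep the public endpoint enabled...
  publicAccessSources:            # ...but restrict it to explicit CIDR blocks
    - 203.0.113.0/24
  privateAccess: true             # recommended when restricting public CIDRs
```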
### Subnet
| Option | Description |
| ------- | ------------ |
| Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC with 3 public subnets. |
| Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your Control Plane and nodes to use a VPC and Subnet that you've already [created in AWS](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html). |
For more information, refer to the AWS documentation for [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html). Follow one of the sets of instructions below based on your selection from the previous step.
- [What Is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
- [VPCs and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)
### Security Group
Amazon Documentation:
- [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)
- [Security Groups for Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)
- [Create a Security Group](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html#getting-started-create-security-group)
### Logging
Configure control plane logs to send to Amazon CloudWatch. You are charged the standard CloudWatch Logs data ingestion and storage costs for any logs sent to CloudWatch Logs from your clusters.
Each log type corresponds to a component of the Kubernetes control plane. To learn more about these components, see [Kubernetes Components](https://kubernetes.io/docs/concepts/overview/components/) in the Kubernetes documentation.
For more information on EKS control plane logging, refer to the official [documentation.](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)
### Managed Node Groups
Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
For more information about how node groups work and how they are configured, refer to the [EKS documentation.](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html)
#### User-provided Launch Templates
You can provide your own launch template ID and version to configure the EC2 instances in a node group. If you provide the launch template, none of the template settings will be configurable from Rancher. You must set all of the required options listed below in your launch template.
Also, if you provide the launch template, you can only update the template version, not the template ID. To use a new template ID, create a new managed node group.
| Option | Description | Required/Optional |
| ------ | ----------- | ----------------- |
| Instance Type | Choose the [hardware specs](https://aws.amazon.com/ec2/instance-types/) for the instance you're provisioning. | Required |
| Image ID | Specify a custom AMI for the nodes. Custom AMIs used with EKS must be [configured properly](https://aws.amazon.com/premiumsupport/knowledge-center/eks-custom-linux-ami/) | Optional |
| Node Volume Size | The launch template must specify an EBS volume with the desired size | Required |
| SSH Key | A key to be added to the instances to provide SSH access to the nodes | Optional |
| User Data | Cloud init script in [MIME multi-part format](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-user-data) | Optional |
| Instance Resource Tags | Tag each EC2 instance and its volumes in the node group | Optional |
:::caution
You can't directly update a node group to a newer Kubernetes version if the node group was created from a custom launch template. You must create a new launch template with the proper Kubernetes version, and associate the node group with the new template.
:::
#### Rancher-managed Launch Templates
If you do not specify a launch template, you can configure the above options in the Rancher UI, and all of them can be updated after creation. To support these options, Rancher creates and manages a launch template for you.
Each cluster in Rancher has one Rancher-managed launch template, and each managed node group that does not specify its own launch template uses one version of the managed template. The name of this launch template has the prefix "rancher-managed-lt-" followed by the display name of the cluster, and the template is tagged with the key "rancher-managed-template" and value "do-not-modify-or-delete" to identify it as Rancher-managed.
Do not modify, delete, or use this launch template or its versions with any other clusters or managed node groups. Doing so could result in your node groups becoming "degraded" and needing to be destroyed and recreated.
#### Custom AMIs
If you specify a custom AMI, whether in a launch template or in Rancher, then the image must be [configured properly](https://aws.amazon.com/premiumsupport/knowledge-center/eks-custom-linux-ami/) and you must provide user data to [bootstrap the node](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-custom-ami). This is considered an advanced use case and understanding the requirements is imperative.
If you specify a launch template that does not contain a custom AMI, then Amazon will use the [EKS-optimized AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) for the Kubernetes version and selected region. You can also select a [GPU enabled instance](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html#gpu-ami) for workloads that would benefit from it.
:::note
The GPU enabled instance setting in Rancher is ignored if a custom AMI is provided, either in the dropdown or in a launch template.
:::
#### Spot instances
Spot instances are [supported by EKS](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html#managed-node-group-capacity-types-spot). If a launch template is specified, Amazon recommends that the template not provide an instance type; instead, provide multiple instance types. If the "Request Spot Instances" checkbox is enabled for a node group, you will have the opportunity to provide multiple instance types.
:::note
Any selection you made in the instance type dropdown will be ignored in this situation, and you must specify at least one instance type in the "Spot Instance Types" section. Furthermore, a launch template used with EKS cannot request spot instances; requesting spot instances must be part of the EKS configuration.
:::
#### Node Group Settings
The following settings are also configurable. All of these except for the "Node Group Name" are editable after the node group is created.
| Option | Description |
| ------- | ------------ |
| Node Group Name | The name of the node group. |
| Desired ASG Size | The desired number of instances. |
| Maximum ASG Size | The maximum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. |
| Minimum ASG Size | The minimum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. |
| Labels | Kubernetes labels applied to the nodes in the managed node group. |
| Tags | These are tags for the managed node group and do not propagate to any of the associated resources. |
### Self-managed Amazon Linux Nodes
You can register an EKS cluster containing self-managed Amazon Linux nodes. You must configure this type of cluster according to the instructions in the official AWS documentation for [launching self-managed Amazon Linux nodes](https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html). EKS clusters containing self-managed Amazon Linux nodes are often managed using the [Karpenter](https://karpenter.sh/docs/) project. After you provision an EKS cluster containing self-managed Amazon Linux nodes, [register the cluster](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md) so it can be managed by Rancher. However, the nodes won't be visible in the Rancher UI.
### IAM Roles for Service Accounts
Applications running on an EKS cluster can make requests to AWS services using IAM permissions. These applications must sign their requests with AWS credentials. IAM roles for service accounts manage these credentials using an AWS OIDC endpoint. Rather than distributing AWS credentials to containers or relying on an EC2 instance's role, you can link an [IAM role to a Kubernetes service account](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) and configure your Pods to use this account.
:::note
Linking to an IAM role is not supported for Rancher pods in an EKS cluster.
:::
To enable IAM roles for service accounts:
1. [Create an IAM OIDC provider for your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html)
1. [Configure a Kubernetes service account to assume an IAM role](https://docs.aws.amazon.com/eks/latest/userguide/associate-service-account-role.html)
1. [Configure Pods to use a Kubernetes service account](https://docs.aws.amazon.com/eks/latest/userguide/pod-configuration.html)
1. [Use a supported AWS SDK](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html)
### Configuring the Refresh Interval
The `eks-refresh-cron` setting is deprecated. It has been migrated to the `eks-refresh` setting, which is an integer representing seconds.
The default value is 300 seconds.
The syncing interval can be changed by running `kubectl edit setting eks-refresh`.
If the `eks-refresh-cron` setting was previously set, the migration will happen automatically.
The shorter the refresh window, the less likely any race conditions will occur, but it does increase the likelihood of encountering request limits that may be in place for AWS APIs.
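For reference, the setting is a cluster-scoped Rancher `Setting` object on the local (Rancher management) cluster. A minimal sketch of what `kubectl edit setting eks-refresh` exposes is shown below; the value is illustrative:
```yaml
apiVersion: management.cattle.io/v3
kind: Setting
metadata:
  name: eks-refresh
# Interval, in seconds, between syncs of EKS cluster state (illustrative value).
value: "300"
```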

View File

@@ -0,0 +1,327 @@
---
title: GKE Cluster Configuration Reference
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration"/>
</head>
## Cluster Location
| Value | Description |
|--------|--------------|
| Location Type | Zonal or Regional. With GKE, you can create a cluster tailored to the availability requirements of your workload and your budget. By default, a cluster's nodes run in a single compute zone. When multiple zones are selected, the cluster's nodes will span multiple compute zones, while the controlplane is located in a single zone. Regional clusters increase the availability of the controlplane as well. For help choosing the type of cluster availability, refer to [these docs.](https://cloud.google.com/kubernetes-engine/docs/best-practices/scalability#choosing_a_regional_or_zonal_control_plane) |
| Zone | Each region in Compute engine contains a number of zones. For more information about available regions and zones, refer to [these docs.](https://cloud.google.com/compute/docs/regions-zones#available) |
| Additional Zones | For zonal clusters, you can select additional zones to create a [multi-zone cluster.](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#multi-zonal_clusters) |
| Region | For [regional clusters,](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#regional_clusters) you can select a region. For more information about available regions and zones, refer to [this section](https://cloud.google.com/compute/docs/regions-zones#available). The first part of each zone name is the name of the region. |
## Cluster Options
### Kubernetes Version
_Mutable: yes_
For more information on GKE Kubernetes versions, refer to [these docs.](https://cloud.google.com/kubernetes-engine/versioning)
### Container Address Range
_Mutable: no_
The IP address range for pods in the cluster. Must be a valid CIDR range, e.g. 10.42.0.0/16. If not specified, a random range is automatically chosen from 10.0.0.0/8 and will exclude ranges already allocated to VMs, other clusters, or routes. Automatically chosen ranges may conflict with reserved IP addresses, dynamic routes, or routes within VPCs peering with the cluster.
### Network
_Mutable: no_
The Compute Engine Network that the cluster connects to. Routes and firewalls will be created using this network. If using [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc), the VPC networks that are shared to your project will be available to select in this field. For more information, refer to [this page](https://cloud.google.com/vpc/docs/vpc#vpc_networks_and_subnets).
### Node Subnet / Subnet
_Mutable: no_
The Compute Engine subnetwork that the cluster connects to. This subnetwork must belong to the network specified in the **Network** field. Select an existing subnetwork, or select "Auto Create Subnetwork" to have one automatically created. If not using an existing network, **Subnetwork Name** is required to generate one. If using [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc), the VPC subnets that are shared to your project will appear here. If using a Shared VPC network, you cannot select "Auto Create Subnetwork". For more information, refer to [this page.](https://cloud.google.com/vpc/docs/vpc#vpc_networks_and_subnets)
### Subnetwork Name
_Mutable: no_
Automatically create a subnetwork with the provided name. Required if "Auto Create Subnetwork" is selected for **Node Subnet** or **Subnet**. For more information on subnetworks, refer to [this page.](https://cloud.google.com/vpc/docs/vpc#vpc_networks_and_subnets)
### Ip Aliases
_Mutable: no_
Enable [alias IPs](https://cloud.google.com/vpc/docs/alias-ip). This enables VPC-native traffic routing. Required if using [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc).
### Network Policy
_Mutable: yes_
Enable network policy enforcement on the cluster. A network policy defines the level of communication that can occur between pods and services in the cluster. For more information, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy)
### Project Network Isolation
_Mutable: yes_
Choose whether to enable or disable inter-project communication.
#### Imported Clusters
For imported clusters, Project Network Isolation (PNI) requires Kubernetes Network Policy to be enabled on the cluster beforehand.
For clusters created by Rancher, Rancher enables Kubernetes Network Policy automatically.
1. In GKE, enable Network Policy at the cluster level. Refer to the [official GKE guide](https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy) for instructions.
1. After enabling Network Policy, import the cluster into Rancher and enable PNI for project-level isolation.
### Node Ipv4 CIDR Block
_Mutable: no_
The IP address range of the instance IPs in this cluster. Can be set if "Auto Create Subnetwork" is selected for **Node Subnet** or **Subnet**. Must be a valid CIDR range, e.g. 10.96.0.0/14. For more information on how to determine the IP address range, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing)
### Cluster Secondary Range Name
_Mutable: no_
The name of an existing secondary range for Pod IP addresses. If selected, **Cluster Pod Address Range** will automatically be populated. Required if using a Shared VPC network.
### Cluster Pod Address Range
_Mutable: no_
The IP address range assigned to pods in the cluster. Must be a valid CIDR range, e.g. 10.96.0.0/11. If not provided, will be created automatically. Must be provided if using a Shared VPC network. For more information on how to determine the IP address range for your pods, refer to [this section.](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing_secondary_range_pods)
### Services Secondary Range Name
_Mutable: no_
The name of an existing secondary range for service IP addresses. If selected, **Service Address Range** will be automatically populated. Required if using a Shared VPC network.
### Service Address Range
_Mutable: no_
The address range assigned to the services in the cluster. Must be a valid CIDR range, e.g. 10.94.0.0/18. If not provided, will be created automatically. Must be provided if using a Shared VPC network. For more information on how to determine the IP address range for your services, refer to [this section.](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing_secondary_range_svcs)
### Private Cluster
_Mutable: no_
:::caution
Private clusters require additional planning and configuration outside of Rancher. Refer to the [private cluster guide](gke-private-clusters.md).
:::
Assign nodes only internal IP addresses. Private cluster nodes cannot access the public internet unless additional networking steps are taken in GCP.
### Enable Private Endpoint
:::caution
Private clusters require additional planning and configuration outside of Rancher. Refer to the [private cluster guide](gke-private-clusters.md).
:::
_Mutable: no_
Locks down external access to the control plane endpoint. Only available if **Private Cluster** is also selected. If selected, and if Rancher does not have direct access to the Virtual Private Cloud network the cluster is running in, Rancher will provide a registration command to run on the cluster to enable Rancher to connect to it.
### Master IPV4 CIDR Block
_Mutable: no_
The IP range for the control plane VPC.
### Master Authorized Network
_Mutable: yes_
Enable control plane authorized networks to block untrusted non-GCP source IPs from accessing the Kubernetes master through HTTPS. If selected, additional authorized networks may be added. If the cluster is created with a public endpoint, this option is useful for locking down access to the public endpoint to only certain networks, such as the network where your Rancher service is running. If the cluster only has a private endpoint, this setting is required.
## Additional Options
### Cluster Addons
Additional Kubernetes cluster components. For more information, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters#Cluster.AddonsConfig)
#### Horizontal Pod Autoscaling
_Mutable: yes_
The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or decreasing the number of Pods in response to the workload's CPU or memory consumption, or in response to custom metrics reported from within Kubernetes or external metrics from sources outside of your cluster. For more information, see [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler)
#### HTTP (L7) Load Balancing
_Mutable: yes_
HTTP (L7) Load Balancing distributes HTTP and HTTPS traffic to backends hosted on GKE. For more information, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer)
#### Network Policy Config (master only)
_Mutable: yes_
Configuration for NetworkPolicy. This only tracks whether the addon is enabled on the master; it does not track whether network policy is enabled for the nodes.
### Cluster Features (Alpha Features)
_Mutable: no_
Turns on all Kubernetes alpha API groups and features for the cluster. When enabled, the cluster cannot be upgraded and will be deleted automatically after 30 days. Alpha clusters are not recommended for production use as they are not covered by the GKE SLA. For more information, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/alpha-clusters)
### Logging Service
_Mutable: yes_
The logging service the cluster uses to write logs. Use either [Cloud Logging](https://cloud.google.com/logging) or no logging service, in which case no logs are exported from the cluster.
### Monitoring Service
_Mutable: yes_
The monitoring service the cluster uses to write metrics. Use either [Cloud Monitoring](https://cloud.google.com/monitoring) or no monitoring service, in which case no metrics are exported from the cluster.
### Maintenance Window
_Mutable: yes_
Set the start time for a 4 hour maintenance window. The time is specified in the UTC time zone using the HH:MM format. For more information, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/maintenance-windows-and-exclusions)
## Node Pools
In this section, enter details describing the configuration of each node in the node pool.
### Kubernetes Version
_Mutable: yes_
The Kubernetes version for each node in the node pool. For more information on GKE Kubernetes versions, refer to [these docs.](https://cloud.google.com/kubernetes-engine/versioning)
### Image Type
_Mutable: yes_
The node operating system image. For more information for the node image options that GKE offers for each OS, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#available_node_images)
:::note
The default option is "Container-Optimized OS with Docker". The read-only filesystem on GCP's Container-Optimized OS is not compatible with the [legacy logging](https://github.com/rancher/rancher-docs/tree/main/archived_docs/en/version-2.0-2.4/explanations/integrations-in-rancher/cluster-logging/cluster-logging.md) implementation in Rancher. If you need to use the legacy logging feature, select "Ubuntu with Docker" or "Ubuntu with Containerd". The [current logging feature](../../../../integrations-in-rancher/logging/logging.md) is compatible with the Container-Optimized OS image.
:::
:::note
If selecting "Windows Long Term Service Channel" or "Windows Semi-Annual Channel" for the node pool image type, you must also add at least one Container-Optimized OS or Ubuntu node pool.
:::
### Machine Type
_Mutable: no_
The virtualized hardware resources available to node instances. For more information on Google Cloud machine types, refer to [this page.](https://cloud.google.com/compute/docs/machine-types#machine_types)
### Root Disk Type
_Mutable: no_
Standard persistent disks are backed by standard hard disk drives (HDD), while SSD persistent disks are backed by solid state drives (SSD). For more information, refer to [this section.](https://cloud.google.com/compute/docs/disks)
### Local SSD Disks
_Mutable: no_
Configure each node's local SSD disk storage in GB. Local SSDs are physically attached to the server that hosts your VM instance. Local SSDs have higher throughput and lower latency than standard persistent disks or SSD persistent disks. The data that you store on a local SSD persists only until the instance is stopped or deleted. For more information, see [this section.](https://cloud.google.com/compute/docs/disks#localssds)
### Preemptible nodes (beta)
_Mutable: no_
Preemptible nodes, also called preemptible VMs, are Compute Engine VM instances that last a maximum of 24 hours in general, and provide no availability guarantees. For more information, see [this page.](https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms)
### Taints
_Mutable: no_
When you apply a taint to a node, only Pods that tolerate the taint are allowed to run on the node. In a GKE cluster, you can apply a taint to a node pool, which applies the taint to all nodes in the pool.
### Node Labels
_Mutable: no_
You can apply labels to the node pool, which applies the labels to all nodes in the pool.
Invalid labels can prevent upgrades or can prevent Rancher from starting. For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)
### Network Tags
_Mutable: no_
You can add network tags to the node pool to create firewall rules and routes between subnets. Tags apply to all nodes in the pool.
For details on tag syntax and requirements, see the [Google Cloud documentation](https://cloud.google.com/vpc/docs/add-remove-network-tags).
## Group Details
In this section, enter details describing the node pool.
### Name
_Mutable: no_
Enter a name for the node pool.
### Initial Node Count
_Mutable: yes_
Integer for the starting number of nodes in the node pool.
### Max Pod Per Node
_Mutable: no_
GKE has a hard limit of 110 Pods per node. For more information on the Kubernetes limits, see [this section.](https://cloud.google.com/kubernetes-engine/docs/best-practices/scalability#dimension_limits)
### Autoscaling
_Mutable: yes_
Node pool autoscaling dynamically creates or deletes nodes based on the demands of your workload. For more information, see [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler)
### Auto Repair
_Mutable: yes_
GKE's node auto-repair feature helps you keep the nodes in your cluster in a healthy, running state. When enabled, GKE makes periodic checks on the health state of each node in your cluster. If a node fails consecutive health checks over an extended time period, GKE initiates a repair process for that node. For more information, see the section on [auto-repairing nodes.](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair)
### Auto Upgrade
_Mutable: yes_
When enabled, the auto-upgrade feature keeps the nodes in your cluster up-to-date with the cluster control plane (master) version when your control plane is [updated on your behalf.](https://cloud.google.com/kubernetes-engine/upgrades#automatic_cp_upgrades) For more information about auto-upgrading nodes, see [this page.](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades)
### Access Scopes
_Mutable: no_
Access scopes are the legacy method of specifying permissions for your nodes.
- **Allow default access:** The default access for new clusters is the [Compute Engine default service account.](https://cloud.google.com/compute/docs/access/service-accounts?hl=en_US#default_service_account)
- **Allow full access to all Cloud APIs:** Generally, you can just set the cloud-platform access scope to allow full access to all Cloud APIs, then grant the service account only relevant IAM roles. The combination of access scopes granted to the virtual machine instance and the IAM roles granted to the service account determines the amount of access the service account has for that instance.
- **Set access for each API:** Alternatively, you can choose to set specific scopes that permit access to the particular API methods that the service will call.
For more information, see the [section about enabling service accounts for a VM.](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances)
### Configuring the Refresh Interval
The refresh interval can be configured through the setting "gke-refresh", which is an integer representing seconds.
The default value is 300 seconds.
The syncing interval can be changed by running `kubectl edit setting gke-refresh`.
The shorter the refresh window, the less likely any race conditions will occur, but it does increase the likelihood of encountering request limits that may be in place for GCP APIs.
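Like other Rancher settings, `gke-refresh` is a cluster-scoped `Setting` object on the local (Rancher management) cluster. A minimal sketch of what `kubectl edit setting gke-refresh` exposes is shown below; the value is illustrative:
```yaml
apiVersion: management.cattle.io/v3
kind: Setting
metadata:
  name: gke-refresh
# Interval, in seconds, between syncs of GKE cluster state (illustrative value).
value: "300"
```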

View File

@@ -0,0 +1,54 @@
---
title: Private Clusters
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters"/>
</head>
In GKE, [private clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept) are clusters whose nodes are isolated from inbound and outbound traffic by assigning them internal IP addresses only. Private clusters in GKE have the option of exposing the control plane endpoint as a publicly accessible address or as a private address. This is different from other Kubernetes providers, which may refer to clusters with private control plane endpoints as "private clusters" but still allow traffic to and from nodes. You may want to create a cluster with private nodes, with or without a public control plane endpoint, depending on your organization's networking and security requirements. A GKE cluster provisioned from Rancher can use isolated nodes by selecting "Private Cluster" in the Cluster Options (under "Show advanced options"). The control plane endpoint can optionally be made private by selecting "Enable Private Endpoint".
## Private Nodes
Because the nodes in a private cluster only have internal IP addresses, they will not be able to install the cluster agent and Rancher will not be able to fully manage the cluster. This can be overcome in a few ways.
### Cloud NAT
:::caution
Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing).
:::
If restricting outgoing internet access is not a concern for your organization, use Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service to allow nodes in the private network to access the internet, enabling them to download the required images from Docker Hub and contact the Rancher management server. This is the simplest solution.
### Private Registry
:::caution
This scenario is not officially supported, but is described for cases in which using the Cloud NAT service is not sufficient.
:::
If restricting both incoming and outgoing traffic to nodes is a requirement, follow the air-gapped installation instructions to set up a private container image [registry](../../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) on the VPC where the cluster is going to be, allowing the cluster nodes to access and download the images they need to run the cluster agent. If the control plane endpoint is also private, Rancher will need [direct access](#direct-access) to it.
## Private Control Plane Endpoint
If the cluster has a public endpoint exposed, Rancher will be able to reach the cluster, and no additional steps need to be taken. However, if the cluster has no public endpoint, then considerations must be made to ensure Rancher can access the cluster.
### Cloud NAT
:::caution
Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing).
:::
As above, if restricting outgoing internet access to the nodes is not a concern, then Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service can be used to allow the nodes to access the internet. While the cluster is provisioning, Rancher will provide a registration command to run on the cluster. Download the [kubeconfig](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl) for the new cluster and run the provided kubectl command on the cluster. Gaining access
to the cluster in order to run this command can be done by creating a temporary node or using an existing node in the VPC, or by logging on to or creating an SSH tunnel through one of the cluster nodes.
### Direct Access
If the Rancher server is run on the same VPC as the cluster's control plane, it will have direct access to the control plane's private endpoint. The cluster nodes will need to have access to a [private registry](#private-registry) to download images as described above.
You can also use services from Google such as [Cloud VPN](https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview) or [Cloud Interconnect VLAN](https://cloud.google.com/network-connectivity/docs/interconnect) to facilitate connectivity between your organization's network and your Google VPC.

View File

@@ -0,0 +1,551 @@
---
title: K3s Cluster Configuration Reference
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration"/>
</head>
This section covers the configuration options that are available in Rancher for a new or existing K3s Kubernetes cluster.
## Overview
You can configure the Kubernetes options one of two ways:
- [Rancher UI](#configuration-options-in-the-rancher-ui): Use the Rancher UI to select options that are commonly customized when setting up a Kubernetes cluster.
- [Cluster Config File](#cluster-config-file-reference): Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create a K3s config file. Using a config file lets you set any of the [options](https://rancher.com/docs/k3s/latest/en/installation/install-options/) available during a K3s installation.
## Editing Clusters in the Rancher UI
The Rancher UI provides two ways to edit a cluster:
1. With a form.
1. With YAML.
### Editing Clusters with a Form
The form covers the most frequently needed options for clusters.
To edit your cluster,
1. Click **☰ > Cluster Management**.
1. Go to the cluster you want to configure and click **⋮ > Edit Config**.
### Editing Clusters in YAML
For a complete reference of configurable options for K3s clusters in YAML, see the [K3s documentation](https://docs.k3s.io/installation/configuration).
To edit your cluster with YAML:
1. Click **☰ > Cluster Management**.
1. Go to the cluster you want to configure and click **⋮ > Edit as YAML**.
1. Edit the cluster options under the `rkeConfig` directive.
## Configuration Options in the Rancher UI
### Machine Pool Configuration
This subsection covers generic machine pool configurations. For specific infrastructure provider configurations, refer to the following:
- [Azure](../downstream-cluster-configuration/machine-configuration/azure.md)
- [DigitalOcean](../downstream-cluster-configuration/machine-configuration/digitalocean.md)
- [Amazon EC2](../downstream-cluster-configuration/machine-configuration/amazon-ec2.md)
- [Google GCE](../downstream-cluster-configuration/machine-configuration/google-gce.md)
##### Pool Name
The name of the machine pool.
##### Machine Count
The number of machines in the pool.
##### Roles
Option to assign etcd, control plane, and worker roles to nodes.
#### Advanced
##### Auto Replace
The amount of time nodes can be unreachable before they are automatically deleted and replaced.
##### Drain Before Delete
Enables draining nodes by evicting all pods before the node is deleted.
##### Kubernetes Node Labels
Add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) to nodes to help with organization and object selection.
For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)
##### Taints
Add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to nodes, to prevent pods from being scheduled to or executed on the nodes, unless the pods have matching tolerations.
### Cluster Configuration
#### Basics
##### Kubernetes Version
The version of Kubernetes installed on your cluster nodes.
For details on upgrading or rolling back Kubernetes, refer to [this guide](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
##### Pod Security Admission Configuration Template
The default [pod security admission configuration template](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) for the cluster.
##### Encrypt Secrets
Option to enable or disable secrets encryption. When enabled, secrets will be encrypted using an AES-CBC key. If disabled, any previously encrypted secrets will not be readable until encryption is enabled again. Refer to the [K3s documentation](https://rancher.com/docs/k3s/latest/en/advanced/#secrets-encryption-config-experimental) for details.
##### Project Network Isolation
If your network provider allows project network isolation, you can choose whether to enable or disable inter-project communication.
##### SELinux
Option to enable or disable [SELinux](https://rancher.com/docs/k3s/latest/en/advanced/#selinux-support) support.
##### CoreDNS
By default, [CoreDNS](https://coredns.io/) is installed as the DNS provider. If CoreDNS is not installed, you must install an alternate DNS provider yourself. Refer to the [K3s documentation](https://rancher.com/docs/k3s/latest/en/networking/#coredns) for details.
##### Klipper Service LB
Option to enable or disable the [Klipper](https://github.com/rancher/klipper-lb) service load balancer. Refer to the [K3s documentation](https://rancher.com/docs/k3s/latest/en/networking/#service-load-balancer) for details.
##### Traefik Ingress
Option to enable or disable the [Traefik](https://traefik.io/) HTTP reverse proxy and load balancer. For more details and configuration options, see the [K3s documentation](https://rancher.com/docs/k3s/latest/en/networking/#traefik-ingress-controller).
##### Local Storage
Option to enable or disable [local storage](https://rancher.com/docs/k3s/latest/en/storage/) on the node(s).
##### Metrics Server
Option to enable or disable the [metrics server](https://github.com/kubernetes-incubator/metrics-server). If enabled, ensure port 10250 is opened for inbound TCP traffic.
#### Add-On Config
Additional Kubernetes manifests, managed as an [add-on](https://kubernetes.io/docs/concepts/cluster-administration/addons/), to apply to the cluster on startup. Refer to the [K3s documentation](https://rancher.com/docs/k3s/latest/en/helm/#automatically-deploying-manifests-and-helm-charts) for details.
#### Agent Environment Vars
Option to set environment variables for [K3s agents](https://rancher.com/docs/k3s/latest/en/architecture/). The environment variables can be set using key-value pairs. Refer to the [K3s documentation](https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/) for more details.
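When editing the cluster as YAML, these key-value pairs are expressed as a list of environment variables in the cluster spec. The following is a minimal sketch that assumes the `agentEnvVars` field of the provisioning cluster spec; the variable name and value are illustrative only:
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
  agentEnvVars:
    # Illustrative proxy variable for the agents; replace with your own key-value pairs.
    - name: HTTPS_PROXY
      value: http://proxy.example.com:3128
```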
#### etcd
##### Automatic Snapshots
Option to enable or disable recurring etcd snapshots. If enabled, users have the option to configure the frequency of snapshots. For details, refer to the [K3s documentation](https://docs.k3s.io/cli/etcd-snapshot#creating-snapshots).
##### Metrics
Option to choose whether to expose etcd metrics to the public or only within the cluster.
#### Networking
##### Cluster CIDR
IPv4/IPv6 network CIDRs to use for pod IPs (default: `10.42.0.0/16`).
Example values:
- IPv4-only: `10.42.0.0/16`
- IPv6-only: `2001:cafe:42::/56`
- Dual-stack: `10.42.0.0/16,2001:cafe:42::/56`
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [K3s documentation: Dual-stack (IPv4 + IPv6) Networking](https://docs.k3s.io/networking/basic-network-options#dual-stack-ipv4--ipv6-networking)
- [K3s documentation: Single-stack IPv6 Networking](https://docs.k3s.io/networking/basic-network-options#single-stack-ipv6-networking)
:::caution
You must configure the Cluster CIDR when you first create the cluster. You cannot change the Cluster CIDR on an existing cluster after it starts.
:::
##### Service CIDR
IPv4/IPv6 network CIDRs to use for service IPs (default: `10.43.0.0/16`).
Example values:
- IPv4-only: `10.43.0.0/16`
- IPv6-only: `2001:cafe:43::/112`
- Dual-stack: `10.43.0.0/16,2001:cafe:43::/112`
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [K3s documentation: Dual-stack (IPv4 + IPv6) Networking](https://docs.k3s.io/networking/basic-network-options#dual-stack-ipv4--ipv6-networking)
- [K3s documentation: Single-stack IPv6 Networking](https://docs.k3s.io/networking/basic-network-options#single-stack-ipv6-networking)
:::caution
You must configure the Service CIDR when you first create the cluster. You cannot change the Service CIDR on an existing cluster after it starts.
:::
##### Cluster DNS
IPv4 Cluster IP for coredns service. Should be in your service-cidr range (default: `10.43.0.10`).
##### Cluster Domain
Select the domain for the cluster. The default is `cluster.local`.
##### NodePort Service Port Range
Option to change the range of ports that can be used for [NodePort services](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport). The default is `30000-32767`.
##### Truncate Hostnames
Option to truncate hostnames to 15 characters or fewer. You can only set this field during the initial creation of the cluster. You can't enable or disable the 15-character limit after cluster creation.
This setting only affects machine-provisioned clusters. Since custom clusters set hostnames during their own node creation process, which occurs outside of Rancher, this field doesn't restrict custom cluster hostname length.
Truncating hostnames in a cluster improves compatibility with Windows-based systems. Although Kubernetes allows hostnames up to 63 characters in length, systems that use NetBIOS restrict hostnames to 15 characters or fewer.
##### TLS Alternate Names
Add hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert.
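When editing the cluster as YAML, these networking options are passed to K3s as server flags under `machineGlobalConfig`. The following is a minimal sketch that assumes the flag names documented for the K3s server; all CIDRs, the domain, and the extra SAN are illustrative values:
```yaml
rkeConfig:
  machineGlobalConfig:
    # Dual-stack pod and service CIDRs (illustrative).
    cluster-cidr: 10.42.0.0/16,2001:cafe:42::/56
    service-cidr: 10.43.0.0/16,2001:cafe:43::/112
    cluster-dns: 10.43.0.10
    cluster-domain: cluster.local
    service-node-port-range: 30000-32767
    # Extra Subject Alternative Name for the server TLS certificate (illustrative).
    tls-san:
      - k3s-api.example.com
```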
##### Authorized Cluster Endpoint
Authorized Cluster Endpoint can be used to directly access the Kubernetes API server, without requiring communication through Rancher.
For more detail on how an authorized cluster endpoint works and why it is used, refer to the [architecture section.](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint)
We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the [recommended architecture section.](../../rancher-manager-architecture/architecture-recommendations.md#architecture-for-an-authorized-cluster-endpoint-ace)
##### Stack Preference
Choose the networking stack for the cluster. This option affects:
- The address used for health and readiness probes of components such as Calico, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, and kubelet.
- The server URL in the `authentication-token-webhook-config-file` for the Authorized Cluster Endpoint.
- The `advertise-client-urls` setting for etcd during snapshot restoration.
Options are `ipv4`, `ipv6`, `dual`:
- When set to `ipv4`, the cluster uses `127.0.0.1`
- When set to `ipv6`, the cluster uses `[::1]`
- When set to `dual`, the cluster uses `localhost`
The stack preference must match the cluster's networking configuration:
- Set to `ipv4` for IPv4-only clusters
- Set to `ipv6` for IPv6-only clusters
- Set to `dual` for dual-stack clusters
:::caution
Ensuring the loopback address configuration is correct is critical for successful cluster provisioning.
For more information, refer to the [Node Requirements](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md) page.
:::
#### Registries
Select the image repository to pull Rancher images from. For more details and configuration options, see the [K3s documentation](https://rancher.com/docs/k3s/latest/en/installation/private-registry/).
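As a rough sketch of the corresponding YAML, registry mirrors and credentials live under `rkeConfig.registries`. The registry hostname and the secret name below are illustrative; credentials are referenced from a secret rather than stored inline:
```yaml
rkeConfig:
  registries:
    mirrors:
      docker.io:
        endpoint:
          # Illustrative mirror endpoint.
          - "https://registry.example.com:5000"
    configs:
      "registry.example.com:5000":
        # Illustrative secret containing the registry username and password.
        authConfigSecretName: registry-creds
```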
#### Upgrade Strategy
##### Control Plane Concurrency
Select how many nodes can be upgraded at the same time. Can be a fixed number or percentage.
##### Worker Concurrency
Select how many nodes can be upgraded at the same time. Can be a fixed number or percentage.
##### Drain Nodes (Control Plane)
Option to remove all pods from the node prior to upgrading.
##### Drain Nodes (Worker Nodes)
Option to remove all pods from the node prior to upgrading.
#### Advanced
Option to set kubelet options for different nodes. For available options, refer to the [Kubernetes documentation](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/).
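For example, kubelet arguments can be supplied as `kubelet-arg` entries, optionally scoped to a subset of nodes with a label selector. The following is a minimal sketch; the argument value and the worker-role label are illustrative:
```yaml
rkeConfig:
  machineSelectorConfig:
    - config:
        kubelet-arg:
          # Illustrative kubelet argument.
          - max-pods=200
      machineLabelSelector:
        matchLabels:
          rke.cattle.io/worker-role: 'true'
```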
## Cluster Config File Reference
Editing clusters in YAML allows you to set configurations that are already listed in [Configuration Options in the Rancher UI](#configuration-options-in-the-rancher-ui), as well as set Rancher-specific parameters.
<details>
<summary>
<b>Example Cluster Config File Snippet</b>
</summary>
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
cloudCredentialSecretName: cattle-global-data:cc-fllv6
clusterAgentDeploymentCustomization: {}
fleetAgentDeploymentCustomization: {}
kubernetesVersion: v1.26.7+k3s1
localClusterAuthEndpoint: {}
rkeConfig:
additionalManifest: ""
chartValues: {}
etcd:
snapshotRetention: 5
snapshotScheduleCron: 0 */5 * * *
machineGlobalConfig:
disable-apiserver: false
disable-cloud-controller: false
disable-controller-manager: false
disable-etcd: false
disable-kube-proxy: false
disable-network-policy: false
disable-scheduler: false
etcd-expose-metrics: false
kube-apiserver-arg:
- audit-policy-file=/etc/rancher/k3s/user-audit-policy.yaml
- audit-log-path=/etc/rancher/k3s/user-audit.logs
profile: null
secrets-encryption: false
machinePools:
- controlPlaneRole: true
etcdRole: true
machineConfigRef:
kind: Amazonec2Config
name: nc-test-pool1-pwl5h
name: pool1
quantity: 1
unhealthyNodeTimeout: 0s
workerRole: true
machineSelectorConfig:
- config:
docker: false
protect-kernel-defaults: false
selinux: false
machineSelectorFiles:
- fileSources:
- configMap:
name: ''
secret:
name: audit-policy
items:
- key: audit-policy
path: /etc/rancher/k3s/user-audit-policy.yaml
machineLabelSelector:
matchLabels:
rke.cattle.io/control-plane-role: 'true'
registries: {}
upgradeStrategy:
controlPlaneConcurrency: '1'
controlPlaneDrainOptions:
deleteEmptyDirData: true
disableEviction: false
enabled: false
force: false
gracePeriod: -1
ignoreDaemonSets: true
ignoreErrors: false
postDrainHooks: null
preDrainHooks: null
skipWaitForDeleteTimeoutSeconds: 0
timeout: 120
workerConcurrency: '1'
workerDrainOptions:
deleteEmptyDirData: true
disableEviction: false
enabled: false
force: false
gracePeriod: -1
ignoreDaemonSets: true
ignoreErrors: false
postDrainHooks: null
preDrainHooks: null
skipWaitForDeleteTimeoutSeconds: 0
timeout: 120
```
</details>
### additionalManifest
Specify additional manifests to deliver to the control plane nodes.
The value is a String, and will be placed at the path `/var/lib/rancher/k3s/server/manifests/rancher/addons.yaml` on target nodes.
Example:
```yaml
additionalManifest: |-
apiVersion: v1
kind: Namespace
metadata:
name: name-xxxx
```
:::note
If you want to customize system charts, you should use the `chartValues` field as described below.
Alternatives, such as using a HelmChartConfig to customize the system charts via `additionalManifest`, can cause unexpected behavior, due to having multiple HelmChartConfigs for the same chart.
:::
### chartValues
Specify the values for the system charts installed by K3s.
For more information about how K3s manages packaged components, refer to the [K3s documentation](https://docs.k3s.io/installation/packaged-components).
Example:
```yaml
chartValues:
chart-name:
key: value
```
### machineGlobalConfig
Specify K3s configurations. Any configuration change made here will apply to every node. The configuration options available in the [standalone version of k3s](https://docs.k3s.io/cli/server) can be applied here.
Example:
```yaml
machineGlobalConfig:
etcd-arg:
- key1=value1
- key2=value2
```
To make it easier to put files on nodes beforehand, Rancher expects the file content to be included directly in the configuration for the following options, while standalone K3s expects these values to be file paths:
- private-registry
- flannel-conf
Rancher delivers the files to the path `/var/lib/rancher/k3s/etc/config-files/<option>` in target nodes, and sets the proper options in the K3s server.
Example:
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
rkeConfig:
machineGlobalConfig:
private-registry: |
mirrors:
docker.io:
endpoint:
- "http://mycustomreg.com:5000"
configs:
"mycustomreg:5000":
auth:
username: xxxxxx # this is the registry username
password: xxxxxx # this is the registry password
```
### machineSelectorConfig
`machineSelectorConfig` is the same as [`machineGlobalConfig`](#machineglobalconfig) except that a [label](#kubernetes-node-labels) selector can be specified with the configuration. The configuration will only be applied to nodes that match the provided label selector.
Multiple `config` entries are allowed, each specifying their own `machineLabelSelector`. A user can specify `matchExpressions`, `matchLabels`, both, or neither. Omitting the `machineLabelSelector` section of this field has the same effect as putting the config in the `machineGlobalConfig` section.
Example:
```yaml
machineSelectorConfig:
- config:
config-key: config-value
machineLabelSelector:
matchExpressions:
- key: example-key
operator: string # Valid operators are In, NotIn, Exists and DoesNotExist.
values:
- example-value1
- example-value2
matchLabels:
key1: value1
key2: value2
```
### machineSelectorFiles
:::note
This feature is available in Rancher v2.7.2 and later.
:::
Deliver files to nodes, so that the files can be in place before initiating K3s server or agent processes.
The content of the file is retrieved from either a secret or a configmap. The target nodes are filtered by the `machineLabelSelector`.
Example:
```yaml
machineSelectorFiles:
- fileSources:
- secret:
items:
- key: example-key
path: path-to-put-the-file-on-nodes
permissions: 644 (optional)
hash: base64-encoded-hash-of-the-content (optional)
name: example-secret-name
machineLabelSelector:
matchExpressions:
- key: example-key
operator: string # Valid operators are In, NotIn, Exists and DoesNotExist.
values:
- example-value1
- example-value2
matchLabels:
key1: value1
key2: value2
- fileSources:
- configMap:
items:
- key: example-key
path: path-to-put-the-file-on-nodes
permissions: 644 (optional)
hash: base64-encoded-hash-of-the-content (optional)
name: example-configmap-name
machineLabelSelector:
matchExpressions:
- key: example-key
operator: string # Valid operators are In, NotIn, Exists and DoesNotExist.
values:
- example-value1
- example-value2
matchLabels:
key1: value1
key2: value2
```
The secret or configmap must meet the following requirements:
1. It must be in the `fleet-default` namespace where the Cluster object exists.
2. It must have the annotation `rke.cattle.io/object-authorized-for-clusters: cluster-name1,cluster-name2`, which permits the target clusters to use it.
:::tip
Rancher Dashboard provides an easy-to-use form for creating the secret or configmap.
:::
Example:
```yaml
apiVersion: v1
data:
audit-policy: >-
IyBMb2cgYWxsIHJlcXVlc3RzIGF0IHRoZSBNZXRhZGF0YSBsZXZlbC4KYXBpVmVyc2lvbjogYXVkaXQuazhzLmlvL3YxCmtpbmQ6IFBvbGljeQpydWxlczoKLSBsZXZlbDogTWV0YWRhdGE=
kind: Secret
metadata:
annotations:
rke.cattle.io/object-authorized-for-clusters: cluster1
name: name1
namespace: fleet-default
```

View File

@@ -0,0 +1,15 @@
---
title: Rancher Server Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration"/>
</head>
- [RKE2 Cluster Configuration](rke2-cluster-configuration.md)
- [K3s Cluster Configuration](k3s-cluster-configuration.md)
- [EKS Cluster Configuration](eks-cluster-configuration.md)
- [AKS Cluster Configuration](aks-cluster-configuration.md)
- [GKE Cluster Configuration](gke-cluster-configuration/gke-cluster-configuration.md)
- [Use Existing Nodes](use-existing-nodes/use-existing-nodes.md)
- [Sync Clusters](sync-clusters.md)

View File

@@ -0,0 +1,365 @@
---
title: RKE Cluster Configuration Reference
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration"/>
</head>
<EOLRKE1Warning />
When Rancher installs Kubernetes, it uses [RKE](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) or [RKE2](https://docs.rke2.io/) as the Kubernetes distribution.
This section covers the configuration options that are available in Rancher for a new or existing RKE Kubernetes cluster.
## Overview
You can configure the Kubernetes options one of two ways:
- [Rancher UI](#configuration-options-in-the-rancher-ui): Use the Rancher UI to select options that are commonly customized when setting up a Kubernetes cluster.
- [Cluster Config File](#rke-cluster-config-file-reference): Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the options available in an RKE installation, except for system_images configuration, by specifying them in YAML.
The RKE cluster config options are nested under the `rancher_kubernetes_engine_config` directive. For more information, see the section about the [cluster config file.](#rke-cluster-config-file-reference)
In [clusters launched by RKE](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md), you can edit any of the remaining options that follow.
For an example of RKE config file syntax, see the [RKE documentation](https://rancher.com/docs/rke/latest/en/example-yamls/).
The forms in the Rancher UI don't include all advanced options for configuring RKE. For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/)
## Editing Clusters with a Form in the Rancher UI
To edit your cluster,
1. In the upper left corner, click **☰ > Cluster Management**.
1. Go to the cluster you want to configure and click **⋮ > Edit Config**.
## Editing Clusters with YAML
Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the options available in an RKE installation, except for system_images configuration, by specifying them in YAML.
RKE clusters (also called RKE1 clusters) are edited differently than RKE2 and K3s clusters.
To edit an RKE config file directly from the Rancher UI,
1. Click **☰ > Cluster Management**.
1. Go to the RKE cluster you want to configure and click **⋮ > Edit Config**. This takes you to the RKE configuration form. Note: Because cluster provisioning changed in Rancher 2.6, the **⋮ > Edit as YAML** option can be used for configuring RKE2 clusters, but it can't be used for editing RKE1 configuration.
1. In the configuration form, scroll down and click **Edit as YAML**.
1. Edit the RKE options under the `rancher_kubernetes_engine_config` directive.
## Configuration Options in the Rancher UI
:::tip
Some advanced configuration options are not exposed in the Rancher UI forms, but they can be enabled by editing the RKE cluster configuration file in YAML. For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/)
:::
### Kubernetes Version
The version of Kubernetes installed on your cluster nodes. Rancher packages its own version of Kubernetes based on [hyperkube](https://github.com/rancher/hyperkube).
For more detail, see [Upgrading Kubernetes](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
### Network Provider
The [Network Provider](https://kubernetes.io/docs/concepts/cluster-administration/networking/) that the cluster uses. For more details on the different networking providers, please view our [Networking FAQ](../../../faq/container-network-interface-providers.md).
:::caution
After you launch the cluster, you cannot change your network provider. Therefore, choose which network provider you want to use carefully, as Kubernetes doesn't allow switching between network providers. Once a cluster is created with a network provider, changing network providers would require you to tear down the entire cluster and all its applications.
:::
Out of the box, Rancher is compatible with the following network providers:
- [Canal](https://github.com/projectcalico/canal)
- [Flannel](https://github.com/coreos/flannel#flannel)
- [Calico](https://docs.projectcalico.org/v3.11/introduction/)
- [Weave](https://github.com/weaveworks/weave)
<DeprecationWeave />
:::note Notes on Weave:
When Weave is selected as network provider, Rancher will automatically enable encryption by generating a random password. If you want to specify the password manually, please see how to configure your cluster using a [Config File](#rke-cluster-config-file-reference) and the [Weave Network Plug-in Options](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/#weave-network-plug-in-options).
:::
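A minimal sketch of such a config file snippet, assuming the Weave option names from the RKE documentation and an illustrative password, is shown below:
```yaml
rancher_kubernetes_engine_config:
  network:
    plugin: weave
    weave_network_provider:
      # Illustrative encryption password; choose your own.
      password: "examplepassword"
```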
### Project Network Isolation
If your network provider allows project network isolation, you can choose whether to enable or disable inter-project communication.
Project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin.
### Kubernetes Cloud Providers
You can configure a [Kubernetes cloud provider](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md). If you want to use dynamically provisioned [volumes and storage](../../../how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md) in Kubernetes, typically you must select the specific cloud provider in order to use it. For example, if you want to use Amazon EBS, you would need to select the `aws` cloud provider.
:::note
If the cloud provider you want to use is not listed as an option, you will need to use the [config file option](#rke-cluster-config-file-reference) to configure the cloud provider. Please reference the [RKE cloud provider documentation](https://rancher.com/docs/rke/latest/en/config-options/cloud-providers/) on how to configure the cloud provider.
:::
### Private Registries
The cluster-level private registry configuration is only used for provisioning clusters.
There are two main ways to set up private registries in Rancher: by setting up the [global default registry](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry.md) through the **Settings** tab in the global view, and by setting up a private registry in the advanced options in the cluster-level settings. The global default registry is intended to be used for air-gapped setups, for registries that do not require credentials. The cluster-level private registry is intended to be used in all setups in which the private registry requires credentials.
If your private registry requires credentials, you need to pass the credentials to Rancher by editing the cluster options for each cluster that needs to pull images from the registry.
The private registry configuration option tells Rancher where to pull the [system images](https://rancher.com/docs/rke/latest/en/config-options/system-images/) or [addon images](https://rancher.com/docs/rke/latest/en/config-options/add-ons/) that will be used in your cluster.
- **System images** are components needed to maintain the Kubernetes cluster.
- **Add-ons** are used to deploy several cluster components, including network plug-ins, the ingress controller, the DNS provider, or the metrics server.
For more information on setting up a private registry for components applied during the provisioning of the cluster, see the [RKE documentation on private registries](https://rancher.com/docs/rke/latest/en/config-options/private-registries/).
Rancher v2.6 introduced the ability to configure [ECR registries for RKE clusters](https://rancher.com/docs/rke/latest/en/config-options/private-registries/#amazon-elastic-container-registry-ecr-private-registry-setup).
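As a rough sketch, a cluster-level private registry with credentials might be declared in the cluster config file as shown below. The registry URL and credentials are placeholders; see the RKE private registries documentation linked above for the authoritative options:

```yaml
rancher_kubernetes_engine_config:
  private_registries:
    # Placeholder registry and credentials; replace with your own values.
    - url: registry.example.com
      user: registry-user
      password: registry-password
      is_default: true
```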
### Authorized Cluster Endpoint
Authorized Cluster Endpoint (ACE) can be used to directly access the Kubernetes API server, without requiring communication through Rancher.
:::note
ACE is available on RKE, RKE2, and K3s clusters that are provisioned or registered with Rancher. It's not available on clusters in a hosted Kubernetes provider, such as Amazon's EKS.
:::
ACE must be set up [manually](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md#authorized-cluster-endpoint-support-for-rke2-and-k3s-clusters) on RKE2 and K3s clusters. In RKE, ACE is enabled by default in Rancher-launched Kubernetes clusters, using the IP of the node with the `controlplane` role and the default Kubernetes self-signed certificates.
For more detail on how an authorized cluster endpoint works and why it is used, refer to the [architecture section.](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint)
We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the [recommended architecture section.](../../rancher-manager-architecture/architecture-recommendations.md#architecture-for-an-authorized-cluster-endpoint-ace)
### Node Pools
For information on using the Rancher UI to set up node pools in an RKE cluster, refer to [this page.](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md)
### NGINX Ingress
If you want to publish your applications in a high-availability configuration, and you're hosting your nodes with a cloud provider that doesn't have a native load-balancing feature, enable this option to use NGINX Ingress within the cluster.
### Metrics Server Monitoring
Option to enable or disable [Metrics Server](https://rancher.com/docs/rke/latest/en/config-options/add-ons/metrics-server/).
Each cloud provider capable of launching a cluster using RKE can collect metrics and monitoring data for your cluster nodes. Enable this option to view your node metrics from your cloud provider's portal.

### Pod Security Policy Support

Option to enable or disable Pod Security Policy support for the cluster. You must have an existing Pod Security Policy configured before you can use this option.
### Docker Version on Nodes
Configures whether nodes are allowed to run versions of Docker that Rancher doesn't officially support.
If you choose to require a supported Docker version, Rancher will stop pods from running on nodes that don't have a supported Docker version installed.
For details on which Docker versions were tested with each Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)
### Docker Root Directory
If the nodes you are adding to the cluster have Docker configured with a non-default Docker Root Directory (default is `/var/lib/docker`), specify the correct Docker Root Directory in this option.
### Default Pod Security Policy
If you enable **Pod Security Policy Support**, use this drop-down to choose the pod security policy that's applied to the cluster.
### Node Port Range
Option to change the range of ports that can be used for [NodePort services](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport). Default is `30000-32767`.
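In the cluster config file, the same range can be set on the `kube_api` service, mirroring the example config shown later on this page. The range below is simply the default:

```yaml
rancher_kubernetes_engine_config:
  services:
    kube_api:
      # Default NodePort range; adjust as needed.
      service_node_port_range: 30000-32767
```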
### Recurring etcd Snapshots
Option to enable or disable [recurring etcd snapshots](https://rancher.com/docs/rke/latest/en/etcd-snapshots/#etcd-recurring-snapshots).
### Agent Environment Variables
Option to set environment variables for [Rancher agents](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md). The environment variables can be set using key-value pairs. If the Rancher agent requires a proxy to communicate with the Rancher server, you can set the `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` environment variables here.
### Updating ingress-nginx
Clusters that were created before Kubernetes 1.16 will have an `ingress-nginx` `updateStrategy` of `OnDelete`. Clusters that were created with Kubernetes 1.16 or newer will have `RollingUpdate`.
If the `updateStrategy` of `ingress-nginx` is `OnDelete`, you will need to delete the `ingress-nginx` pods manually so that they are recreated with the correct version for your deployment.
### Cluster Agent Configuration and Fleet Agent Configuration
You can configure the scheduling fields and resource limits for the Cluster Agent and the cluster's Fleet Agent. You can use these fields to customize tolerations, affinity rules, and resource requirements. Additional tolerations are appended to a list of default tolerations and control plane node taints. If you define custom affinity rules, they override the global default affinity setting. Defining resource requirements sets requests or limits where there previously were none.
:::note
With this option, it's possible to override or remove rules that are required for the functioning of the cluster. We strongly recommend against removing or overriding these and any other affinity rules, as this may cause unwanted side effects:
- `affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution` for `cattle-cluster-agent`
- `cluster-agent-default-affinity` for `cattle-cluster-agent`
- `fleet-agent-default-affinity` for `fleet-agent`
:::
If you downgrade Rancher to v2.7.4 or below, your changes will be lost and the agents will re-deploy without your customizations. The Fleet agent will fall back to using its built-in default values when it re-deploys. If the Fleet version doesn't change during the downgrade, the re-deploy won't be immediate.
## RKE Cluster Config File Reference
Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the [options available](https://rancher.com/docs/rke/latest/en/config-options/) in an RKE installation, except for `system_images` configuration. The `system_images` option is not supported when creating a cluster with the Rancher UI or API.
For the complete reference for configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/)
### Config File Structure in Rancher
RKE (Rancher Kubernetes Engine) is the tool that Rancher uses to provision Kubernetes clusters. Rancher's cluster config files used to have the same structure as [RKE config files,](https://rancher.com/docs/rke/latest/en/example-yamls/) but the structure changed so that in Rancher, RKE cluster config items are separated from non-RKE config items. Therefore, configuration for your cluster needs to be nested under the `rancher_kubernetes_engine_config` directive in the cluster config file. Cluster config files created with earlier versions of Rancher will need to be updated for this format. An example cluster config file is included below.
<details id="v2.3.0-cluster-config-file">
<summary>Example Cluster Config File</summary>
```yaml
#
# Cluster Config
#
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: false
local_cluster_auth_endpoint:
  enabled: true
#
# Rancher Config
#
rancher_kubernetes_engine_config: # Your RKE template config goes here.
  addon_job_timeout: 30
  authentication:
    strategy: x509
  ignore_docker_version: true
#
#  # Currently only nginx ingress provider is supported.
#  # To disable ingress controller, set `provider: none`
#  # To enable ingress on specific nodes, use the node_selector, eg:
#    provider: nginx
#    node_selector:
#      app: ingress
#
  ingress:
    provider: nginx
  kubernetes_version: v1.15.3-rancher3-1
  monitoring:
    provider: metrics-server
#
#   If you are using calico on AWS
#
#    network:
#      plugin: calico
#      calico_network_provider:
#        cloud_provider: aws
#
#   # To specify flannel interface
#
#    network:
#      plugin: flannel
#      flannel_network_provider:
#        iface: eth1
#
#   # To specify flannel interface for canal plugin
#
#    network:
#      plugin: canal
#      canal_network_provider:
#        iface: eth1
#
  network:
    options:
      flannel_backend_type: vxlan
    plugin: canal
#
#    services:
#      kube-api:
#        service_cluster_ip_range: 10.43.0.0/16
#      kube-controller:
#        cluster_cidr: 10.42.0.0/16
#        service_cluster_ip_range: 10.43.0.0/16
#      kubelet:
#        cluster_domain: cluster.local
#        cluster_dns_server: 10.43.0.10
#
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
        retention: 6
        safe_timestamp: false
      creation: 12h
      extra_args:
        election-timeout: 5000
        heartbeat-interval: 500
      gid: 0
      retention: 72h
      snapshot: false
      uid: 0
    kube_api:
      always_pull_images: false
      pod_security_policy: false
      service_node_port_range: 30000-32767
  ssh_agent_auth: false
windows_prefered_cluster: false
```
</details>
### Default DNS provider
The table below indicates which DNS provider is deployed by default. See the [RKE documentation on DNS providers](https://rancher.com/docs/rke/latest/en/config-options/add-ons/dns/) for more information on how to configure a different DNS provider. CoreDNS can only be used on Kubernetes v1.12.0 and higher.
| Rancher version | Kubernetes version | Default DNS provider |
|-------------|--------------------|----------------------|
| v2.2.5 and higher | v1.14.0 and higher | CoreDNS |
| v2.2.5 and higher | v1.13.x and lower | kube-dns |
| v2.2.4 and lower | any | kube-dns |
## Rancher Specific Parameters in YAML
Besides the RKE config file options, there are also Rancher-specific settings that can be configured in the Config File (YAML):
### docker_root_dir
See [Docker Root Directory](#docker-root-directory).
### enable_cluster_monitoring
Option to enable or disable [Cluster Monitoring](../../../integrations-in-rancher/monitoring-and-alerting/monitoring-and-alerting.md).
### enable_network_policy
Option to enable or disable Project Network Isolation.
Project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin.
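As a short sketch, these Rancher-specific keys sit at the top level of the cluster config file, alongside `rancher_kubernetes_engine_config`. The values shown are examples only:

```yaml
docker_root_dir: /var/lib/docker
enable_cluster_monitoring: false
enable_network_policy: true
```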
### local_cluster_auth_endpoint
See [Authorized Cluster Endpoint](#authorized-cluster-endpoint).
Example:
```yaml
local_cluster_auth_endpoint:
  enabled: true
  fqdn: "FQDN"
  ca_certs: |-
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```
### Custom Network Plug-in
You can add a custom network plug-in by using the [user-defined add-on functionality](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/) of RKE. You define any add-on that you want deployed after the Kubernetes cluster is deployed.
There are two ways that you can specify an add-on:
- [In-line Add-ons](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#in-line-add-ons)
- [Referencing YAML Files for Add-ons](https://rancher.com/docs/rke/latest/en/config-options/add-ons/user-defined-add-ons/#referencing-yaml-files-for-add-ons)
For an example of how to configure a custom network plug-in by editing the `cluster.yml`, refer to the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/add-ons/network-plugins/custom-network-plugin-example)

View File

@@ -0,0 +1,580 @@
---
title: RKE2 Cluster Configuration Reference
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration"/>
</head>
This section covers the configuration options that are available in Rancher for a new or existing RKE2 Kubernetes cluster.
## Overview
You can configure the Kubernetes options in one of the two following ways:
- [Rancher UI](#configuration-options-in-the-rancher-ui): Use the Rancher UI to select options that are commonly customized when setting up a Kubernetes cluster.
- [Cluster Config File](#cluster-config-file-reference): Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE2 config file. Using a config file allows you to set many additional [options](https://docs.rke2.io/install/configuration) available for an RKE2 installation.
## Editing Clusters in the Rancher UI
The Rancher UI provides two ways to edit a cluster:
1. With a form.
1. With YAML.
### Editing Clusters with a Form
The form covers the most frequently needed options for clusters.
To edit your cluster,
1. Click **☰ > Cluster Management**.
1. Go to the cluster you want to configure and click **⋮ > Edit Config**.
### Editing Clusters in YAML
For a complete reference of configurable options for RKE2 clusters in YAML, see the [RKE2 documentation](https://docs.rke2.io/install/configuration).
To edit your cluster in YAML:
1. Click **☰ > Cluster Management**.
1. Go to the cluster you want to configure and click **⋮ > Edit as YAML**.
1. Edit the RKE2 options under the `rkeConfig` directive.
## Configuration Options in the Rancher UI
### Machine Pool Configuration
This subsection covers generic machine pool configurations. For specific infrastructure provider configurations, refer to the following:
- [Azure](../downstream-cluster-configuration/machine-configuration/azure.md)
- [DigitalOcean](../downstream-cluster-configuration/machine-configuration/digitalocean.md)
- [Amazon EC2](../downstream-cluster-configuration/machine-configuration/amazon-ec2.md)
- [Google GCE](../downstream-cluster-configuration/machine-configuration/google-gce.md)
##### Pool Name
The name of the machine pool.
##### Machine Count
The number of machines in the pool.
##### Roles
Option to assign etcd, control plane, and worker roles to nodes.
#### Advanced
##### Auto Replace
The amount of time nodes can be unreachable before they are automatically deleted and replaced.
##### Drain Before Delete
Enables draining nodes by evicting all pods before the node is deleted.
##### Kubernetes Node Labels
Add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) to nodes to help with organization and object selection.
For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)
##### Taints
Add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to nodes, to prevent pods from being scheduled to or executed on the nodes, unless the pods have matching tolerations.
### Cluster Configuration
#### Basics
##### Kubernetes Version
The version of Kubernetes installed on your cluster nodes.
For details on upgrading or rolling back Kubernetes, refer to [this guide](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
##### Container Network Provider
The [Network Provider](https://kubernetes.io/docs/concepts/cluster-administration/networking/) that the cluster uses.
:::caution
After you launch the cluster, you cannot change your network provider. Therefore, choose which network provider you want to use carefully, as Kubernetes doesn't allow switching between network providers. Once a cluster is created with a network provider, changing network providers would require you to tear down the entire cluster and all its applications.
:::
Out of the box, Rancher is compatible with the following network providers:
- [Canal](https://github.com/projectcalico/canal)
- [Cilium](https://cilium.io/)*
- [Calico](https://docs.projectcalico.org/v3.11/introduction/)
- [Flannel](https://github.com/flannel-io/flannel)
- [Multus](https://github.com/k8snetworkplumbingwg/multus-cni)
\* When using [project network isolation](#project-network-isolation) in the [Cilium CNI](../../../faq/container-network-interface-providers.md#cilium), it is possible to enable cross-node ingress routing. Click the [CNI provider docs](../../../faq/container-network-interface-providers.md#ingress-routing-across-nodes-in-cilium) to learn more.
For more details on the different networking providers and how to configure them, please view our [RKE2 documentation](https://docs.rke2.io/networking/basic_network_options).
:::caution
When using `cilium` or `multus,cilium` as your container network interface provider, ensure the **Enable IPv6 Support** option is also enabled.
:::
##### Cloud Provider
You can configure a [Kubernetes cloud provider](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md). If you want to use dynamically provisioned [volumes and storage](../../../how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md) in Kubernetes, typically you must select the specific cloud provider in order to use it. For example, if you want to use Amazon EBS, you would need to select the `aws` cloud provider.
:::note
If the cloud provider you want to use is not listed as an option, you will need to use the [config file option](#cluster-config-file-reference) to configure the cloud provider. Please reference [this documentation](https://rancher.com/docs/rke/latest/en/config-options/cloud-providers/) on how to configure the cloud provider.
:::
##### Pod Security Admission Configuration Template
The default [pod security admission configuration template](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md) for the cluster.
##### Worker Compliance Profile
Select a [compliance benchmark](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/compliance-scan-guides.md) to validate the system configuration against.
##### Project Network Isolation
If your network provider allows project network isolation, you can choose whether to enable or disable inter-project communication.
Project network isolation is available if you are using any RKE2 network plugin that supports the enforcement of Kubernetes network policies, such as Canal.
##### CoreDNS
By default, [CoreDNS](https://coredns.io/) is installed as the DNS provider. If CoreDNS is not installed, you must install an alternate DNS provider yourself. Refer to the [RKE2 documentation](https://docs.rke2.io/networking/networking_services#coredns) for additional CoreDNS configurations.
##### NGINX Ingress
If you want to publish your applications in a high-availability configuration, and you're hosting your nodes with a cloud provider that doesn't have a native load-balancing feature, enable this option to use NGINX Ingress within the cluster.
Refer to the [RKE2 documentation](https://docs.rke2.io/networking/networking_services#nginx-ingress-controller) for additional configuration options.
##### Metrics Server
Option to enable or disable [Metrics Server](https://rancher.com/docs/rke/latest/en/config-options/add-ons/metrics-server/).
Each cloud provider capable of launching a cluster using RKE2 can collect metrics and monitoring data for your cluster nodes. Enable this option to view your node metrics from your cloud provider's portal.
#### Add-On Config
Additional Kubernetes manifests, managed as an [Add-on](https://kubernetes.io/docs/concepts/cluster-administration/addons/), to apply to the cluster on startup. Refer to the [RKE2 documentation](https://docs.rke2.io/helm#automatically-deploying-manifests-and-helm-charts) for details.
#### Agent Environment Vars
Option to set environment variables for [Rancher agents](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/about-rancher-agents.md). The environment variables can be set using key value pairs. Refer to the [RKE2 documentation](https://docs.rke2.io/reference/linux_agent_config) for more details.
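For example, a hedged sketch of proxy-related agent environment variables, assuming the `agentEnvVars` field under the cluster `spec`; the proxy address is a placeholder:

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
  # Assumed field placement; the proxy address below is a placeholder.
  agentEnvVars:
    - name: HTTP_PROXY
      value: http://proxy.example.com:8080
    - name: NO_PROXY
      value: localhost,127.0.0.1,10.0.0.0/8,cattle-system.svc
```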
#### etcd
##### Automatic Snapshots
Option to enable or disable recurring etcd snapshots. If enabled, users have the option to configure the frequency of snapshots. For details, refer to the [RKE2 documentation](https://docs.rke2.io/datastore/backup_restore#creating-snapshots). Note that with RKE2, snapshots are stored on each etcd node. This varies from RKE1 which only stores one snapshot per cluster.
##### Metrics
Option to choose whether to expose etcd metrics to the public or only within the cluster.
#### Networking
##### Cluster CIDR
IPv4 and/or IPv6 network CIDRs to use for pod IPs (default: `10.42.0.0/16`).
Example values:
- IPv4-only: `10.42.0.0/16`
- IPv6-only: `2001:cafe:42::/56`
- Dual-stack: `10.42.0.0/16,2001:cafe:42::/56`
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [RKE2 documentation: Dual-stack configuration](https://docs.rke2.io/networking/basic_network_options#dual-stack-configuration)
- [RKE2 documentation: IPv6-only setup](https://docs.rke2.io/networking/basic_network_options#ipv6-setup)
:::caution
You must configure the Cluster CIDR when you first create the cluster. You cannot change the Cluster CIDR on an existing cluster after it starts.
:::
:::caution
When using `cilium` or `multus,cilium` as your container network interface provider, ensure the **Enable IPv6 Support** option is also enabled.
:::
##### Service CIDR
IPv4/IPv6 network CIDRs to use for service IPs (default: `10.43.0.0/16`).
Example values:
- IPv4-only: `10.43.0.0/16`
- IPv6-only: `2001:cafe:43::/112`
- Dual-stack: `10.43.0.0/16,2001:cafe:43::/112`
For additional requirements and limitations related to dual-stack or IPv6-only networking, see the following resources:
- [RKE2 documentation: Dual-stack configuration](https://docs.rke2.io/networking/basic_network_options#dual-stack-configuration)
- [RKE2 documentation: IPv6-only setup](https://docs.rke2.io/networking/basic_network_options#ipv6-setup)
:::caution
You must configure the Service CIDR when you first create the cluster. You cannot change the Service CIDR on an existing cluster after it starts.
:::
:::caution
When using `cilium` or `multus,cilium` as your container network interface provider, ensure the **Enable IPv6 Support** option is also enabled.
:::
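As a sketch, the dual-stack values above could be set through the config file's `machineGlobalConfig` section (see the [config file reference](#cluster-config-file-reference) below). These CIDRs are the example values listed in this section:

```yaml
spec:
  rkeConfig:
    machineGlobalConfig:
      # Dual-stack example values from this section.
      cluster-cidr: 10.42.0.0/16,2001:cafe:42::/56
      service-cidr: 10.43.0.0/16,2001:cafe:43::/112
```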
##### Cluster DNS
The IPv4 cluster IP for the CoreDNS service. It should be within your Service CIDR range (default: `10.43.0.10`).
##### Cluster Domain
Select the domain for the cluster. The default is `cluster.local`.
##### NodePort Service Port Range
Option to change the range of ports that can be used for [NodePort services](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport). The default is `30000-32767`.
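A minimal sketch of changing the range through `machineGlobalConfig`; the value shown is the default:

```yaml
spec:
  rkeConfig:
    machineGlobalConfig:
      # Default NodePort range; adjust as needed.
      service-node-port-range: 30000-32767
```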
##### Truncate Hostnames
Option to truncate hostnames to 15 characters or fewer. You can only set this field during the initial creation of the cluster. You can't enable or disable the 15-character limit after cluster creation.
This setting only affects machine-provisioned clusters. Since custom clusters set hostnames during their own node creation process, which occurs outside of Rancher, this field doesn't restrict custom cluster hostname length.
Truncating hostnames in a cluster improves compatibility with Windows-based systems. Although Kubernetes allows hostnames up to 63 characters in length, systems that use NetBIOS restrict hostnames to 15 characters or fewer.
##### TLS Alternate Names
Add hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert.
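As a sketch, additional Subject Alternative Names can be supplied through the RKE2 `tls-san` option in `machineGlobalConfig`. The hostname and address below are placeholders:

```yaml
spec:
  rkeConfig:
    machineGlobalConfig:
      tls-san:
        # Placeholder values; replace with your own hostnames or IPs.
        - rancher-cluster.example.com
        - 203.0.113.10
```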
##### Authorized Cluster Endpoint
Authorized Cluster Endpoint can be used to directly access the Kubernetes API server, without requiring communication through Rancher.
This is enabled by default in Rancher-launched Kubernetes clusters, using the IP of the node with the `controlplane` role and the default Kubernetes self-signed certificates.
For more detail on how an authorized cluster endpoint works and why it is used, refer to the [architecture section.](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint)
We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the [recommended architecture section.](../../rancher-manager-architecture/architecture-recommendations.md#architecture-for-an-authorized-cluster-endpoint-ace)
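A minimal sketch of enabling ACE in the cluster YAML, assuming the `localClusterAuthEndpoint` field shown in the example config below; the FQDN is a placeholder and the CA certificate is elided:

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
  localClusterAuthEndpoint:
    enabled: true
    # Placeholder FQDN; point it at the load balancer in front of the control plane nodes.
    fqdn: ace.example.com
    caCerts: |-
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
```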
##### Stack Preference
Choose the networking stack for the cluster. This option affects:
- The address used for health and readiness probes of components such as Calico, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, and kubelet.
- The server URL in the `authentication-token-webhook-config-file` for the Authorized Cluster Endpoint.
- The `advertise-client-urls` setting for etcd during snapshot restoration.
Options are `ipv4`, `ipv6`, `dual`:
- When set to `ipv4`, the cluster uses `127.0.0.1`
- When set to `ipv6`, the cluster uses `[::1]`
- When set to `dual`, the cluster uses `localhost`
The stack preference must match the cluster's networking configuration:
- Set to `ipv4` for IPv4-only clusters
- Set to `ipv6` for IPv6-only clusters
- Set to `dual` for dual-stack clusters
:::caution
Ensuring the loopback address configuration is correct is critical for successful cluster provisioning.
For more information, refer to the [Node Requirements](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md) page.
:::
#### Registries
Select the image repository to pull Rancher images from. For more details and configuration options, see the [RKE2 documentation](https://docs.rke2.io/install/private_registry).
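As a rough sketch, registry mirrors and credentials can be declared under `rkeConfig.registries`. The registry endpoint and secret name below are placeholders, so check the RKE2 private registry documentation linked above for the authoritative fields:

```yaml
spec:
  rkeConfig:
    registries:
      mirrors:
        docker.io:
          endpoint:
            # Placeholder mirror endpoint.
            - https://registry.example.com:5000
      configs:
        "registry.example.com:5000":
          # Placeholder secret holding the registry username and password.
          authConfigSecretName: registry-credentials
          insecureSkipVerify: false
```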
#### Upgrade Strategy
##### Control Plane Concurrency
Select how many nodes can be upgraded at the same time. Can be a fixed number or percentage.
##### Worker Concurrency
Select how many nodes can be upgraded at the same time. Can be a fixed number or percentage.
##### Drain Nodes (Control Plane)
Option to remove all pods from the node prior to upgrading.
##### Drain Nodes (Worker Nodes)
Option to remove all pods from the node prior to upgrading.
#### Advanced
Option to set kubelet options for different nodes. For available options, refer to the [Kubernetes documentation](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/).
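For instance, a sketch of setting a kubelet flag only on worker nodes via `machineSelectorConfig`; the `max-pods` value and the worker-role label are illustrative assumptions:

```yaml
spec:
  rkeConfig:
    machineSelectorConfig:
      - config:
          kubelet-arg:
            # Illustrative kubelet flag; adjust to your needs.
            - max-pods=250
        machineLabelSelector:
          matchLabels:
            # Assumed label for worker machines, analogous to the
            # control-plane-role label used in the example config below.
            rke.cattle.io/worker-role: 'true'
```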
## Cluster Config File Reference
Editing clusters in YAML allows you to set the [options available](https://docs.rke2.io/install/configuration) in an RKE2 installation, including those already listed in [Configuration Options in the Rancher UI](#configuration-options-in-the-rancher-ui), as well as set Rancher-specific parameters.
<details>
<summary>
<b>Example Cluster Config File Snippet</b>
</summary>
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
  cloudCredentialSecretName: cattle-global-data:cc-s879v
  kubernetesVersion: v1.25.12+rke2r1
  localClusterAuthEndpoint: {}
  rkeConfig:
    additionalManifest: ""
    chartValues:
      rke2-calico: {}
    etcd:
      snapshotRetention: 5
      snapshotScheduleCron: 0 */5 * * *
    machineGlobalConfig:
      cni: calico
      disable-kube-proxy: false
      etcd-expose-metrics: false
      profile: null
      kube-apiserver-arg:
        - audit-policy-file=/etc/rancher/rke2/user-audit-policy.yaml
        - audit-log-path=/etc/rancher/rke2/user-audit.logs
    machinePools:
      - controlPlaneRole: true
        etcdRole: true
        machineConfigRef:
          kind: Amazonec2Config
          name: nc-test-pool1-pwl5h
        name: pool1
        quantity: 1
        unhealthyNodeTimeout: 0s
        workerRole: true
    machineSelectorConfig:
      - config:
          protect-kernel-defaults: false
    machineSelectorFiles:
      - fileSources:
          - configMap:
              name: ''
            secret:
              name: audit-policy
              items:
                - key: audit-policy
                  path: /etc/rancher/rke2/user-audit-policy.yaml
        machineLabelSelector:
          matchLabels:
            rke.cattle.io/control-plane-role: 'true'
    registries: {}
    upgradeStrategy:
      controlPlaneConcurrency: "1"
      controlPlaneDrainOptions:
        deleteEmptyDirData: true
        enabled: true
        gracePeriod: -1
        ignoreDaemonSets: true
        timeout: 120
      workerConcurrency: "1"
      workerDrainOptions:
        deleteEmptyDirData: true
        enabled: true
        gracePeriod: -1
        ignoreDaemonSets: true
        timeout: 120
```
</details>
### additionalManifest
Specify additional manifests to deliver to the control plane nodes.
The value is a string, and will be placed at the path `/var/lib/rancher/rke2/server/manifests/rancher/addons.yaml` on target nodes.
Example:
```yaml
additionalManifest: |-
  apiVersion: v1
  kind: Namespace
  metadata:
    name: name-xxxx
```
:::note
If you want to customize system charts, you should use the `chartValues` field as described below.
Alternatives, such as using a HelmChartConfig to customize the system charts via `additionalManifest`, can cause unexpected behavior, due to having multiple HelmChartConfigs for the same chart.
:::
### chartValues
Specify the values for the system charts installed by RKE2.
For more information about how RKE2 manages packaged components, please refer to the [RKE2 documentation](https://docs.rke2.io/helm).
Example:
```yaml
chartValues:
  chart-name:
    key: value
```
### machineGlobalConfig
Specify RKE2 configurations. Any configuration change made here will apply to every node. The configuration options available in the [standalone version of RKE2](https://docs.rke2.io/reference/server_config) can be applied here.
Example:
```yaml
machineGlobalConfig:
  etcd-arg:
    - key1=value1
    - key2=value2
```
There are some configuration options that can't be changed when provisioning via Rancher:
- data-dir (folder to hold state), which defaults to `/var/lib/rancher/rke2`.
To save you from having to place files on nodes beforehand, Rancher expects the following options to contain the file content directly in the configuration, whereas standalone RKE2 expects them to be file paths:
- audit-policy-file
- cloud-provider-config
- private-registry
Rancher delivers the files to the path `/var/lib/rancher/rke2/etc/config-files/<option>` on target nodes, and sets the proper options on the RKE2 server.
Example:
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
  rkeConfig:
    machineGlobalConfig:
      audit-policy-file:
        apiVersion: audit.k8s.io/v1
        kind: Policy
        rules:
          - level: RequestResponse
            resources:
              - group: ""
                resources:
                  - pods
```
### machineSelectorConfig
`machineSelectorConfig` is the same as [`machineGlobalConfig`](#machineglobalconfig) except that a [label](#kubernetes-node-labels) selector can be specified with the configuration. The configuration will only be applied to nodes that match the provided label selector.
Multiple `config` entries are allowed, each specifying their own `machineLabelSelector`. A user can specify `matchExpressions`, `matchLabels`, both, or neither. Omitting the `machineLabelSelector` section of this field has the same effect as putting the config in the `machineGlobalConfig` section.
Example:
```yaml
machineSelectorConfig:
  - config:
      config-key: config-value
    machineLabelSelector:
      matchExpressions:
        - key: example-key
          operator: string # Valid operators are In, NotIn, Exists and DoesNotExist.
          values:
            - example-value1
            - example-value2
      matchLabels:
        key1: value1
        key2: value2
```
### machineSelectorFiles
:::note
This feature is available in Rancher v2.7.2 and later.
:::
Deliver files to nodes, so that the files can be in place before initiating RKE2 server or agent processes.
The content of the file is retrieved from either a secret or a configmap. The target nodes are filtered by the `machineLabelSelector`.
Example:
```yaml
machineSelectorFiles:
  - fileSources:
      - secret:
          items:
            - key: example-key
              path: path-to-put-the-file-on-nodes
              permissions: 644 (optional)
              hash: base64-encoded-hash-of-the-content (optional)
          name: example-secret-name
    machineLabelSelector:
      matchExpressions:
        - key: example-key
          operator: string # Valid operators are In, NotIn, Exists and DoesNotExist.
          values:
            - example-value1
            - example-value2
      matchLabels:
        key1: value1
        key2: value2
  - fileSources:
      - configMap:
          items:
            - key: example-key
              path: path-to-put-the-file-on-nodes
              permissions: 644 (optional)
              hash: base64-encoded-hash-of-the-content (optional)
          name: example-configmap-name
    machineLabelSelector:
      matchExpressions:
        - key: example-key
          operator: string # Valid operators are In, NotIn, Exists and DoesNotExist.
          values:
            - example-value1
            - example-value2
      matchLabels:
        key1: value1
        key2: value2
```
The secret or configmap must meet the following requirements:
1. It must be in the `fleet-default` namespace where the Cluster object exists.
2. It must have the annotation `rke.cattle.io/object-authorized-for-clusters: cluster-name1,cluster-name2`, which permits the target clusters to use it.
:::tip
Rancher Dashboard provides an easy-to-use form for creating the secret or configmap.
:::
Example:
```yaml
apiVersion: v1
data:
  audit-policy: >-
    IyBMb2cgYWxsIHJlcXVlc3RzIGF0IHRoZSBNZXRhZGF0YSBsZXZlbC4KYXBpVmVyc2lvbjogYXVkaXQuazhzLmlvL3YxCmtpbmQ6IFBvbGljeQpydWxlczoKLSBsZXZlbDogTWV0YWRhdGE=
kind: Secret
metadata:
  annotations:
    rke.cattle.io/object-authorized-for-clusters: cluster1
  name: name1
  namespace: fleet-default
```

View File

@@ -0,0 +1,49 @@
---
title: Syncing Hosted Clusters
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/sync-clusters"/>
</head>
Syncing allows Rancher to update cluster values so that they're up to date with the corresponding cluster object hosted in AKS, EKS or GKE. This enables sources other than Rancher to own a hosted cluster's state.
:::warning
You may accidentally overwrite the state from one source if you simultaneously process an update from another source. This may also occur if you process an update from one source within 5 minutes of finishing an update from another source.
:::
### How it works
Two fields on the Rancher Cluster object are key to understanding how syncing works:
1. The config object for the cluster, located on the Spec of the Cluster:
* For AKS, the field is called AKSConfig
* For EKS, the field is called EKSConfig
* For GKE, the field is called GKEConfig
2. The UpstreamSpec object
* For AKS, this is located on the AKSStatus field on the Status of the Cluster.
* For EKS, this is located on the EKSStatus field on the Status of the Cluster.
* For GKE, this is located on the GKEStatus field on the Status of the Cluster.
The struct types that define these objects can be found in their corresponding operator projects:
* [aks-operator](https://github.com/rancher/aks-operator/blob/master/pkg/apis/aks.cattle.io/v1/types.go)
* [eks-operator](https://github.com/rancher/eks-operator/blob/master/pkg/apis/eks.cattle.io/v1/types.go)
* [gke-operator](https://github.com/rancher/gke-operator/blob/master/pkg/apis/gke.cattle.io/v1/types.go)
All fields are nillable, except for the following: the cluster name, the location (region or zone), Imported, and the cloud credential reference.
The AKSConfig, EKSConfig or GKEConfig represents the desired state. Nil values are ignored. Fields that are non-nil in the config object can be thought of as managed. When a cluster is created in Rancher, all fields are non-nil and therefore managed. When a pre-existing cluster is registered in Rancher, all nillable fields are set to nil and aren't managed. Those fields become managed once their value has been changed by Rancher.
UpstreamSpec represents the cluster as it is in the hosted Kubernetes provider. It's refreshed every 5 minutes. After the UpstreamSpec is refreshed, Rancher checks if the cluster has an update in progress. If it's currently updating, nothing further is done. If it is not currently updating, any managed fields on AKSConfig, EKSConfig or GKEConfig are overwritten with their corresponding value from the recently updated UpstreamSpec.
:::warning
When you import a cluster from a cloud provider into Rancher, UpstreamSpec represents the cluster state and Config is empty. If you then update the imported cluster through the Rancher UI, both UpstreamSpec and Config become non-null. Any further updates to the cluster should be applied through Rancher. This is because there is no safe way to determine if changes originating from UpstreamSpec represent the desired state or just a mismatch with Config. If you update the imported cluster through the cloud provider console after you apply any updates through the Rancher UI, the controller will deploy a rollback and the content of Config will be considered the desired state.
:::
The effective desired state can be thought of as the UpstreamSpec, plus all non-nil fields in the AKSConfig, EKSConfig or GKEConfig. This is what is displayed in the UI.
If Rancher and another source attempt to update a cluster at the same time, or within 5 minutes of an update finishing, any managed fields are likely to get caught in a race condition. To use EKS as an example, a cluster may have PrivateAccess as a managed field. If PrivateAccess is false and then enabled in the EKS console at 11:01, and tags are updated from Rancher before 11:05, then the value is likely to be overwritten. This can also occur if tags are updated while the cluster is still processing the update. The issue described in this example shouldn't occur if the cluster is registered and the PrivateAccess fields are nil.

View File

@@ -0,0 +1,151 @@
---
title: Launching Kubernetes on Existing Custom Nodes
description: To create a cluster with custom nodes, you'll need to access servers in your cluster and provision them according to Rancher requirements
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes"/>
</head>
When you create a custom cluster, Rancher can use RKE2/K3s to create a Kubernetes cluster on on-prem bare-metal servers, on-prem virtual machines, or any node hosted by an infrastructure provider.
To use this option, you need access to the servers that will be part of your Kubernetes cluster. Provision each server according to the [requirements](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md). Then, run the command provided in the Rancher UI on each server to convert it into a Kubernetes node.
This section describes how to set up a custom cluster.
## Creating a Cluster with Custom Nodes
:::note Want to use Windows hosts as Kubernetes workers?
See [Configuring Custom Clusters for Windows](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/use-windows-clusters.md) before you start.
:::
### 1. Provision a Linux Host
Begin creation of a custom cluster by provisioning a Linux host. Your host can be:
- A cloud-hosted virtual machine (VM)
- An on-prem VM
- A bare-metal server
If you want to reuse a node from a previous custom cluster, [clean the node](../../../../how-to-guides/new-user-guides/manage-clusters/clean-cluster-nodes.md) before using it in a cluster again. If you reuse a node that hasn't been cleaned, cluster provisioning may fail.
Provision the host according to the [installation requirements](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md) and the [checklist for production-ready clusters.](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/checklist-for-production-ready-clusters.md)
:::note IPv6-only cluster
For an IPv6-only cluster, ensure that your operating system correctly configures the `/etc/hosts` file.
```
::1 localhost
```
:::
### 2. Create the Custom Cluster
1. Click **☰ > Cluster Management**.
1. On the **Clusters** page, click **Create**.
1. Click **Custom**.
1. Enter a **Cluster Name**.
1. Use the **Cluster Configuration** section to set up the cluster. For more information, see [RKE2 Cluster Configuration Reference](../rke2-cluster-configuration.md) and [K3s Cluster Configuration Reference](../k3s-cluster-configuration.md).
:::note Windows nodes
To learn more about using Windows nodes as Kubernetes workers, see [Launching Kubernetes on Windows Clusters](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/use-windows-clusters.md).
:::
1. Click **Create**.
**Result:** The UI redirects to the **Registration** page, where you can generate the registration command for your nodes.
1. From **Node Role**, select the roles you want a cluster node to fill. You must provision at least one node for each role: etcd, worker, and control plane. A custom cluster requires all three roles to finish provisioning. For more information on roles, see [Roles for Nodes in Kubernetes Clusters](../../../kubernetes-concepts.md#roles-for-nodes-in-kubernetes-clusters).
:::note Bare-Metal Server
If you plan to dedicate bare-metal servers to each role, you must provision a bare-metal server for each role (i.e., provision multiple bare-metal servers).
:::
1. **Optional**: Click **Show Advanced** to configure additional settings such as specifying the IP address(es), overriding the node hostname, or adding [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to the node.
:::note
The **Node Public IP** and **Node Private IP** fields can accept either a single address or a comma-separated list of addresses (for example: `10.0.0.5,2001:db8::1`).
:::
:::note IPv6-only or Dual-stack Cluster
In both IPv6-only and dual-stack clusters, you should specify the node's **IPv6 address** as the **Node Private IP**.
:::
1. Copy the command displayed on screen to your clipboard.
1. Log in to your Linux host using your preferred shell, such as PuTTy or a remote Terminal connection. Run the command copied to your clipboard.
:::note
Repeat steps 7-10 if you want to dedicate specific hosts to specific node roles. Repeat the steps as many times as needed.
:::
**Result:**
The cluster is created and transitions to the **Updating** state while Rancher initializes and provisions cluster components.
You can access your cluster after its state is updated to **Active**.
**Active** clusters are assigned two Projects:
- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
### 3. Amazon Only: Tag Resources
If you have configured your cluster to use Amazon as **Cloud Provider**, tag your AWS resources with a cluster ID.
[Amazon Documentation: Tagging Your Amazon EC2 Resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html)
:::note
You can use Amazon EC2 instances without configuring a cloud provider in Kubernetes. You only have to configure the cloud provider if you want to use specific Kubernetes cloud provider functionality. For more information, see [Kubernetes Cloud Providers](https://github.com/kubernetes/website/blob/release-1.18/content/en/docs/concepts/cluster-administration/cloud-providers.md)
:::
The following resources need to be tagged with a `ClusterID`:
- **Nodes**: All hosts added in Rancher.
- **Subnet**: The subnet used for your cluster.
- **Security Group**: The security group used for your cluster.
:::note
Do not tag multiple security groups. Tagging multiple groups generates an error when creating an Elastic Load Balancer.
:::
The tag that should be used is:
```
Key=kubernetes.io/cluster/<CLUSTERID>, Value=owned
```
`<CLUSTERID>` can be any string you choose. However, the same string must be used on every resource you tag. Setting the tag value to `owned` informs the cluster that all resources tagged with the `<CLUSTERID>` are owned and managed by this cluster.
If you share resources between clusters, you can change the tag to:
```
Key=kubernetes.io/cluster/<CLUSTERID>, Value=shared
```
## Optional Next Steps
After creating your cluster, you can access it through the Rancher UI. As a best practice, we recommend setting up these alternate ways of accessing your cluster:
- **Access your cluster with the kubectl CLI:** Follow [these steps](../../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#accessing-clusters-with-kubectl-from-your-workstation) to access clusters with kubectl on your workstation. In this case, you will be authenticated through the Rancher server's authentication proxy, then Rancher will connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI.
- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps](../../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can't connect to Rancher, you can still access the cluster.