Archive v2.5 docs (#1634)

* Update docusaurus.config.js/Remove v2.5 redirects and update v2.5 label and path

* Update version-2.5-sidebars.json with notice

* Remove v2.5 files/Add v2.5 files to archived_docs folder

* Fix broken link

* Fix broken link/typo

* Fix broken links
@@ -0,0 +1,425 @@
---
title: EKS Cluster Configuration Reference
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/eks-cluster-configuration"/>
</head>
<Tabs>
<TabItem value="Rancher v2.5.6+">
### Account Access
<a id="account-access-2-5-6"></a>
Complete each drop-down and field using the information obtained for your IAM policy.
| Setting | Description |
| ---------- | -------------------------------------------------------------------------------------------------------------------- |
| Region | From the drop-down choose the geographical region in which to build your cluster. |
| Cloud Credentials | Select the cloud credentials that you created for your IAM policy. For more information on creating cloud credentials in Rancher, refer to [this page.](../../user-settings/manage-cloud-credentials.md) |
### Service Role
<a id="service-role-2-5-6"></a>
Choose a [service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html).
Service Role | Description
-------------|---------------------------
Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster.
Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you've already created within AWS. For more information on creating a custom service role in AWS, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#create-service-linked-role).
### Secrets Encryption
<a id="secrets-encryption-2-5-6"></a>
Optional: To encrypt secrets, select or enter a key created in [AWS Key Management Service (KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html).
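If you don't have a key yet, one way to create one is with the AWS CLI. The sketch below is illustrative only; the region and alias are placeholders:

```bash
# Create a customer-managed KMS key and a friendly alias for it
# (placeholders: region and alias; note the key ID from the output).
aws kms create-key --region us-west-2 --description "EKS secrets encryption"
aws kms create-alias --region us-west-2 \
  --alias-name alias/eks-secrets \
  --target-key-id <key-id-from-create-key-output>
```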
### API Server Endpoint Access
<a id="api-server-endpoint-access-2-5-6"></a>
Configuring Public/Private API access is an advanced use case. For details, refer to the EKS cluster endpoint access control [documentation.](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html)
### Private-only API Endpoints
If you enable private and disable public API endpoint access when creating a cluster, then there is an extra step you must take in order for Rancher to connect to the cluster successfully. In this case, a pop-up will be displayed with a command that you will run on the cluster to register it with Rancher. Once the cluster is provisioned, you can run the displayed command anywhere you can connect to the cluster's Kubernetes API.
There are two ways to avoid this extra manual step:
- You can create the cluster with both private and public API endpoint access on cluster creation. You can disable public access after the cluster is created and in an active state and Rancher will continue to communicate with the EKS cluster.
- You can ensure that Rancher shares a subnet with the EKS cluster. Then security groups can be used to enable Rancher to communicate with the cluster's API endpoint. In this case, the command to register the cluster is not needed, and Rancher will be able to communicate with your cluster. For more information on configuring security groups, refer to the [security groups documentation](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html).
### Public Access Endpoints
<a id="public-access-endpoints-2-5-6"></a>
Optionally limit access to the public endpoint via explicit CIDR blocks.
If you limit access to specific CIDR blocks, it is recommended that you also enable private access to avoid losing network communication to the cluster.
For Rancher to maintain communication with the cluster, one of the following must be true:
- Rancher's IP is part of an allowed CIDR block
- Private access is enabled, and Rancher shares a subnet with the cluster and has network access to the cluster, which can be configured with a security group
For more information about public and private access to the cluster endpoint, refer to the [Amazon EKS documentation.](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html)
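For reference, the equivalent change on an EKS cluster managed outside of Rancher can be sketched with the AWS CLI; the cluster name, region, and CIDR below are placeholders:

```bash
# Keep private access on while restricting the public endpoint to one CIDR
# (placeholders: cluster name, region, and CIDR block).
aws eks update-cluster-config \
  --region us-west-2 \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs="203.0.113.0/24"
```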
### Subnet
<a id="subnet-2-5-6"></a>
| Option | Description |
| ------- | ------------ |
| Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC with 3 public subnets. |
| Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your Control Plane and nodes to use a VPC and Subnet that you've already [created in AWS](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html). |
For more information, refer to the AWS documentation for [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html). Follow one of the sets of instructions below based on your selection from the previous step.
- [What Is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
- [VPCs and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)
### Security Group
<a id="security-group-2-5-6"></a>
Amazon Documentation:
- [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)
- [Security Groups for Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)
- [Create a Security Group](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html#getting-started-create-security-group)
### Logging
<a id="logging-2-5-6"></a>
Configure control plane logs to send to Amazon CloudWatch. You are charged the standard CloudWatch Logs data ingestion and storage costs for any logs sent to CloudWatch Logs from your clusters.
Each log type corresponds to a component of the Kubernetes control plane. To learn more about these components, see [Kubernetes Components](https://kubernetes.io/docs/concepts/overview/components/) in the Kubernetes documentation.
For more information on EKS control plane logging, refer to the official [documentation.](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)
### Managed Node Groups
<a id="managed-node-groups-2-5-6"></a>
Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
For more information about how node groups work and how they are configured, refer to the [EKS documentation.](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html)
#### User-provided Launch Templates
You can provide your own launch template ID and version to configure the EC2 instances in a node group. If you provide the launch template, none of the template settings will be configurable from Rancher. You must set all of the required options listed below in your launch template.
Also, if you provide the launch template, you can only update the template version, not the template ID. To use a new template ID, create a new managed node group.
| Option | Description | Required/Optional |
| ------ | ----------- | ----------------- |
| Instance Type | Choose the [hardware specs](https://aws.amazon.com/ec2/instance-types/) for the instance you're provisioning. | Required |
| Image ID | Specify a custom AMI for the nodes. Custom AMIs used with EKS must be [configured properly](https://aws.amazon.com/premiumsupport/knowledge-center/eks-custom-linux-ami/) | Optional |
| Node Volume Size | The launch template must specify an EBS volume with the desired size | Required |
| SSH Key | A key to be added to the instances to provide SSH access to the nodes | Optional |
| User Data | Cloud-init script in [MIME multi-part format](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-user-data) (see the sketch below this table) | Optional |
| Instance Resource Tags | Tag each EC2 instance in the node group | Optional |
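For example, a minimal user data document in MIME multi-part format might look like the following sketch (the shell commands are placeholders; see the AWS launch template documentation linked above for the full requirements):

```
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
# Placeholder for site-specific setup that runs before the node joins.
echo "running custom node setup" >> /var/log/custom-user-data.log

--==MYBOUNDARY==--
```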
#### Rancher-managed Launch Templates
If you do not specify a launch template, then you will be able to configure the above options in the Rancher UI, and all of them can be updated after creation. In order to take advantage of all of these options, Rancher will create and manage a launch template for you. Each cluster in Rancher will have one Rancher-managed launch template, and each managed node group that does not have a specified launch template will have one version of the managed launch template.

The name of this launch template will have the prefix "rancher-managed-lt-" followed by the display name of the cluster. In addition, the Rancher-managed launch template will be tagged with the key "rancher-managed-template" and value "do-not-modify-or-delete" to help identify it as Rancher-managed.

It is important that this launch template and its versions not be modified, deleted, or used with any other clusters or managed node groups. Doing so could result in your node groups being "degraded" and needing to be destroyed and recreated.
#### Custom AMIs
If you specify a custom AMI, whether in a launch template or in Rancher, then the image must be [configured properly](https://aws.amazon.com/premiumsupport/knowledge-center/eks-custom-linux-ami/) and you must provide user data to [bootstrap the node](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-custom-ami). This is considered an advanced use case and understanding the requirements is imperative.
If you specify a launch template that does not contain a custom AMI, then Amazon will use the [EKS-optimized AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) for the Kubernetes version and selected region. You can also select a [GPU enabled instance](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html#gpu-ami) for workloads that would benefit from it.
>**Note**
>The GPU enabled instance setting in Rancher is ignored if a custom AMI is provided, either in the dropdown or in a launch template.
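As a sketch of what bootstrapping can involve: images built from an EKS-optimized base include a `/etc/eks/bootstrap.sh` script that joins the node to the cluster. The cluster name and kubelet arguments below are placeholders, and fully custom images may need a different mechanism:

```bash
#!/bin/bash
# Minimal bootstrap sketch for a custom AMI derived from an EKS-optimized
# image (placeholders: cluster name and node label).
set -o errexit
/etc/eks/bootstrap.sh my-cluster \
  --kubelet-extra-args '--node-labels=node-group=custom-ami-pool'
```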
#### Spot instances
Spot instances are now [supported by EKS](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html#managed-node-group-capacity-types-spot). If a launch template is specified, Amazon recommends that the template not provide an instance type; instead, provide multiple instance types. If the "Request Spot Instances" checkbox is enabled for a node group, you will have the opportunity to provide multiple instance types.
>**Note**
>Any selection you made in the instance type dropdown will be ignored in this situation, and you must specify at least one instance type in the "Spot Instance Types" section. Furthermore, a launch template used with EKS cannot itself request spot instances; requesting spot instances must be part of the EKS configuration.
#### Node Group Settings
The following settings are also configurable. All of these except for the "Node Group Name" are editable after the node group is created.
| Option | Description |
| ------- | ------------ |
| Node Group Name | The name of the node group. |
| Desired ASG Size | The desired number of instances. |
| Maximum ASG Size | The maximum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. |
| Minimum ASG Size | The minimum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. |
| Labels | Kubernetes labels applied to the nodes in the managed node group. Note: Invalid labels can prevent upgrades or can prevent Rancher from starting. For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set) |
| Tags | These are tags for the managed node group and do not propagate to any of the associated resources. |
</TabItem>
<TabItem value="Rancher v2.5.0-v2.5.5">
### Changes in Rancher v2.5
More EKS options can be configured when you create an EKS cluster in Rancher, including the following:
- Managed node groups
- Desired size, minimum size, maximum size (requires the Cluster Autoscaler to be installed)
- Control plane logging
- Secrets encryption with KMS
The following capabilities have been added for configuring EKS clusters in Rancher:
- GPU support
- Exclusively use managed node groups that come with the most up-to-date AMIs
- Add new nodes
- Upgrade nodes
- Add and remove node groups
- Disable and enable private access
- Add restrictions to public access
- Use your cloud credentials to create the EKS cluster instead of passing in your access key and secret key
Due to the way that the cluster data is synced with EKS, if the cluster is modified both from another source, such as the EKS console, and from Rancher within the same five-minute window, some changes could be overwritten. For information about how the sync works and how to configure it, refer to [this section](#syncing).
### Account Access
<a id="account-access-2-5"></a>
Complete each drop-down and field using the information obtained for your IAM policy.
| Setting | Description |
| ---------- | -------------------------------------------------------------------------------------------------------------------- |
| Region | From the drop-down choose the geographical region in which to build your cluster. |
| Cloud Credentials | Select the cloud credentials that you created for your IAM policy. For more information on creating cloud credentials in Rancher, refer to [this page.](../../user-settings/manage-cloud-credentials.md) |
### Service Role
<a id="service-role-2-5"></a>
Choose a [service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html).
Service Role | Description
-------------|---------------------------
Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster.
Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you've already created within AWS. For more information on creating a custom service role in AWS, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#create-service-linked-role).
### Secrets Encryption
<a id="secrets-encryption-2-5"></a>
Optional: To encrypt secrets, select or enter a key created in [AWS Key Management Service (KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html).
### API Server Endpoint Access
<a id="api-server-endpoint-access-2-5"></a>
Configuring Public/Private API access is an advanced use case. For details, refer to the EKS cluster endpoint access control [documentation.](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html)
### Private-only API Endpoints
If you enable private and disable public API endpoint access when creating a cluster, then there is an extra step you must take in order for Rancher to connect to the cluster successfully. In this case, a pop-up will be displayed with a command that you will run on the cluster to register it with Rancher. Once the cluster is provisioned, you can run the displayed command anywhere you can connect to the cluster's Kubernetes API.
There are two ways to avoid this extra manual step:
- You can create the cluster with both private and public API endpoint access on cluster creation. You can disable public access after the cluster is created and in an active state and Rancher will continue to communicate with the EKS cluster.
- You can ensure that Rancher shares a subnet with the EKS cluster. Then security groups can be used to enable Rancher to communicate with the cluster's API endpoint. In this case, the command to register the cluster is not needed, and Rancher will be able to communicate with your cluster. For more information on configuring security groups, refer to the [security groups documentation](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html).
### Public Access Endpoints
<a id="public-access-endpoints-2-5"></a>
Optionally limit access to the public endpoint via explicit CIDR blocks.
If you limit access to specific CIDR blocks, it is recommended that you also enable private access to avoid losing network communication to the cluster.
For Rancher to maintain communication with the cluster, one of the following must be true:
- Rancher's IP is part of an allowed CIDR block
- Private access is enabled, and Rancher shares a subnet with the cluster and has network access to the cluster, which can be configured with a security group
For more information about public and private access to the cluster endpoint, refer to the [Amazon EKS documentation.](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html)
### Subnet
<a id="subnet-2-5"></a>
| Option | Description |
| ------- | ------------ |
| Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC with 3 public subnets. |
| Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your Control Plane and nodes to use a VPC and Subnet that you've already [created in AWS](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html). |
For more information, refer to the AWS documentation for [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html). Follow one of the sets of instructions below based on your selection from the previous step.
- [What Is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
- [VPCs and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)
### Security Group
<a id="security-group-2-5"></a>
Amazon Documentation:
- [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)
- [Security Groups for Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)
- [Create a Security Group](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html#getting-started-create-security-group)
### Logging
<a id="logging-2-5"></a>
Configure control plane logs to send to Amazon CloudWatch. You are charged the standard CloudWatch Logs data ingestion and storage costs for any logs sent to CloudWatch Logs from your clusters.
Each log type corresponds to a component of the Kubernetes control plane. To learn more about these components, see [Kubernetes Components](https://kubernetes.io/docs/concepts/overview/components/) in the Kubernetes documentation.
For more information on EKS control plane logging, refer to the official [documentation.](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)
### Managed Node Groups
<a id="managed-node-groups-2-5"></a>
Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
For more information about how node groups work and how they are configured, refer to the [EKS documentation.](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html)
Amazon will use the [EKS-optimized AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) for the Kubernetes version. You can configure whether the AMI has GPU enabled.
| Option | Description |
| ------- | ------------ |
| Instance Type | Choose the [hardware specs](https://aws.amazon.com/ec2/instance-types/) for the instance you're provisioning. |
| Maximum ASG Size | The maximum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. |
| Minimum ASG Size | The minimum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. |
</TabItem>
<TabItem value="Rancher prior to v2.5">
### Account Access
<a id="account-access-2-4"></a>
Complete each drop-down and field using the information obtained for your IAM policy.
| Setting | Description |
| ---------- | -------------------------------------------------------------------------------------------------------------------- |
| Region | From the drop-down choose the geographical region in which to build your cluster. |
| Access Key | Enter the access key that you created for your IAM policy. |
| Secret Key | Enter the secret key that you created for your IAM policy. |
### Service Role
<a id="service-role-2-4"></a>
Choose a [service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html).
Service Role | Description
-------------|---------------------------
Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster.
Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you've already created within AWS. For more information on creating a custom service role in AWS, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#create-service-linked-role).
### Public IP for Worker Nodes
<a id="public-ip-for-worker-nodes-2-4"></a>
Your selection for this option determines what options are available for **VPC & Subnet**.
Option | Description
-------|------------
Yes | When your cluster nodes are provisioned, they're assigned both a private and a public IP address.
No: Private IPs only | When your cluster nodes are provisioned, they're assigned only a private IP address.<br/><br/>If you choose this option, you must also choose a **VPC & Subnet** that allow your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane.
### VPC & Subnet
<a id="vpc-and-subnet-2-4"></a>
The available options depend on the [public IP for worker nodes.](#public-ip-for-worker-nodes-2-4)
Option | Description
-------|------------
Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC and Subnet.
Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your nodes to use a VPC and Subnet that you've already [created in AWS](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html). If you choose this option, complete the remaining steps below.
For more information, refer to the AWS documentation for [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html). Follow one of the sets of instructions below based on your selection from the previous step.
- [What Is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
- [VPCs and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)
If you choose to assign a public IP address to your cluster's worker nodes, you have the option of choosing between a VPC that's automatically generated by Rancher (i.e., **Standard: Rancher generated VPC and Subnet**), or a VPC that you've already created with AWS (i.e., **Custom: Choose from your existing VPC and Subnets**). Choose the option that best fits your use case.
<details id="yes">
<summary>Click to expand</summary>
If you're using **Custom: Choose from your existing VPC and Subnets**:
(If you're using **Standard**, skip to the [instance options](#select-instance-options-2-4).)
1. Make sure **Custom: Choose from your existing VPC and Subnets** is selected.
1. From the drop-down that displays, choose a VPC.
1. Click **Next: Select Subnets**. Then choose one of the **Subnets** that displays.
1. Click **Next: Select Security Group**.
</details>
If your worker nodes have Private IPs only, you must also choose a **VPC & Subnet** that allow your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane.
<details id="no">
<summary>Click to expand</summary>
Follow the steps below.
>**Tip:** When using only private IP addresses, you can provide your nodes internet access by creating a VPC with two sets of subnets, a private set and a public set. The private set should have its route tables configured to point toward a NAT in the public set (a CLI sketch follows this section). For more information on routing traffic from private subnets, see the [official AWS documentation](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html).
1. From the drop-down that displays, choose a VPC.
1. Click **Next: Select Subnets**. Then choose one of the **Subnets** that displays.
</details>
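A rough AWS CLI sketch of the NAT setup described in the tip above (all resource IDs are placeholders; a NAT gateway is used here, though a NAT instance works as well):

```bash
# Allocate an Elastic IP, create a NAT gateway in a public subnet, and
# route the private subnet's internet-bound traffic through it
# (placeholders: all resource IDs).
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0pub1234 --allocation-id eipalloc-0abc1234
aws ec2 create-route \
  --route-table-id rtb-0priv1234 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0abc1234
```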
### Security Group
<a id="security-group-2-4"></a>
Amazon Documentation:
- [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)
- [Security Groups for Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)
- [Create a Security Group](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html#getting-started-create-security-group)
### Instance Options
<a id="select-instance-options-2-4"></a>
Instance type and size of your worker nodes affects how many IP addresses each worker node will have available. See this [documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI) for more information.
Option | Description
-------|------------
Instance Type | Choose the [hardware specs](https://aws.amazon.com/ec2/instance-types/) for the instance you're provisioning.
Custom AMI Override | If you want to use a custom [Amazon Machine Image](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html#creating-an-ami) (AMI), specify it here. By default, Rancher will use the [EKS-optimized AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) for the EKS version that you chose.
Desired ASG Size | The number of instances that your cluster will provision.
User Data | Custom commands can be passed to perform automated configuration tasks. **WARNING: Modifying this may cause your nodes to be unable to join the cluster.** _Note: Available as of v2.2.0_
</TabItem>
</Tabs>
### Configuring the Refresh Interval
<Tabs>
<TabItem value="Rancher v2.5.8+">
The `eks-refresh-cron` setting is deprecated. It has been migrated to the `eks-refresh` setting, which is an integer representing seconds.
The default value is 300 seconds.
The syncing interval can be changed by running `kubectl edit setting eks-refresh`.
If the `eks-refresh-cron` setting was previously set, the migration will happen automatically.
The shorter the refresh window, the less likely any race conditions will occur, but it does increase the likelihood of encountering request limits that may be in place for AWS APIs.
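For a non-interactive alternative to `kubectl edit`, a patch like the following sketch can be used (this assumes the setting stores the interval in its `value` field, as Rancher settings generally do):

```bash
# Set the sync interval to 60 seconds, then confirm the value.
kubectl patch setting eks-refresh --type merge -p '{"value": "60"}'
kubectl get setting eks-refresh -o jsonpath='{.value}'
```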
</TabItem>
<TabItem value="Before v2.5.8">
It is possible to change the refresh interval through the setting `eks-refresh-cron`. This setting accepts values in the Cron format. The default is `*/5 * * * *`.
The shorter the refresh window, the less likely any race conditions will occur, but it does increase the likelihood of encountering request limits that may be in place for AWS APIs.
</TabItem>
</Tabs>
@@ -0,0 +1,455 @@
---
title: GKE Cluster Configuration Reference
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration"/>
</head>
<Tabs>
<TabItem value="Rancher v2.5.8+">
## Changes in v2.5.8
- We now support private GKE clusters. Note: This advanced setup can require more steps during the cluster provisioning process. For details, see [this section.](gke-private-clusters.md)
- [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc) are now supported.
- We now support more configuration options for Rancher managed GKE clusters:
- Project
- Network policy
- Network policy config
- Node pools and node configuration options:
- More image types are available for the nodes
- The maximum number of pods per node can be configured
- Node pools can be added while configuring the GKE cluster
- When provisioning a GKE cluster, you can now use reusable cloud credentials instead of using a service account token directly to create the cluster.
## Cluster Location
| Value | Description |
|--------|--------------|
| Location Type | Zonal or Regional. With GKE, you can create a cluster tailored to the availability requirements of your workload and your budget. By default, a cluster's nodes run in a single compute zone. When multiple zones are selected, the cluster's nodes will span multiple compute zones, while the control plane is located in a single zone. Regional clusters increase the availability of the control plane as well. For help choosing the type of cluster availability, refer to [these docs.](https://cloud.google.com/kubernetes-engine/docs/best-practices/scalability#choosing_a_regional_or_zonal_control_plane) |
| Zone | Each region in Compute Engine contains a number of zones. For more information about available regions and zones, refer to [these docs.](https://cloud.google.com/compute/docs/regions-zones#available) |
| Additional Zones | For zonal clusters, you can select additional zones to create a [multi-zone cluster.](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#multi-zonal_clusters) |
| Region | For [regional clusters,](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#regional_clusters) you can select a region. For more information about available regions and zones, refer to [this section](https://cloud.google.com/compute/docs/regions-zones#available). The first part of each zone name is the name of the region. |
## Cluster Options
### Kubernetes Version
_Mutable: yes_
For more information on GKE Kubernetes versions, refer to [these docs.](https://cloud.google.com/kubernetes-engine/versioning)
### Container Address Range
_Mutable: no_
The IP address range for pods in the cluster. Must be a valid CIDR range, e.g. 10.42.0.0/16. If not specified, a random range is automatically chosen from 10.0.0.0/8 and will exclude ranges already allocated to VMs, other clusters, or routes. Automatically chosen ranges may conflict with reserved IP addresses, dynamic routes, or routes within VPCs peering with the cluster.
### Network
_Mutable: no_
The Compute Engine Network that the cluster connects to. Routes and firewalls will be created using this network. If using [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc), the VPC networks that are shared with your project will appear here and will be available to select in this field. For more information, refer to [this page](https://cloud.google.com/vpc/docs/vpc#vpc_networks_and_subnets).
### Node Subnet / Subnet
_Mutable: no_
The Compute Engine subnetwork that the cluster connects to. This subnetwork must belong to the network specified in the **Network** field. Select an existing subnetwork, or select "Auto Create Subnetwork" to have one automatically created. If not using an existing network, **Subnetwork Name** is required to generate one. If using [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc), the VPC subnets that are shared to your project will appear here. If using a Shared VPC network, you cannot select "Auto Create Subnetwork". For more information, refer to [this page.](https://cloud.google.com/vpc/docs/vpc#vpc_networks_and_subnets)
### Subnetwork Name
_Mutable: no_
Automatically create a subnetwork with the provided name. Required if "Auto Create Subnetwork" is selected for **Node Subnet** or **Subnet**. For more information on subnetworks, refer to [this page.](https://cloud.google.com/vpc/docs/vpc#vpc_networks_and_subnets)
### Ip Aliases
_Mutable: no_
Enable [alias IPs](https://cloud.google.com/vpc/docs/alias-ip). This enables VPC-native traffic routing. Required if using [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc).
### Network Policy
_Mutable: yes_
Enable network policy enforcement on the cluster. A network policy defines the level of communication that can occur between pods and services in the cluster. For more information, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy)
### Node Ipv4 CIDR Block
_Mutable: no_
The IP address range of the instance IPs in this cluster. Can be set if "Auto Create Subnetwork" is selected for **Node Subnet** or **Subnet**. Must be a valid CIDR range, e.g. 10.96.0.0/14. For more information on how to determine the IP address range, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing)
### Cluster Secondary Range Name
_Mutable: no_
The name of an existing secondary range for Pod IP addresses. If selected, **Cluster Pod Address Range** will automatically be populated. Required if using a Shared VPC network.
### Cluster Pod Address Range
_Mutable: no_
The IP address range assigned to pods in the cluster. Must be a valid CIDR range, e.g. 10.96.0.0/11. If not provided, will be created automatically. Must be provided if using a Shared VPC network. For more information on how to determine the IP address range for your pods, refer to [this section.](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing_secondary_range_pods)
### Services Secondary Range Name
_Mutable: no_
The name of an existing secondary range for service IP addresses. If selected, **Service Address Range** will be automatically populated. Required if using a Shared VPC network.
### Service Address Range
_Mutable: no_
The address range assigned to the services in the cluster. Must be a valid CIDR range, e.g. 10.94.0.0/18. If not provided, will be created automatically. Must be provided if using a Shared VPC network. For more information on how to determine the IP address range for your services, refer to [this section.](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing_secondary_range_svcs)
### Private Cluster
_Mutable: no_
> Warning: private clusters require additional planning and configuration outside of Rancher. Refer to the [private cluster guide](gke-private-clusters.md).
Assign nodes only internal IP addresses. Private cluster nodes cannot access the public internet unless additional networking steps are taken in GCP.
### Enable Private Endpoint
> Warning: private clusters require additional planning and configuration outside of Rancher. Refer to the [private cluster guide](gke-private-clusters.md).
_Mutable: no_
Locks down external access to the control plane endpoint. Only available if **Private Cluster** is also selected. If selected, and if Rancher does not have direct access to the Virtual Private Cloud network the cluster is running in, Rancher will provide a registration command to run on the cluster to enable Rancher to connect to it.
### Master IPV4 CIDR Block
_Mutable: no_
The IP range for the control plane VPC.
### Master Authorized Network
_Mutable: yes_
Enable control plane authorized networks to block untrusted non-GCP source IPs from accessing the Kubernetes master through HTTPS. If selected, additional authorized networks may be added. If the cluster is created with a public endpoint, this option is useful for locking down access to the public endpoint to only certain networks, such as the network where your Rancher service is running. If the cluster only has a private endpoint, this setting is required.
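For comparison, the equivalent configuration on a GKE cluster managed outside of Rancher can be sketched with `gcloud`; the cluster name and CIDR below are placeholders:

```bash
# Allow control plane access only from a trusted network, such as the
# network where Rancher runs (placeholders: cluster name and CIDR).
gcloud container clusters update my-cluster \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/24
```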
## Additional Options
### Cluster Addons
Additional Kubernetes cluster components. For more information, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters#Cluster.AddonsConfig)
#### Horizontal Pod Autoscaling
_Mutable: yes_
The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or decreasing the number of Pods in response to the workload's CPU or memory consumption, or in response to custom metrics reported from within Kubernetes or external metrics from sources outside of your cluster. For more information, see [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler)
#### HTTP (L7) Load Balancing
_Mutable: yes_
HTTP (L7) Load Balancing distributes HTTP and HTTPS traffic to backends hosted on GKE. For more information, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer)
#### Network Policy Config (master only)
_Mutable: yes_
Configuration for NetworkPolicy. This only tracks whether the addon is enabled or not on the master, it does not track whether network policy is enabled for the nodes.
### Cluster Features (Alpha Features)
_Mutable: no_
Turns on all Kubernetes alpha API groups and features for the cluster. When enabled, the cluster cannot be upgraded and will be deleted automatically after 30 days. Alpha clusters are not recommended for production use as they are not covered by the GKE SLA. For more information, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/alpha-clusters)
### Logging Service
_Mutable: yes_
The logging service the cluster uses to write logs. Use either [Cloud Logging](https://cloud.google.com/logging) or no logging service, in which case no logs are exported from the cluster.
### Monitoring Service
_Mutable: yes_
The monitoring service the cluster uses to write metrics. Use either [Cloud Monitoring](https://cloud.google.com/monitoring) or no monitoring service, in which case no metrics are exported from the cluster.
### Maintenance Window
_Mutable: yes_
Set the start time for a 4 hour maintenance window. The time is specified in the UTC time zone using the HH:MM format. For more information, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/maintenance-windows-and-exclusions)
## Node Pools
In this section, enter details describing the configuration of each node in the node pool.
### Kubernetes Version
_Mutable: yes_
The Kubernetes version for each node in the node pool. For more information on GKE Kubernetes versions, refer to [these docs.](https://cloud.google.com/kubernetes-engine/versioning)
### Image Type
_Mutable: yes_
The node operating system image. For more information for the node image options that GKE offers for each OS, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#available_node_images)
> Note: the default option is "Container-Optimized OS with Docker". The read-only filesystem on GCP's Container-Optimized OS is not compatible with the [legacy logging](https://github.com/rancher/rancher-docs/tree/main/archived_docs/en/version-2.0-2.4/explanations/integrations-in-rancher/cluster-logging/cluster-logging.md) implementation in Rancher. If you need to use the legacy logging feature, select "Ubuntu with Docker" or "Ubuntu with Containerd". The [logging feature as of v2.5](../../../../explanations/integrations-in-rancher/logging/logging.md) is compatible with the Container-Optimized OS image.
> Note: if selecting "Windows Long Term Service Channel" or "Windows Semi-Annual Channel" for the node pool image type, you must also add at least one Container-Optimized OS or Ubuntu node pool.
### Machine Type
_Mutable: no_
The virtualized hardware resources available to node instances. For more information on Google Cloud machine types, refer to [this page.](https://cloud.google.com/compute/docs/machine-types#machine_types)
### Root Disk Type
_Mutable: no_
Standard persistent disks are backed by standard hard disk drives (HDD), while SSD persistent disks are backed by solid state drives (SSD). For more information, refer to [this section.](https://cloud.google.com/compute/docs/disks)
### Local SSD Disks
_Mutable: no_
Configure each node's local SSD disk storage in GB. Local SSDs are physically attached to the server that hosts your VM instance. Local SSDs have higher throughput and lower latency than standard persistent disks or SSD persistent disks. The data that you store on a local SSD persists only until the instance is stopped or deleted. For more information, see [this section.](https://cloud.google.com/compute/docs/disks#localssds)
### Preemptible nodes (beta)
_Mutable: no_
Preemptible nodes, also called preemptible VMs, are Compute Engine VM instances that last a maximum of 24 hours in general, and provide no availability guarantees. For more information, see [this page.](https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms)
### Taints
_Mutable: no_
When you apply a taint to a node, only Pods that tolerate the taint are allowed to run on the node. In a GKE cluster, you can apply a taint to a node pool, which applies the taint to all nodes in the pool.
### Node Labels
_Mutable: no_
You can apply labels to the node pool, which applies the labels to all nodes in the pool.
Invalid labels can prevent upgrades or can prevent Rancher from starting. For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)
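For comparison, creating a node pool with a taint and a label outside of Rancher can be sketched with `gcloud` (all names below are placeholders):

```bash
# New node pool with one taint and one label applied to all of its nodes
# (placeholders: pool, cluster, taint, and label values).
gcloud container node-pools create pool-1 \
  --cluster my-cluster \
  --node-taints dedicated=gpu:NoSchedule \
  --node-labels environment=dev
```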
## Group Details
In this section, enter details describing the node pool.
### Name
_Mutable: no_
Enter a name for the node pool.
### Initial Node Count
_Mutable: yes_
Integer for the starting number of nodes in the node pool.
### Max Pod Per Node
_Mutable: no_
The maximum number of pods that can run on each node. GKE has a hard limit of 110 Pods per node. For more information on the Kubernetes limits, see [this section.](https://cloud.google.com/kubernetes-engine/docs/best-practices/scalability#dimension_limits)
### Autoscaling
_Mutable: yes_
Node pool autoscaling dynamically creates or deletes nodes based on the demands of your workload. For more information, see [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler)
### Auto Repair
_Mutable: yes_
GKE's node auto-repair feature helps you keep the nodes in your cluster in a healthy, running state. When enabled, GKE makes periodic checks on the health state of each node in your cluster. If a node fails consecutive health checks over an extended time period, GKE initiates a repair process for that node. For more information, see the section on [auto-repairing nodes.](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair)
### Auto Upgrade
_Mutable: yes_
When enabled, the auto-upgrade feature keeps the nodes in your cluster up-to-date with the cluster control plane (master) version when your control plane is [updated on your behalf.](https://cloud.google.com/kubernetes-engine/upgrades#automatic_cp_upgrades) For more information about auto-upgrading nodes, see [this page.](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades)
### Access Scopes
_Mutable: no_
Access scopes are the legacy method of specifying permissions for your nodes.
- **Allow default access:** The default access for new clusters is the [Compute Engine default service account.](https://cloud.google.com/compute/docs/access/service-accounts?hl=en_US#default_service_account)
- **Allow full access to all Cloud APIs:** Generally, you can just set the cloud-platform access scope to allow full access to all Cloud APIs, then grant the service account only relevant IAM roles. The combination of access scopes granted to the virtual machine instance and the IAM roles granted to the service account determines the amount of access the service account has for that instance.
- **Set access for each API:** Alternatively, you can choose to set specific scopes that permit access to the particular API methods that the service will call.
For more information, see the [section about enabling service accounts for a VM.](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances)
### Configuring the Refresh Interval
The refresh interval can be configured through the `gke-refresh` setting, which is an integer representing seconds.
The default value is 300 seconds.
The syncing interval can be changed by running `kubectl edit setting gke-refresh`.
The shorter the refresh window, the less likely any race conditions will occur, but it does increase the likelihood of encountering request limits that may be in place for GCP APIs.
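As with the EKS refresh setting, a non-interactive sketch (assuming the setting stores the interval in its `value` field):

```bash
# Set the GKE sync interval to 120 seconds.
kubectl patch setting gke-refresh --type merge -p '{"value": "120"}'
```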
</TabItem>
<TabItem value="Rancher before v2.5.8">
## Labels & Annotations
Add Kubernetes [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) to the cluster.
Invalid labels can prevent upgrades or can prevent Rancher from starting. For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)
## Kubernetes Options
### Location Type
Zonal or Regional. With GKE, you can create a cluster tailored to the availability requirements of your workload and your budget. By default, a cluster's nodes run in a single compute zone. When multiple zones are selected, the cluster's nodes will span multiple compute zones, while the control plane is located in a single zone. Regional clusters increase the availability of the control plane as well. For help choosing the type of cluster availability, refer to [these docs.](https://cloud.google.com/kubernetes-engine/docs/best-practices/scalability#choosing_a_regional_or_zonal_control_plane)
For [regional clusters,](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#regional_clusters) you can select a region. For more information about available regions and zones, refer to [this section](https://cloud.google.com/compute/docs/regions-zones#available). The first part of each zone name is the name of the region.
The location type can't be changed after the cluster is created.
### Zone
Each region in Compute Engine contains a number of zones.
For more information about available regions and zones, refer to [these docs.](https://cloud.google.com/compute/docs/regions-zones#available)
### Additional Zones
For zonal clusters, you can select additional zones to create a [multi-zone cluster.](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#multi-zonal_clusters)
### Kubernetes Version
For more information on GKE Kubernetes versions, refer to [these docs.](https://cloud.google.com/kubernetes-engine/versioning)
### Container Address Range
The IP address range for pods in the cluster. Must be a valid CIDR range, e.g. 10.42.0.0/16. If not specified, a random range is automatically chosen from 10.0.0.0/8 and will exclude ranges already allocated to VMs, other clusters, or routes. Automatically chosen ranges may conflict with reserved IP addresses, dynamic routes, or routes within VPCs peering with the cluster.
### Alpha Features
Turns on all Kubernetes alpha API groups and features for the cluster. When enabled, the cluster cannot be upgraded and will be deleted automatically after 30 days. Alpha clusters are not recommended for production use as they are not covered by the GKE SLA. For more information, refer to [this page](https://cloud.google.com/kubernetes-engine/docs/concepts/alpha-clusters).
### Legacy Authorization
This option is deprecated and it is recommended to leave it disabled. For more information, see [this page.](https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster#leave_abac_disabled)
### Stackdriver Logging
Enable logging with Google Cloud's Operations Suite, formerly called Stackdriver. For details, see the [documentation.](https://cloud.google.com/logging/docs/basic-concepts)
### Stackdriver Monitoring
Enable monitoring with Google Cloud's Operations Suite, formerly called Stackdriver. For details, see the [documentation.](https://cloud.google.com/monitoring/docs/monitoring-overview)
### Kubernetes Dashboard
Enable the [Kubernetes dashboard add-on.](https://cloud.google.com/kubernetes-engine/docs/concepts/dashboards#kubernetes_dashboard) Starting with GKE v1.15, you will no longer be able to enable the Kubernetes Dashboard by using the add-on API.
### Http Load Balancing
Set up [HTTP(S) load balancing.](https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer) To use Ingress, you must have the HTTP(S) Load Balancing add-on enabled.
### Horizontal Pod Autoscaling
The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or decreasing the number of Pods in response to the workload's CPU or memory consumption, or in response to custom metrics reported from within Kubernetes or external metrics from sources outside of your cluster. For more information, see the [documentation.](https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler)
### Maintenance Window
Set the start time for a 4 hour maintenance window. The time is specified in the UTC time zone using the HH:MM format. For more information, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/maintenance-windows-and-exclusions)
### Network
The Compute Engine Network that the cluster connects to. Routes and firewalls will be created using this network. If using [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc), the VPC networks that are shared with your project will appear here and will be available to select in this field. For more information, refer to [this page](https://cloud.google.com/vpc/docs/vpc#vpc_networks_and_subnets).
### Node Subnet / Subnet
The Compute Engine subnetwork that the cluster connects to. This subnetwork must belong to the network specified in the **Network** field. Select an existing subnetwork, or select "Auto Create Subnetwork" to have one automatically created. If not using an existing network, **Subnetwork Name** is required to generate one. If using [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc), the VPC subnets that are shared to your project will appear here. If using a Shared VPC network, you cannot select "Auto Create Subnetwork". For more information, refer to [this page.](https://cloud.google.com/vpc/docs/vpc#vpc_networks_and_subnets)
### Ip Aliases
Enable [alias IPs](https://cloud.google.com/vpc/docs/alias-ip). This enables VPC-native traffic routing. Required if using [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc).
### Pod address range
When you create a VPC-native cluster, you specify a subnet in a VPC network. The cluster uses three unique subnet IP address ranges for nodes, pods, and services. For more information on IP address ranges, see [this section.](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing)
### Service address range
When you create a VPC-native cluster, you specify a subnet in a VPC network. The cluster uses three unique subnet IP address ranges for nodes, pods, and services. For more information on IP address ranges, see [this section.](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing)
### Cluster Labels
A [cluster label](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-managing-labels) is a key-value pair that helps you organize your Google Cloud clusters. You can attach a label to each resource, then filter the resources based on their labels. Information about labels is forwarded to the billing system, so you can break down your billing charges by label.
## Node Options
### Node Count
Integer for the starting number of nodes in the node pool.
### Machine Type
For more information on Google Cloud machine types, refer to [this page.](https://cloud.google.com/compute/docs/machine-types#machine_types)
### Image Type
Ubuntu or Container-Optimized OS images are available.
For more information about GKE node image options, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#available_node_images)
### Root Disk Type
Standard persistent disks are backed by standard hard disk drives (HDD), while SSD persistent disks are backed by solid state drives (SSD). For more information, refer to [this section.](https://cloud.google.com/compute/docs/disks)
### Root Disk Size
The size in GB of the [root disk.](https://cloud.google.com/compute/docs/disks)
### Local SSD disks
Configure each node's local SSD disk storage in GB.
Local SSDs are physically attached to the server that hosts your VM instance. Local SSDs have higher throughput and lower latency than standard persistent disks or SSD persistent disks. The data that you store on a local SSD persists only until the instance is stopped or deleted. For more information, see [this section.](https://cloud.google.com/compute/docs/disks#localssds)
### Preemptible nodes (beta)
Preemptible nodes, also called preemptible VMs, are Compute Engine VM instances that last a maximum of 24 hours in general, and provide no availability guarantees. For more information, see [this page.](https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms)
### Auto Upgrade
> Note: Enabling the Auto Upgrade feature for Nodes is not recommended.
When enabled, the auto-upgrade feature keeps the nodes in your cluster up-to-date with the cluster control plane (master) version when your control plane is [updated on your behalf.](https://cloud.google.com/kubernetes-engine/upgrades#automatic_cp_upgrades) For more information about auto-upgrading nodes, see [this page.](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades)
### Auto Repair
GKE's node auto-repair feature helps you keep the nodes in your cluster in a healthy, running state. When enabled, GKE makes periodic checks on the health state of each node in your cluster. If a node fails consecutive health checks over an extended time period, GKE initiates a repair process for that node. For more information, see the section on [auto-repairing nodes.](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair)
### Node Pool Autoscaling
Enable node pool autoscaling based on cluster load. For more information, see the documentation on [adding a node pool with autoscaling.](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler#adding_a_node_pool_with_autoscaling)
### Taints
When you apply a taint to a node, only Pods that tolerate the taint are allowed to run on the node. In a GKE cluster, you can apply a taint to a node pool, which applies the taint to all nodes in the pool.
### Node Labels
You can apply labels to the node pool, which applies the labels to all nodes in the pool.
Invalid labels can prevent upgrades or can prevent Rancher from starting. For details on label syntax requirements, see the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set)
## Security Options
### Service Account
Create a [Service Account](https://console.cloud.google.com/projectselector/iam-admin/serviceaccounts) with a JSON private key and provide the JSON here. See [Google Cloud docs](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances) for more info about creating a service account. These IAM roles are required: Compute Viewer (`roles/compute.viewer`), (Project) Viewer (`roles/viewer`), Kubernetes Engine Admin (`roles/container.admin`), Service Account User (`roles/iam.serviceAccountUser`). More info on roles can be found [here.](https://cloud.google.com/kubernetes-engine/docs/how-to/iam-integration)
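A `gcloud` sketch of creating such a service account, granting the roles listed above, and downloading a JSON key (the account and project names are placeholders):

```bash
# Create the service account, bind the required roles, and export a key
# (placeholders: service account name and project ID).
gcloud iam service-accounts create rancher-gke --display-name "Rancher GKE"
for role in roles/compute.viewer roles/viewer roles/container.admin roles/iam.serviceAccountUser; do
  gcloud projects add-iam-policy-binding my-project \
    --member "serviceAccount:rancher-gke@my-project.iam.gserviceaccount.com" \
    --role "$role"
done
gcloud iam service-accounts keys create key.json \
  --iam-account rancher-gke@my-project.iam.gserviceaccount.com
```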
### Access Scopes
Access scopes are the legacy method of specifying permissions for your nodes.
- **Allow default access:** The default access for new clusters is the [Compute Engine default service account.](https://cloud.google.com/compute/docs/access/service-accounts?hl=en_US#default_service_account)
- **Allow full access to all Cloud APIs:** Generally, you can just set the cloud-platform access scope to allow full access to all Cloud APIs, then grant the service account only relevant IAM roles. The combination of access scopes granted to the virtual machine instance and the IAM roles granted to the service account determines the amount of access the service account has for that instance.
- **Set access for each API:** Alternatively, you can choose to set specific scopes that permit access to the particular API methods that the service will call.
For more information, see the [section about enabling service accounts for a VM.](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances)
</TabItem>
</Tabs>
@@ -0,0 +1,47 @@
---
title: Private Clusters
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/gke-cluster-configuration/gke-private-clusters"/>
</head>
_Available as of v2.5.8_
In GKE, [private clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept) are clusters whose nodes are isolated from inbound and outbound traffic by assigning them internal IP addresses only. Private clusters in GKE have the option of exposing the control plane endpoint as a publicly accessible address or as a private address. This is different from other Kubernetes providers, which may refer to clusters with private control plane endpoints as "private clusters" but still allow traffic to and from nodes. You may want to create a cluster with private nodes, with or without a public control plane endpoint, depending on your organization's networking and security requirements. A GKE cluster provisioned from Rancher can use isolated nodes by selecting "Private Cluster" in the Cluster Options (under "Show advanced options"). The control plane endpoint can optionally be made private by selecting "Enable Private Endpoint".
### Private Nodes
Because the nodes in a private cluster only have internal IP addresses, they will not be able to install the cluster agent and Rancher will not be able to fully manage the cluster. This can be overcome in a few ways.
#### Cloud NAT
>**Note**
>Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing).
If restricting outgoing internet access is not a concern for your organization, use Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service to allow nodes in the private network to access the internet, enabling them to download the required images from Docker Hub and contact the Rancher management server. This is the simplest solution.
#### Private registry
>**Note**
>This scenario is not officially supported, but is described for cases in which using the Cloud NAT service is not sufficient.
If restricting both incoming and outgoing traffic to nodes is a requirement, follow the air-gapped installation instructions to set up a private container image [registry](https://rancher.com/docs/rancher/v2.5/en/installation/other-installation-methods/air-gap/) on the VPC where the cluster is going to be, allowing the cluster nodes to access and download the images they need to run the cluster agent. If the control plane endpoint is also private, Rancher will need [direct access](#direct-access) to it.
### Private Control Plane Endpoint
If the cluster has a public endpoint exposed, Rancher will be able to reach the cluster, and no additional steps need to be taken. However, if the cluster has no public endpoint, then considerations must be made to ensure Rancher can access the cluster.
#### Cloud NAT
>**Note**
>Cloud NAT will [incur charges](https://cloud.google.com/nat/pricing).
As above, if restricting outgoing internet access from the nodes is not a concern, Google's [Cloud NAT](https://cloud.google.com/nat/docs/using-nat) service can be used to allow the nodes to access the internet. While the cluster is provisioning, Rancher will provide a registration command to run on the cluster. Download the [kubeconfig](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl) for the new cluster and run the provided kubectl command on the cluster. You can gain access to the cluster to run this command by creating a temporary node or using an existing node in the VPC, or by logging on to or creating an SSH tunnel through one of the cluster nodes.
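As a sketch, assuming the `gcloud` CLI is available on a machine with access to the VPC, and using placeholder cluster and server names (the actual registration command is generated and displayed by Rancher):

```
# Fetch a kubeconfig for the new cluster (placeholder cluster name and zone)
gcloud container clusters get-credentials my-private-cluster --zone us-central1-a

# Run the registration command displayed by Rancher, which is typically a
# kubectl apply of a manifest served by your Rancher installation, e.g.:
kubectl apply -f https://<rancher-server>/v3/import/<registration-token>.yaml
```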
#### Direct access
If the Rancher server is run on the same VPC as the cluster's control plane, it will have direct access to the control plane's private endpoint. The cluster nodes will need to have access to a [private registry](#private-registry) to download images as described above.
You can also use services from Google such as [Cloud VPN](https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview) or [Cloud Interconnect VLAN](https://cloud.google.com/network-connectivity/docs/interconnect) to facilitate connectivity between your organization's network and your Google VPC.
@@ -0,0 +1,14 @@
---
title: Rancher Server Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration"/>
</head>
- [RKE1 Cluster Configuration](rke1-cluster-configuration.md)
- [EKS Cluster Configuration](eks-cluster-configuration.md)
- [GKE Cluster Configuration](gke-cluster-configuration/gke-cluster-configuration.md)
- [Use Existing Nodes](use-existing-nodes/use-existing-nodes.md)
- [Sync Clusters](sync-clusters.md)
- [RancherD Configuration Reference](rancherd-configuration-reference.md)
@@ -0,0 +1,316 @@
---
title: RancherD Configuration Reference
---
> **Note:** RancherD was an experimental feature available as part of Rancher v2.5.4 through v2.5.10 but is now deprecated and not available for recent releases.
In RancherD, a server node is defined as a machine (bare-metal or virtual) running the `rancherd server` command. The server runs the Kubernetes API as well as Kubernetes workloads.
An agent node is defined as a machine running the `rancherd agent` command. Agent nodes don't run the Kubernetes API. To add nodes designated to run your apps and services, join agent nodes to your cluster.
In the RancherD installation instructions, we recommend running three server nodes in the Rancher server cluster. Agent nodes are not required.
- [Certificates for the Rancher Server](#certificates-for-the-rancher-server)
- [Node Taints](#node-taints)
- [Customizing the RancherD Helm Chart](#customizing-the-rancherd-helm-chart)
- [RancherD Server CLI Options](#rancherd-server-cli-options)
- [RancherD Agent CLI Options](#rancherd-agent-cli-options)
## Certificates for the Rancher Server
RancherD does not use cert-manager to provision certificates. Instead, RancherD allows you to bring your own self-signed or trusted certificates by storing the `.pem` files in `/etc/rancher/ssl/`. When you do this, you should also set the `publicCA` parameter to `true` in your HelmChartConfig. For more information on the HelmChartConfig, refer to the section about [customizing the RancherD Helm chart.](#customizing-the-rancherd-helm-chart)
Private key: `/etc/rancher/ssl/key.pem`
Certificate: `/etc/rancher/ssl/cert.pem`
CA Certificate (self-signed): `/etc/rancher/ssl/cacerts.pem`
Additional CA Certificate: `/etc/ssl/certs/ca-additional.pem`
## Node Taints
By default, server nodes are schedulable, so your workloads can be launched on them. If you wish to have a dedicated control plane where no user workloads run, you can use taints. The `node-taint` parameter allows you to configure nodes with taints. Here is an example of adding a node taint to the `config.yaml`:
```yaml
node-taint:
  - "CriticalAddonsOnly=true:NoExecute"
```
## Customizing the RancherD Helm Chart
Rancher is launched as a [Helm](https://helm.sh/) chart using the cluster's [Helm integration.](https://docs.rke2.io/helm) This means that you can easily customize the application through a manifest file describing your custom parameters.
The RancherD chart provisions Rancher as a daemonset. It exposes host ports `8080/8443`, which map to the container ports (`80/443`), and uses a hostPath volume to mount certificates if needed.
RancherD uses `helm-controller` to bootstrap the RancherD chart. To provide a customized `values.yaml` file, the configuration options must be passed in through the `helm-controller` custom resource definition.
Here is an example of the manifest:
```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
name: rancher
namespace: kube-system
spec:
valuesContent: |
publicCA: true
```
Put this manifest on your host in `/var/lib/rancher/rke2/server/manifests` before running RancherD.
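For example, assuming the manifest above was saved locally as `rancher-config.yaml`:

```
mkdir -p /var/lib/rancher/rke2/server/manifests
cp rancher-config.yaml /var/lib/rancher/rke2/server/manifests/rancher-config.yaml
```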
### Common Options
| Parameter | Default Value | Description |
| ------------------------------ | ----------------------------------------------------- | -------------------------------------------- |
| `addLocal` | "auto" | ***string*** - Have Rancher detect and import the local Rancher server cluster |
| `auditLog.destination` | "sidecar" | ***string*** - Stream to sidecar container console or hostPath volume - *"sidecar, hostPath"* |
| `auditLog.hostPath` | "/var/log/rancher/audit" | ***string*** - log file destination on host (only applies when **auditLog.destination** is set to **hostPath**) |
| `auditLog.level` | 0 | ***int*** - set the [API Audit Log level](https://rancher.com/docs/rancher/v2.5/en/installation/api-auditing). 0 is off. [0-3] |
| `auditLog.maxAge` | 1 | ***int*** - maximum number of days to retain old audit log files (only applies when **auditLog.destination** is set to **hostPath**) |
| `auditLog.maxBackups` | 1 | ***int*** - maximum number of audit log files to retain (only applies when **auditLog.destination** is set to **hostPath**) |
| `auditLog.maxSize` | 100 | ***int*** - maximum size in megabytes of the audit log file before it gets rotated (only applies when **auditLog.destination** is set to **hostPath**) |
| `debug` | false | ***bool*** - set debug flag on rancher server |
| `extraEnv` | [] | ***list*** - set additional environment variables for Rancher |
| `imagePullSecrets` | [] | ***list*** - list of names of Secret resource containing private registry credentials |
| `proxy` | " " | ***string*** - HTTP[S] proxy server for Rancher |
| `noProxy` | "127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16" | ***string*** - comma-separated list of hostnames or IP addresses that should not use the proxy |
| `resources` | {} | ***map*** - rancher pod resource requests & limits |
| `rancherImage` | "rancher/rancher" | ***string*** - rancher image source |
| `rancherImageTag` | same as chart version | ***string*** - rancher/rancher image tag |
| `rancherImagePullPolicy` | "IfNotPresent" | ***string*** - Override imagePullPolicy for rancher server images - *"Always", "Never", "IfNotPresent"* |
| `systemDefaultRegistry` | "" | ***string*** - private registry to be used for all system Docker images, e.g., `http://registry.example.com/` |
| `useBundledSystemChart` | false | ***bool*** - select to use the system-charts packaged with Rancher server. This option is used for air gapped installations. |
| `publicCA` | false | ***bool*** - Set to true if your cert is signed by a public CA |
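As an illustrative sketch, several of these options can be combined in one `HelmChartConfig` manifest; the values shown here are placeholders, not recommendations:

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rancher
  namespace: kube-system
spec:
  valuesContent: |
    publicCA: true
    debug: false
    auditLog:
      destination: hostPath
      hostPath: /var/log/rancher/audit
      level: 1
      maxAge: 7
    proxy: http://proxy.example.com:8080
    systemDefaultRegistry: registry.example.com
```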
## RancherD Server CLI Options
The command to run the Rancher management server is:
```
rancherd server [OPTIONS]
```
It can be run with the following options:
### Config
| Option | Description |
|--------|-------------|
| `--config FILE, -c FILE` | Load configuration from FILE (default: "/etc/rancher/rke2/config.yaml") |
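The options in the tables below can also be set in this configuration file, using the flag name as a YAML key. A minimal sketch, assuming the default file location and placeholder values:

```yaml
# /etc/rancher/rke2/config.yaml
token: my-shared-secret
tls-san:
  - rancher.example.com
node-taint:
  - "CriticalAddonsOnly=true:NoExecute"
```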
### Logging
| Option | Description |
|--------|-------------|
| `--debug` | Turn on debug logs |
### Listener
| Option | Description |
|--------|-------------|
| `--bind-address value` | RancherD bind address (default: 0.0.0.0) |
| `--advertise-address value` | IP address that apiserver uses to advertise to members of the cluster (default: node-external-ip/node-ip) |
| `--tls-san value` | Add hostname or IP as a Subject Alternative Name in the TLS cert |
### Data
| Option | Description |
|--------|-------------|
| `--data-dir value, -d value` | Folder to hold state (default: "/var/lib/rancher/rancherd") |
### Networking
| Option | Description |
|--------|-------------|
| `--cluster-cidr value` | Network CIDR to use for pod IPs (default: "10.42.0.0/16") |
| `--service-cidr value` | Network CIDR to use for services IPs (default: "10.43.0.0/16") |
| `--cluster-dns value` | Cluster IP for coredns service. Should be in your service-cidr range (default: 10.43.0.10) |
| `--cluster-domain value` | Cluster Domain (default: "cluster.local") |
### Cluster
| Option | Description |
|--------|-------------|
| `--token value, -t value` | Shared secret used to join a server or agent to a cluster |
| `--token-file value` | File containing the cluster-secret/token |
### Client
| Option | Description |
|--------|-------------|
| `--write-kubeconfig value, -o value` | Write kubeconfig for admin client to this file |
| `--write-kubeconfig-mode value` | Write kubeconfig with this mode |
### Flags
| Option | Description |
|--------|-------------|
| `--kube-apiserver-arg value` | Customized flag for kube-apiserver process |
| `--kube-scheduler-arg value` | Customized flag for kube-scheduler process |
| `--kube-controller-manager-arg value` | Customized flag for kube-controller-manager process |
### Database
| Option | Description |
|--------|-------------|
| `--etcd-disable-snapshots` | Disable automatic etcd snapshots |
| `--etcd-snapshot-schedule-cron value` | Snapshot interval time in cron spec, e.g., every 5 hours: `0 */5 * * *` (default: "0 */12 * * *") |
| `--etcd-snapshot-retention value` | Number of snapshots to retain (default: 5) |
| `--etcd-snapshot-dir value` | Directory to save db snapshots. (Default location: ${data-dir}/db/snapshots) |
| `--cluster-reset-restore-path value` | Path to snapshot file to be restored |
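For example, a server could be started with more frequent snapshots and a longer retention window (the values here are illustrative):

```
rancherd server \
  --etcd-snapshot-schedule-cron "0 */6 * * *" \
  --etcd-snapshot-retention 10
```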
### System Images Registry
| Option | Description |
|--------|-------------|
| `--system-default-registry value` | Private registry to be used for all system Docker images |
### Components
| Option | Description |
|--------|-------------|
| `--disable value` | Do not deploy packaged components and delete any deployed components (valid items: rancherd-canal, rancherd-coredns, rancherd-ingress, rancherd-kube-proxy, rancherd-metrics-server) |
### Cloud Provider
| Option | Description |
|--------|-------------|
| `--cloud-provider-name value` | Cloud provider name |
| `--cloud-provider-config value` | Cloud provider configuration file path |
### Security
| Option | Description |
|--------|-------------|
| `--profile value` | Validate system configuration against the selected benchmark (valid items: cis-1.5) |
### Agent Node
| Option | Description |
|--------|-------------|
| `--node-name value` | Node name |
| `--node-label value` | Registering and starting kubelet with set of labels |
| `--node-taint value` | Registering kubelet with set of taints |
| `--protect-kernel-defaults` | Kernel tuning behavior. If set, error if kernel tunables are different than kubelet defaults. |
| `--selinux` | Enable SELinux in containerd |
### Agent Runtime
| Option | Description |
|--------|-------------|
| `--container-runtime-endpoint value` | Disable embedded containerd and use alternative CRI implementation |
| `--snapshotter value` | Override default containerd snapshotter (default: "overlayfs") |
| `--private-registry value` | Private registry configuration file (default: "/etc/rancher/rke2/registries.yaml") |
### Agent Networking
| Option | Description |
|--------|-------------|
| `--node-ip value, -i value` | IP address to advertise for node |
| `--resolv-conf value` | Kubelet resolv.conf file |
### Agent Flags
| Option | Description |
|--------|-------------|
| `--kubelet-arg value` | Customized flag for kubelet process |
| `--kube-proxy-arg value` | Customized flag for kube-proxy process |
### Experimental
| Option | Description |
|--------|-------------|
| `--agent-token value` | Shared secret used to join agents to the cluster, but not servers |
| `--agent-token-file value` | File containing the agent secret |
| `--server value, -s value` | Server to connect to, used to join a cluster |
| `--cluster-reset` | Forget all peers and become sole member of a new cluster |
| `--secrets-encryption` | Enable Secret encryption at rest |
## RancherD Agent CLI Options
The following command is used to run the RancherD agent:
```
rancherd agent [OPTIONS]
```
The following options are available.
### Config
| Option | Description |
|--------|-------------|
| `--config FILE, -c FILE` | Load configuration from FILE (default: "/etc/rancher/rke2/config.yaml") |
### Data
| Option | Description |
|--------|-------------|
| `--data-dir value, -d value` | Folder to hold state (default: "/var/lib/rancher/rancherd") |
### Logging
| Option | Description |
|--------|-------------|
| `--debug` | Turn on debug logs |
### Cluster
| Option | Description |
|--------|-------------|
| `--token value, -t value` | Token to use for authentication |
| `--token-file value` | Token file to use for authentication |
| `--server value, -s value` | Server to connect to |
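Putting these together, a hypothetical join command might look like the following sketch; the server URL, port, and token are placeholders for the values from your own server nodes:

```
rancherd agent \
  --server https://rancher-server.example.com:9345 \
  --token <cluster-token>
```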
### Agent Node
| Option | Description |
|--------|-------------|
| `--node-name value` | Node name |
| `--node-label value` | Registering and starting kubelet with set of labels |
| `--node-taint value` | Registering kubelet with set of taints |
| `--selinux` | Enable SELinux in containerd |
| `--protect-kernel-defaults` | Kernel tuning behavior. If set, error if kernel tunables are different than kubelet defaults. |
### Agent Runtime
| Option | Description |
|--------|-------------|
| `--container-runtime-endpoint value` | Disable embedded containerd and use alternative CRI implementation |
| `--snapshotter value` | Override default containerd snapshotter (default: "overlayfs") |
| `--private-registry value` | Private registry configuration file (default: "/etc/rancher/rke2/registries.yaml") |
### Agent Networking
| Option | Description |
|--------|-------------|
| `--node-ip value, -i value` | IP address to advertise for node |
| `--resolv-conf value` | Kubelet resolv.conf file |
### Agent Flags
| Option | Description |
|--------|-------------|
| `--kubelet-arg value` | Customized flag for kubelet process |
| `--kube-proxy-arg value` | Customized flag for kube-proxy process |
### System Images Registry
| Option | Description |
|--------|-------------|
| `--system-default-registry value` | Private registry to be used for all system Docker images |
### Cloud Provider
| Option | Description |
|--------|-------------|
| `--cloud-provider-name value` | Cloud provider name |
| `--cloud-provider-config value` | Cloud provider configuration file path |
### Security
| Option | Description |
|--------|-------------|
| `--profile value` | Validate system configuration against the selected benchmark (valid items: cis-1.5) |
@@ -0,0 +1,82 @@
---
title: RKE Cluster Configuration
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration"/>
</head>
In [clusters launched by RKE](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md), you can edit any of the remaining options that follow.
- [Configuration Options in the Rancher UI](#configuration-options-in-the-rancher-ui)
- [Editing Clusters with YAML](#editing-clusters-with-yaml)
- [Updating ingress-nginx](#updating-ingress-nginx)
## Configuration Options in the Rancher UI
To edit your cluster, open the **Global** view, make sure the **Clusters** tab is selected, and then select **&#8942; > Edit** for the cluster that you want to edit.
Some advanced configuration options are not exposed in the Rancher UI forms, but they can be enabled by editing the RKE cluster configuration file in YAML. For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/)
### Kubernetes Version
The version of Kubernetes installed on each cluster node. For more detail, see [Upgrading Kubernetes](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
### Network Provider
The Container Network Interface (CNI) that powers networking for your cluster.<br/><br/>**Note:** You can only choose this option while provisioning your cluster. It cannot be edited later.
### Project Network Isolation
If your network provider allows project network isolation, you can choose whether to enable or disable inter-project communication.
Before Rancher v2.5.8, project network isolation was only available if you were using the Canal network plugin for RKE.
In v2.5.8+, project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin.
### Nginx Ingress
If you want to publish your applications in a high-availability configuration, and you're hosting your nodes with a cloud provider that doesn't have a native load-balancing feature, enable this option to use Nginx ingress within the cluster.
### Metrics Server Monitoring
Each cloud provider capable of launching a cluster using RKE can collect metrics and monitor your cluster nodes. Enable this option to view your node metrics from your cloud provider's portal.
### Pod Security Policy Support
Enables [pod security policies](../../../how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/create-pod-security-policies.md) for the cluster. After enabling this option, choose a policy using the **Default Pod Security Policy** drop-down.
### Docker version on nodes
Configures whether nodes are allowed to run versions of Docker that Rancher doesn't officially support. If you choose to require a supported Docker version, Rancher will stop pods from running on nodes that don't have a supported Docker version installed.
### Docker Root Directory
The directory on your cluster nodes where you've installed Docker. If you install Docker on your nodes to a non-default directory, update this path.
### Default Pod Security Policy
If you enable **Pod Security Policy Support**, use this drop-down to choose the pod security policy that's applied to the cluster.
### Cloud Provider
If you're using a cloud provider to host cluster nodes launched by RKE, enable [this option](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/set-up-cloud-providers/set-up-cloud-providers.md) so that you can use the cloud provider's native features. If you want to store persistent data for your cloud-hosted cluster, this option is required.
## Editing Clusters with YAML
Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the options available in an RKE installation, except for the `system_images` configuration, by specifying them in YAML.
- To edit an RKE config file directly from the Rancher UI, click **Edit as YAML**.
- To read from an existing RKE file, click **Read from File**.
![image](/img/cluster-options-yaml.png)
For an example of RKE config file syntax, see the [RKE documentation](https://rancher.com/docs/rke/latest/en/example-yamls/).
For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/)
## Updating ingress-nginx
Clusters that were created before Kubernetes 1.16 will have an `ingress-nginx` `updateStrategy` of `OnDelete`. Clusters that were created with Kubernetes 1.16 or newer will have `RollingUpdate`.
If the `updateStrategy` of `ingress-nginx` is `OnDelete`, you will need to delete the ingress-nginx pods so that they are recreated with the correct version for your deployment.
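A sketch of checking the strategy and deleting the pods with kubectl; the daemonset and label names here are assumptions that may vary between versions, so verify them in your cluster first:

```
# Check the current update strategy (resource names may differ)
kubectl -n ingress-nginx get daemonset nginx-ingress-controller \
  -o jsonpath='{.spec.updateStrategy.type}'

# If it prints OnDelete, delete the pods so they are recreated at the new version
kubectl -n ingress-nginx delete pods -l app=ingress-nginx
```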
@@ -0,0 +1,38 @@
---
title: Syncing Hosted Clusters
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/sync-clusters"/>
</head>
Syncing is the feature for EKS and GKE clusters that causes Rancher to update a cluster's values so they are up to date with the corresponding cluster object in the hosted Kubernetes provider. This means Rancher does not have to be the sole owner of a hosted cluster's state. The feature's largest limitation is that processing an update from Rancher and another source at the same time, or within 5 minutes of one finishing, may cause the state from one source to completely overwrite the other.
### How it works
Understanding how syncing works requires understanding two fields on the Rancher Cluster object:
1. The config object for the cluster, located on the Spec of the Cluster:
* For EKS, the field is called EKSConfig
* For GKE, the field is called GKEConfig
2. The UpstreamSpec object
* For EKS, this is located on the EKSStatus field on the Status of the Cluster.
* For GKE, this is located on the GKEStatus field on the Status of the Cluster.
The struct types that define these objects can be found in their corresponding operator projects:
* [eks-operator](https://github.com/rancher/eks-operator/blob/master/pkg/apis/eks.cattle.io/v1/types.go)
* [gke-operator](https://github.com/rancher/gke-operator/blob/master/pkg/apis/gke.cattle.io/v1/types.go)
All fields on this Spec object are nillable, with the exception of the cluster name, the location (region or zone), Imported, and the cloud credential reference.
The EKSConfig or GKEConfig represents desired state for its non-nil values. Fields that are non-nil in the config object can be thought of as "managed". When a cluster is created in Rancher, all fields are non-nil and therefore "managed". When a pre-existing cluster is registered in Rancher, all nillable fields are nil and are not "managed". Those fields become managed once their value has been changed by Rancher.
UpstreamSpec represents the cluster as it is in the hosted Kubernetes provider and is refreshed on an interval of 5 minutes. After the UpstreamSpec has been refreshed, Rancher checks whether the cluster has an update in progress. If it is updating, nothing further is done. If it is not currently updating, any "managed" fields on EKSConfig or GKEConfig are overwritten with their corresponding value from the recently refreshed UpstreamSpec.
The effective desired state can be thought of as the UpstreamSpec + all non-nil fields in the EKSConfig or GKEConfig. This is what is displayed in the UI.
If Rancher and another source attempt to update a cluster at the same time, or within the 5-minute refresh window after an update finishes, any "managed" fields can be caught in a race condition. Using EKS as an example, a cluster may have PrivateAccess as a managed field. If PrivateAccess is false, is then enabled in the EKS console with the change finishing at 11:01, and tags are then updated from Rancher before 11:05, the PrivateAccess value will likely be overwritten. This would also occur if the tags were updated while the cluster was processing the PrivateAccess update. If the cluster was registered and the PrivateAccess field was nil, this issue would not occur in the case described above.
@@ -0,0 +1,57 @@
---
title: Rancher Agent Options
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/rancher-agent-options"/>
</head>
Rancher deploys an agent on each node to communicate with the node. This page describes the options that can be passed to the agent. To use these options, you will need to [create a cluster with custom nodes](use-existing-nodes.md) and add the options to the generated `docker run` command when adding a node.
For an overview of how Rancher communicates with downstream clusters using node agents, refer to the [architecture section.](../../../rancher-manager-architecture/communicating-with-downstream-user-clusters.md#3-node-agents)
## General options
| Parameter | Environment variable | Description |
| ---------- | -------------------- | ----------- |
| `--server` | `CATTLE_SERVER` | The configured Rancher `server-url` setting which the agent connects to |
| `--token` | `CATTLE_TOKEN` | Token that is needed to register the node in Rancher |
| `--ca-checksum` | `CATTLE_CA_CHECKSUM` | The SHA256 checksum of the configured Rancher `cacerts` setting to validate |
| `--node-name` | `CATTLE_NODE_NAME` | Override the hostname that is used to register the node (defaults to `hostname -s`) |
| `--label` | `CATTLE_NODE_LABEL` | Add node labels to the node. For multiple labels, pass additional `--label` options. (`--label key=value`) |
| `--taints` | `CATTLE_NODE_TAINTS` | Add node taints to the node. For multiple taints, pass additional `--taints` options. (`--taints key=value:effect`) |
## Role options
| Parameter | Environment variable | Description |
| ---------- | -------------------- | ----------- |
| `--all-roles` | `ALL=true` | Apply all roles (`etcd`,`controlplane`,`worker`) to the node |
| `--etcd` | `ETCD=true` | Apply the role `etcd` to the node |
| `--controlplane` | `CONTROL=true` | Apply the role `controlplane` to the node |
| `--worker` | `WORKER=true` | Apply the role `worker` to the node |
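For example, a registration command combining general and role options might look like the following sketch. The image tag, server URL, token, and checksum are placeholders for the values in the command that Rancher generates:

```
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<tag> \
  --server https://rancher.example.com --token <registration-token> \
  --ca-checksum <checksum> \
  --worker --label environment=dev --taints dedicated=worker:NoSchedule
```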
## IP address options
| Parameter | Environment variable | Description |
| ---------- | -------------------- | ----------- |
| `--address` | `CATTLE_ADDRESS` | The IP address the node will be registered with (defaults to the IP used to reach `8.8.8.8`) |
| `--internal-address` | `CATTLE_INTERNAL_ADDRESS` | The IP address used for inter-host communication on a private network |
### Dynamic IP address options
For automation purposes, a registration command can't contain a node-specific IP address; it has to be generic enough to be run on every node. For this, there are dynamic IP address options, which are used as the value of an existing IP address option. This is supported for `--address` and `--internal-address`.
| Value | Example | Description |
| ---------- | -------------------- | ----------- |
| Interface name | `--address eth0` | The first configured IP address will be retrieved from the given interface |
| `ipify` | `--address ipify` | Value retrieved from `https://api.ipify.org` will be used |
| `awslocal` | `--address awslocal` | Value retrieved from `http://169.254.169.254/latest/meta-data/local-ipv4` will be used |
| `awspublic` | `--address awspublic` | Value retrieved from `http://169.254.169.254/latest/meta-data/public-ipv4` will be used |
| `doprivate` | `--address doprivate` | Value retrieved from `http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address` will be used |
| `dopublic` | `--address dopublic` | Value retrieved from `http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/address` will be used |
| `azprivate` | `--address azprivate` | Value retrieved from `http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/privateIpAddress?api-version=2017-08-01&format=text` will be used |
| `azpublic` | `--address azpublic` | Value retrieved from `http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/publicIpAddress?api-version=2017-08-01&format=text` will be used |
| `gceinternal` | `--address gceinternal` | Value retrieved from `http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip` will be used |
| `gceexternal` | `--address gceexternal` | Value retrieved from `http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip` will be used |
| `packetlocal` | `--address packetlocal` | Value retrieved from `https://metadata.packet.net/2009-04-04/meta-data/local-ipv4` will be used |
| `packetpublic` | `--address packetpublic` | Value retrieved from `https://metadata.packet.net/2009-04-04/meta-data/public-ipv4` will be used |
@@ -0,0 +1,120 @@
---
title: Launching Kubernetes on Existing Custom Nodes
description: To create a cluster with custom nodes, you'll need to access servers in your cluster and provision them according to Rancher requirements
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes"/>
</head>
When you create a custom cluster, Rancher uses RKE (the Rancher Kubernetes Engine) to create a Kubernetes cluster on on-prem bare-metal servers, on-prem virtual machines, or on any node hosted by an infrastructure provider.
To use this option, you'll need access to the servers you intend to use in your Kubernetes cluster. Provision each server according to the [requirements](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md), which include some hardware specifications and Docker. After you install Docker on each server, you will also run the command provided in the Rancher UI on each server to turn each one into a Kubernetes node.
This section describes how to set up a custom cluster.
## Creating a Cluster with Custom Nodes
>**Want to use Windows hosts as Kubernetes workers?**
>
>See [Configuring Custom Clusters for Windows](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-windows-clusters/use-windows-clusters.md) before you start.
### 1. Provision a Linux Host
Begin creation of a custom cluster by provisioning a Linux host. Your host can be:
- A cloud-host virtual machine (VM)
- An on-prem VM
- A bare-metal server
If you want to reuse a node from a previous custom cluster, [clean the node](../../../../how-to-guides/advanced-user-guides/manage-clusters/clean-cluster-nodes.md) before using it in a cluster again. If you reuse a node that hasn't been cleaned, cluster provisioning may fail.
Provision the host according to the [installation requirements](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md) and the [checklist for production-ready clusters.](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/checklist-for-production-ready-clusters/checklist-for-production-ready-clusters.md)
### 2. Create the Custom Cluster
Clusters won't begin provisioning until all three node roles (worker, etcd and controlplane) are present.
1. From the **Clusters** page, click **Add Cluster**.
2. Choose **Custom**.
3. Enter a **Cluster Name**.
4. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user.
5. Use **Cluster Options** to choose the version of Kubernetes, which network provider will be used, and whether you want to enable project network isolation. To see more cluster options, click **Show advanced options.**
>**Using Windows nodes as Kubernetes workers?**
>
>- See [Enable the Windows Support Option](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-windows-clusters/use-windows-clusters.md).
>- The only Network Provider available for clusters with Windows support is Flannel.
6. <a id="step-6"></a>Click **Next**.
7. From **Node Role**, choose the roles that you want filled by a cluster node. You must provision at least one node for each role: `etcd`, `worker`, and `controlplane`. All three roles are required for a custom cluster to finish provisioning. For more information on roles, see [this section.](../../../kubernetes-concepts.md#roles-for-nodes-in-kubernetes-clusters)
>**Notes:**
>
>- Using Windows nodes as Kubernetes workers? See [this section](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-windows-clusters/use-windows-clusters.md).
>- Bare-Metal Server Reminder: If you plan on dedicating bare-metal servers to each role, you must provision a bare-metal server for each role (i.e. provision multiple bare-metal servers).
8. <a id="step-8"></a>**Optional**: Click **[Show advanced options](rancher-agent-options.md)** to specify IP address(es) to use when registering the node, override the hostname of the node, or to add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to the node.
9. Copy the command displayed on screen to your clipboard.
10. Log in to your Linux host using your preferred shell, such as PuTTY or a remote terminal connection. Run the command copied to your clipboard.
>**Note:** Repeat steps 7-10 as many times as needed if you want to dedicate specific hosts to specific node roles.
11. When you finish running the command(s) on your Linux host(s), click **Done**.
**Result:**
Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster.
You can access your cluster after its state is updated to **Active.**
**Active** clusters are assigned two Projects:
- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
### 3. Amazon Only: Tag Resources
If you have configured your cluster to use Amazon as **Cloud Provider**, tag your AWS resources with a cluster ID.
[Amazon Documentation: Tagging Your Amazon EC2 Resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html)
>**Note:** You can use Amazon EC2 instances without configuring a cloud provider in Kubernetes. You only have to configure the cloud provider if you want to use specific Kubernetes cloud provider functionality. For more information, see [Kubernetes Cloud Providers](https://github.com/kubernetes/website/blob/release-1.18/content/en/docs/concepts/cluster-administration/cloud-providers.md)
The following resources need to be tagged with a `ClusterID`:
- **Nodes**: All hosts added in Rancher.
- **Subnet**: The subnet used for your cluster
- **Security Group**: The security group used for your cluster.
>**Note:** Do not tag multiple security groups. Tagging multiple groups generates an error when creating an Elastic Load Balancer.
The tag that should be used is:
```
Key=kubernetes.io/cluster/<CLUSTERID>, Value=owned
```
`<CLUSTERID>` can be any string you choose. However, the same string must be used on every resource you tag. Setting the tag value to `owned` informs the cluster that all resources tagged with the `<CLUSTERID>` are owned and managed by this cluster.
If you share resources between clusters, you can change the tag to:
```
Key=kubernetes.io/cluster/<CLUSTERID>, Value=shared
```
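As a sketch using the AWS CLI, where the resource IDs and cluster ID are hypothetical:

```
aws ec2 create-tags \
  --resources i-0123456789abcdef0 subnet-0abc12345678def90 sg-0aa11bb22cc33dd44 \
  --tags Key=kubernetes.io/cluster/my-cluster-id,Value=owned
```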
## Optional Next Steps
After creating your cluster, you can access it through the Rancher UI. As a best practice, we recommend setting up these alternate ways of accessing your cluster:
- **Access your cluster with the kubectl CLI:** Follow [these steps](../../../../how-to-guides/advanced-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#accessing-clusters-with-kubectl-from-your-workstation) to access clusters with kubectl on your workstation. In this case, you will be authenticated through the Rancher server's authentication proxy, and Rancher will then connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI.
- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps](../../../../how-to-guides/advanced-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method so that you can still access the cluster if you can't connect to Rancher.