mirror of
https://github.com/rancher/rancher-docs.git
synced 2026-05-16 10:03:28 +00:00
Merge pull request #3207 from cmurphy/gke-config-ref
Update v2.5.8 GKE config reference
This commit is contained in:
+59
-23
@@ -44,35 +44,63 @@ The IP address range for pods in the cluster. Must be a valid CIDR range, e.g. 1
### Network
The Compute Engine Network that the cluster connects to. Routes and firewalls will be created using this network. If using [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc), the VPC networks that are shared to your project will be available to select in this field. For more information, refer to [this page](https://cloud.google.com/vpc/docs/vpc#vpc_networks_and_subnets).
### Node Subnet / Subnet
The Compute Engine subnetwork that the cluster connects to. This subnetwork must belong to the network specified in the **Network** field. Select an existing subnetwork, or select "Auto Create Subnetwork" to have one automatically created. If not using an existing network, **Subnetwork Name** is required to generate one. If using [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc), the VPC subnets that are shared to your project will appear here. If using a Shared VPC network, you cannot select "Auto Create Subnetwork". For more information, refer to [this page.](https://cloud.google.com/vpc/docs/vpc#vpc_networks_and_subnets)
### Subnetwork Name
Automatically create a subnetwork with the provided name. Required if "Auto Create Subnetwork" is selected for **Node Subnet** or **Subnet**. For more information on subnetworks, refer to [this page.](https://cloud.google.com/vpc/docs/vpc#vpc_networks_and_subnets)
### IP Aliases
Enable [alias IPs](https://cloud.google.com/vpc/docs/alias-ip). This enables VPC-native traffic routing. Required if using [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc).
### Network Policy
Enable network policy enforcement on the cluster. If enabling on an existing cluster, the **Network Policy Config** addon must be enabled first. A network policy defines the level of communication that can occur between pods and services in the cluster. For more information, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy)
### Node IPv4 CIDR Block
The IP address range of the instance IPs in this cluster. Can be set if "Auto Create Subnetwork" is selected for **Node Subnet** or **Subnet**. Must be a valid CIDR range, e.g. 10.96.0.0/14. For more information on how to determine the IP address range, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing)
### Cluster Secondary Range Name
The name of an existing secondary range for Pod IP addresses. If selected, **Cluster Pod Address Range** will automatically be populated. Required if using a Shared VPC network.
### Cluster Pod Address Range
The IP address range assigned to pods in the cluster. Must be a valid CIDR range, e.g. 10.96.0.0/11. If not provided, one will be created automatically. Must be provided if using a Shared VPC network. For more information on how to determine the IP address range for your pods, refer to [this section.](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing_secondary_range_pods)
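As a rough sizing check, Python's standard `ipaddress` module can confirm that a candidate range is valid CIDR and estimate how many nodes it supports, assuming GKE's default of a /24 pod slice per node (the allocation that corresponds to the 110-pods-per-node limit mentioned below). This is an illustrative sketch, not part of Rancher or GKE tooling:

```python
import ipaddress

# The pod range from the example above; any valid CIDR works.
pod_range = ipaddress.ip_network("10.96.0.0/11")

# Assuming GKE carves a /24 out of the pod range for each node,
# the maximum node count this range supports is:
per_node_prefix = 24
max_nodes = 2 ** (per_node_prefix - pod_range.prefixlen)

print(pod_range.num_addresses)  # 2097152 addresses in a /11
print(max_nodes)                # 8192 nodes at a /24 per node
```

A smaller secondary range such as a /16 would, under the same assumption, cap the cluster at 256 nodes, which is why the linked sizing guide is worth consulting before cluster creation.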
### Services Secondary Range Name
The name of an existing secondary range for service IP addresses. If selected, **Service Address Range** will be automatically populated. Required if using a Shared VPC network.
### Service Address Range
The address range assigned to the services in the cluster. Must be a valid CIDR range, e.g. 10.94.0.0/18. If not provided, one will be created automatically. Must be provided if using a Shared VPC network. For more information on how to determine the IP address range for your services, refer to [this section.](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing_secondary_range_svcs)
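The same `ipaddress` module can count the ClusterIPs a service range provides and verify that it does not overlap the pod range; the specific addresses below are simply the examples used on this page, not defaults:

```python
import ipaddress

# The /18 service range from the example above.
svc_range = ipaddress.ip_network("10.94.0.0/18")
print(svc_range.num_addresses)  # 16384 service addresses

# The two secondary ranges must not collide, so an overlap
# check up front is cheap insurance.
pod_range = ipaddress.ip_network("10.96.0.0/11")
print(svc_range.overlaps(pod_range))  # False: 10.94.0.0/18 sits outside 10.96.0.0/11
```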
### Private Cluster
> Warning: private clusters require additional planning and configuration outside of Rancher. Refer to the [private cluster guide]({{< baseurl >}}/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/#private-clusters).
Assign nodes only internal IP addresses. Private cluster nodes cannot access the public internet unless additional networking steps are taken in GCP.
### Enable Private Endpoint
> Warning: private clusters require additional planning and configuration outside of Rancher. Refer to the [private cluster guide]({{< baseurl >}}/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke/#private-clusters).
Locks down external access to the control plane endpoint. Only available if **Private Cluster** is also selected. If selected, and if Rancher does not have direct access to the Virtual Private Cloud network the cluster is running in, Rancher will provide a registration command to run on the cluster to enable Rancher to connect to it.
### Master IPv4 CIDR Block
The IP range for the control plane VPC.
### Master Authorized Network
Enable control plane authorized networks to block untrusted non-GCP source IPs from accessing the Kubernetes master through HTTPS. If selected, additional authorized networks may be added. If the cluster is created with a public endpoint, this option is useful for locking down access to the public endpoint to only certain networks, such as the network where your Rancher service is running. If the cluster only has a private endpoint, this setting is required.
# Additional Options
@@ -86,7 +114,7 @@ The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by a
#### HTTP (L7) Load Balancing
HTTP (L7) Load Balancing distributes HTTP and HTTPS traffic to backends hosted on GKE. For more information, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer)
#### Network Policy Config (master only)
@@ -100,7 +128,6 @@ Turns on all Kubernetes alpha API groups and features for the cluster. When enab
The logging service the cluster uses to write logs. Use either [Cloud Logging](https://cloud.google.com/logging) or no logging service, in which case no logs are exported from the cluster.
### Monitoring Service
The monitoring service the cluster uses to write metrics. Use either [Cloud Monitoring](https://cloud.google.com/monitoring) or no monitoring service, in which case no metrics are exported from the cluster.
@@ -108,22 +135,27 @@ The monitoring service the cluster uses to write metrics. Use either [Cloud Moni
### Maintenance Window
Set the start time for a 4-hour maintenance window. The time is specified in the UTC time zone using the HH:MM format. For more information, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/maintenance-windows-and-exclusions)
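The expected format is easy to check ahead of time. The helper below is a hypothetical illustration (not a Rancher or GKE API) that validates the HH:MM string and reports when the 4-hour window would end:

```python
from datetime import datetime, timedelta

def maintenance_window(start_hhmm: str) -> str:
    """Validate an HH:MM UTC start time and return the window's end time."""
    start = datetime.strptime(start_hhmm, "%H:%M")  # raises ValueError on bad input
    end = start + timedelta(hours=4)
    return end.strftime("%H:%M")

print(maintenance_window("03:00"))  # 07:00
print(maintenance_window("22:30"))  # 02:30 (wraps past midnight)
```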
# Node Pools
In this section, enter details describing the configuration of each node in the node pool.
### Kubernetes Version
The Kubernetes version for each node in the node pool. For more information on GKE Kubernetes versions, refer to [these docs.](https://cloud.google.com/kubernetes-engine/versioning)
### Image Type
The node operating system image. For more information on the node image options that GKE offers for each OS, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#available_node_images)
> Note: the default option is "Container-Optimized OS with Docker". The read-only filesystem on GCP's Container-Optimized OS is not compatible with the [legacy logging]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-admin/tools/logging) implementation in Rancher. If you need to use the legacy logging feature, select "Ubuntu with Docker" or "Ubuntu with Containerd". The [logging feature as of v2.5]({{<baseurl>}}/rancher/v2.5/en/logging) is compatible with the Container-Optimized OS image.
> Note: if selecting "Windows Long Term Service Channel" or "Windows Semi-Annual Channel" for the node pool image type, you must also add at least one Container-Optimized OS or Ubuntu node pool.
### Machine Type
The virtualized hardware resources available to node instances. For more information on Google Cloud machine types, refer to [this page.](https://cloud.google.com/compute/docs/machine-types#machine_types)
### Root Disk Type
@@ -151,18 +183,20 @@ You can apply labels to the node pool, which applies the labels to all nodes in
In this section, enter details describing the node pool.
### Name
Enter a name for the node pool.
### Initial Node Count
Integer for the starting number of nodes in the node pool.
### Max Pods Per Node
GKE has a hard limit of 110 Pods per node. For more information on the Kubernetes limits, see [this section.](https://cloud.google.com/kubernetes-engine/docs/best-practices/scalability#dimension_limits)
### Autoscaling
Node pool autoscaling dynamically creates or deletes nodes based on the demands of your workload. For more information, see [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler)
### Auto Repair
@@ -170,6 +204,8 @@ GKE's node auto-repair feature helps you keep the nodes in your cluster in a hea
### Auto Upgrade
> Note: Enabling the Auto Upgrade feature for Nodes is not recommended.
When enabled, the auto-upgrade feature keeps the nodes in your cluster up-to-date with the cluster control plane (master) version when your control plane is [updated on your behalf.](https://cloud.google.com/kubernetes-engine/upgrades#automatic_cp_upgrades) For more information about auto-upgrading nodes, see [this page.](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades)
### Access Scopes