---
title: Syncing Hosted Clusters
---
Syncing allows Rancher to update cluster values so that they're up to date with the corresponding cluster object hosted in AKS, EKS or GKE. This enables sources other than Rancher to own a hosted cluster’s state.
:::warning
You may accidentally overwrite the state from one source if you simultaneously process an update from another source. This may also occur if you process an update from one source within 5 minutes of finishing an update from another source.
:::
## How it works

Two fields on the Rancher Cluster object are key to understanding how syncing works:
1. The config object for the cluster, located on the `Spec` of the Cluster:
   - For AKS, the field is called `AKSConfig`.
   - For EKS, the field is called `EKSConfig`.
   - For GKE, the field is called `GKEConfig`.
2. The `UpstreamSpec` object:
   - For AKS, this is located on the `AKSStatus` field on the `Status` of the Cluster.
   - For EKS, this is located on the `EKSStatus` field on the `Status` of the Cluster.
   - For GKE, this is located on the `GKEStatus` field on the `Status` of the Cluster.
The struct types that define these objects can be found in their corresponding operator projects.
All fields are nillable, except for the following: the cluster name, the location (region or zone), Imported, and the cloud credential reference.
The `AKSConfig`, `EKSConfig`, or `GKEConfig` object represents the desired state. Nil values are ignored. Fields that are non-nil in the config object can be thought of as managed. When a cluster is created in Rancher, all fields are non-nil and therefore managed. When a pre-existing cluster is registered in Rancher, all nillable fields are set to nil and aren't managed. Those fields become managed once their value has been changed by Rancher.
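To make the managed/unmanaged distinction concrete, here is a minimal sketch in Python, with `None` standing in for nil. The field names (`private_access`, `tags`) are simplified assumptions for illustration, not the actual operator struct fields:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EKSConfigSketch:
    # Non-nillable fields: always present.
    name: str
    region: str
    imported: bool
    # Nillable fields: None means "not managed by Rancher".
    private_access: Optional[bool] = None
    tags: Optional[dict] = None

def managed_fields(cfg: EKSConfigSketch) -> set:
    """Return the names of nillable fields Rancher currently manages."""
    return {f for f in ("private_access", "tags") if getattr(cfg, f) is not None}

# A cluster created in Rancher: every field is set, so everything is managed.
created = EKSConfigSketch(name="demo", region="us-east-1", imported=False,
                          private_access=False, tags={"team": "dev"})

# A pre-existing cluster registered in Rancher: nillable fields start as nil.
registered = EKSConfigSketch(name="legacy", region="us-east-1", imported=True)

# A field becomes managed once Rancher changes it.
registered.tags = {"owner": "ops"}

print(managed_fields(created))     # both nillable fields are managed
print(managed_fields(registered))  # only 'tags' is managed now
```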
UpstreamSpec represents the cluster as it is in the hosted Kubernetes provider. It's refreshed every 5 minutes. After the UpstreamSpec is refreshed, Rancher checks if the cluster has an update in progress. If it's currently updating, nothing further is done. If it is not currently updating, any managed fields on AKSConfig, EKSConfig or GKEConfig are overwritten with their corresponding value from the recently updated UpstreamSpec.
The effective desired state can be thought of as the UpstreamSpec, plus all non-nil fields in the AKSConfig, EKSConfig or GKEConfig. This is what is displayed in the UI.
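The refresh-and-overwrite cycle and the effective desired state described above can be sketched as follows. This is a simplified model, not the actual controller code; it treats the config and upstream spec as flat dictionaries with `None` standing in for nil:

```python
def refresh(config: dict, upstream: dict, updating: bool) -> None:
    """After an UpstreamSpec refresh: if no update is in progress,
    overwrite every managed (non-None) config field with its upstream value."""
    if updating:
        return  # an update is in progress; do nothing further
    for key, value in config.items():
        if value is not None:
            config[key] = upstream[key]

def effective_state(config: dict, upstream: dict) -> dict:
    """Effective desired state: upstream values, overlaid with the
    non-None (managed) fields from the config object."""
    return {k: (config[k] if config.get(k) is not None else v)
            for k, v in upstream.items()}

upstream = {"private_access": True, "tags": {"env": "prod"}}
config = {"private_access": None, "tags": {"env": "dev"}}  # only 'tags' is managed

# What the UI would display: upstream private_access, managed config tags.
print(effective_state(config, upstream))

refresh(config, upstream, updating=False)
print(config)  # the managed 'tags' field now matches upstream
```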
If Rancher and another source attempt to update a cluster at the same time, or within 5 minutes of an update finishing, any managed fields are likely to get caught in a race condition. To use EKS as an example, a cluster may have `PrivateAccess` as a managed field. Suppose `PrivateAccess` is false and is then enabled in the EKS console at 11:01. If tags are updated from Rancher before 11:05, the `PrivateAccess` value is likely to be overwritten. This can also occur if tags are updated while the cluster is still processing the update. The issue described in this example shouldn't occur if the cluster is registered and the `PrivateAccess` field is nil.
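The race in this example can be simulated with the same flat-dictionary model. The timestamps and field names are illustrative only, and the push step is a simplification of what the operator actually sends:

```python
# Both fields are non-None, i.e. managed by Rancher.
config = {"private_access": False, "tags": {"env": "dev"}}
actual_cluster = {"private_access": False, "tags": {"env": "dev"}}

# 11:01 -- PrivateAccess is enabled directly in the EKS console.
actual_cluster["private_access"] = True

# 11:03 -- tags are updated from Rancher before the next upstream refresh.
# Rancher still believes private_access is False and pushes its managed
# fields, reverting the change made in the console.
config["tags"] = {"env": "prod"}
pushed = {k: v for k, v in config.items() if v is not None}
actual_cluster.update(pushed)

print(actual_cluster["private_access"])  # False -- the console change was overwritten
```

Had the cluster been registered rather than created, `private_access` would be `None` (unmanaged), the push would omit it, and the console change would survive.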