mirror of
https://github.com/rancher/rancher-docs.git
synced 2026-04-24 23:35:38 +00:00
Apply copy edits
This commit is contained in:
committed by GitHub
parent e7677c93b9
commit caeabf724e
@@ -88,11 +88,11 @@ This [tutorial](https://aws.amazon.com/blogs/opensource/managing-eks-clusters-ra
 
 ## Minimum EKS Permissions
 
-Documented here is a minimum set of permissions necessary to use all functionality of the EKS driver in Rancher. Additional permissions are required for Rancher to provision the `Service Role` and `VPC` resources. Optionally these resources can be created **before** the cluster creation and will be selectable when defining the cluster configuration.
+These are the minimum set of permissions necessary to access the full functionality of Rancher's EKS driver. You'll need additional permissions for Rancher to provision the `Service Role` and `VPC` resources. If you create these resources **before** you create the cluster, they'll be available when you configure the cluster.
 
 Resource | Description
 ---------|------------
-Service Role | The service role provides Kubernetes the permissions it requires to manage resources on your behalf. Rancher can create the service role with the following [Service Role Permissions](#service-role-permissions).
+Service Role | Provides permissions that allow Kubernetes to manage resources on your behalf. Rancher can create the service role with the following [Service Role Permissions](#service-role-permissions).
 VPC | Provides isolated network resources utilised by EKS and worker nodes. Rancher can create the VPC resources with the following [VPC Permissions](#vpc-permissions).
 
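As context for the hunks below: the permission sets being edited are ordinary IAM policy documents. A minimal sketch of the shape such a policy takes (the two `eks:` actions here are illustrative, not the documented minimum set, and resource targeting uses `*` as the following hunk notes):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:CreateCluster",
        "eks:DescribeCluster"
      ],
      "Resource": "*"
    }
  ]
}
```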
@@ -210,7 +210,7 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b
 
 ### Service Role Permissions
 
-Permissions required for Rancher to create service role on users behalf during the EKS cluster creation process.
+These are permissions that are needed during EKS cluster creation, so Rancher can create a service role on the users' behalf.
 
 ```json
 {
@@ -244,7 +244,7 @@ Permissions required for Rancher to create service role on users behalf during t
 }
 ```
 
-When an EKS cluster is created, Rancher will create a service role with the following trust policy:
+When you create an EKS cluster, Rancher creates a service role with the following trust policy:
 
 ```json
 {
@@ -262,7 +262,7 @@ When an EKS cluster is created, Rancher will create a service role with the foll
 }
 ```
 
-This role will also have two role policy attachments with the following policies ARNs:
+This role also has two role policy attachments with the following policies' ARNs:
 
 ```
 arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
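The trust policy JSON itself is truncated in this hunk. As a reference sketch only (assuming the standard EKS service-role trust relationship, not necessarily the exact document Rancher generates), such a trust policy typically reads:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```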
@@ -271,7 +271,7 @@ arn:aws:iam::aws:policy/AmazonEKSServicePolicy
 
 ### VPC Permissions
 
-Permissions required for Rancher to create VPC and associated resources.
+These are permissions that are needed by Rancher to create a Virtual Private Cloud (VPC) and associated resources.
 
 ```json
 {
 
@@ -2,7 +2,11 @@
 title: Syncing
 ---
 
-Syncing is the feature for AKS, EKS and GKE clusters that causes Rancher to update the clusters' values so they are up to date with their corresponding cluster object in the hosted Kubernetes provider. This enables Rancher to not be the sole owner of a hosted cluster’s state. Its largest limitation is that processing an update from Rancher and another source at the same time or within 5 minutes of one finishing may cause the state from one source to completely overwrite the other.
+Syncing allows Rancher to update cluster values so that they're up to date with the corresponding cluster object hosted in AKS, EKS or GKE. This enables sources other than Rancher to own a hosted cluster’s state.
+
+:::warning
+You may accidentally overwrite the state from one source if you simultaneously process an update from another source. This may also occur if you process an update from one source within 5 minutes of finishing an update from another source.
+:::
 
 ### How it works
 
@@ -26,12 +30,12 @@ The struct types that define these objects can be found in their corresponding o
 * [eks-operator](https://github.com/rancher/eks-operator/blob/master/pkg/apis/eks.cattle.io/v1/types.go)
 * [gke-operator](https://github.com/rancher/gke-operator/blob/master/pkg/apis/gke.cattle.io/v1/types.go)
 
-All fields with the exception of the cluster name, the location (region or zone), Imported, and the cloud credential reference, are nillable on this Spec object.
+All fields are nillable, except for the following: the cluster name, the location (region or zone), Imported, and the cloud credential reference.
 
-The AKSConfig, EKSConfig or GKEConfig represents desired state for its non-nil values. Fields that are non-nil in the config object can be thought of as “managed". When a cluster is created in Rancher, all fields are non-nil and therefore “managed”. When a pre-existing cluster is registered in rancher all nillable fields are nil and are not “managed”. Those fields become managed once their value has been changed by Rancher.
+The AKSConfig, EKSConfig or GKEConfig represents the desired state. Nil values are ignored. Fields that are non-nil in the config object can be thought of as managed. When a cluster is created in Rancher, all fields are non-nil and therefore managed. When a pre-existing cluster is registered in Rancher all nillable fields are set to nil and aren't managed. Those fields become managed once their value has been changed by Rancher.
 
-UpstreamSpec represents the cluster as it is in the hosted Kubernetes provider and is refreshed on an interval of 5 minutes. After the UpstreamSpec has been refreshed, Rancher checks if the cluster has an update in progress. If it is updating, nothing further is done. If it is not currently updating, any “managed” fields on AKSConfig, EKSConfig or GKEConfig are overwritten with their corresponding value from the recently updated UpstreamSpec.
+UpstreamSpec represents the cluster as it is in the hosted Kubernetes provider. It's refreshed every 5 minutes. After the UpstreamSpec is refreshed, Rancher checks if the cluster has an update in progress. If it's currently updating, nothing further is done. If it is not currently updating, any managed fields on AKSConfig, EKSConfig or GKEConfig are overwritten with their corresponding value from the recently updated UpstreamSpec.
 
-The effective desired state can be thought of as the UpstreamSpec + all non-nil fields in the AKSConfig, EKSConfig or GKEConfig. This is what is displayed in the UI.
+The effective desired state can be thought of as the UpstreamSpec, plus all non-nil fields in the AKSConfig, EKSConfig or GKEConfig. This is what is displayed in the UI.
 
-If Rancher and another source attempt to update a cluster at the same time or within the 5 minute refresh window of an update finishing, then it is likely any “managed” fields can be caught in a race condition. To use EKS as an example, a cluster may have PrivateAccess as a managed field. If PrivateAccess is false and then enabled in EKS console, then finishes at 11:01, and then tags are updated from Rancher before 11:05 the value will likely be overwritten. This would also occur if tags were updated while the cluster was processing the update. If the cluster was registered and the PrivateAccess fields was nil then this issue should not occur in the aforementioned case.
+If Rancher and another source attempt to update a cluster at the same time, or within 5 minutes of an update finishing, any managed fields are likely to get caught in a race condition. To use EKS as an example, a cluster may have PrivateAccess as a managed field. If PrivateAccess is false and then enabled in EKS console at 11:01, and tags are updated from Rancher before 11:05, then the value is likely to be overwritten. This can also occur if tags are updated while the cluster is still processing the update. The issue described in this example shouldn't occur if the cluster is registered and the PrivateAccess fields are nil.
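The managed-field behavior described in the edited paragraphs can be sketched in Go (the language the operators are written in). The types and function below are simplified illustrations, not the real eks-operator API: non-nil pointer fields stand in for "managed" fields, and the sync step overwrites them from the refreshed upstream state while leaving nil (unmanaged) fields alone.

```go
package main

import "fmt"

// EKSConfig is a simplified stand-in for the operator's config object.
// A nil pointer/map means the field is not managed by Rancher.
type EKSConfig struct {
	PrivateAccess *bool
	Tags          map[string]string
}

// UpstreamSpec is a simplified stand-in for the cluster state as refreshed
// from the hosted Kubernetes provider every 5 minutes.
type UpstreamSpec struct {
	PrivateAccess bool
	Tags          map[string]string
}

// syncManaged overwrites only the managed (non-nil) config fields with
// their corresponding upstream values, mirroring the refresh step the
// text describes for clusters with no update in progress.
func syncManaged(cfg *EKSConfig, upstream UpstreamSpec) {
	if cfg.PrivateAccess != nil {
		v := upstream.PrivateAccess
		cfg.PrivateAccess = &v
	}
	if cfg.Tags != nil {
		cfg.Tags = upstream.Tags
	}
}

func main() {
	f := false
	// PrivateAccess is managed (non-nil); Tags is unmanaged (nil),
	// as for a registered cluster whose tags were never changed in Rancher.
	cfg := EKSConfig{PrivateAccess: &f}
	up := UpstreamSpec{PrivateAccess: true, Tags: map[string]string{"team": "dev"}}
	syncManaged(&cfg, up)
	fmt.Println(*cfg.PrivateAccess, cfg.Tags == nil) // prints: true true
}
```

This also shows why the race in the example above only bites managed fields: an unmanaged (nil) field is never overwritten by the refresh, whereas a managed one is reset to whatever upstream reported last.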