示例:`http://schemas.xmlsoap.org/claims/Group` |
| Rancher API 主机 | Rancher Server 的 URL。 |
-| 私钥/证书 | 在 Rancher 和你的 AD FS 之间创建安全外壳(SSH)的密钥/证书对。确保将 Common Name (CN) 设置为 Rancher Server URL。
[证书创建命令](#cert-command) |
+| 私钥/证书 | 在 Rancher 和你的 AD FS 之间创建安全外壳(SSH)的密钥/证书对。确保将 Common Name (CN) 设置为 Rancher Server URL。
[证书创建命令](#example-certificate-creation-command) |
| 元数据 XML | 从 AD FS 服务器导出的 `federationmetadata.xml` 文件。
你可以在 `https:///federationmetadata/2007-06/federationmetadata.xml` 找到该文件。 |
-
-
-:::tip
+### Example Certificate Creation Command
你可以使用 openssl 命令生成证书。例如:
```
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com"
```
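生成后,可以检查证书的 Common Name 是否与 Rancher Server URL 一致(示例,假设系统中已安装 openssl):

```shell
# 生成密钥/证书对(同上),然后查看证书 subject 中的 CN
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert \
  -days 365 -nodes -subj "/CN=myservice.example.com"
openssl x509 -in myservice.cert -noout -subject
```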
-
-:::
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-openldap/configure-openldap.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-openldap/configure-openldap.md
index 7594371a296..652e2457f37 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-openldap/configure-openldap.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-openldap/configure-openldap.md
@@ -53,4 +53,4 @@ title: 配置 OpenLDAP
## 附录:故障排除
-如果在测试与 OpenLDAP 服务器的连接时遇到问题,请首先仔细检查为 ServiceAccount 输入的凭证以及搜索库配置。你还可以检查 Rancher 日志来查明问题的原因。调试日志可能包含有关错误的更详细信息。详情请参见[如何启用调试日志](../../../../faq/technical-items.md#how-can-i-enable-debug-logging)。
+如果在测试与 OpenLDAP 服务器的连接时遇到问题,请首先仔细检查为 ServiceAccount 输入的凭证以及搜索库配置。你还可以检查 Rancher 日志来查明问题的原因。调试日志可能包含有关错误的更详细信息。详情请参见[如何启用调试日志](../../../../faq/technical-items.md#如何启用调试日志记录)。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-shibboleth-saml/configure-shibboleth-saml.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-shibboleth-saml/configure-shibboleth-saml.md
index 285a5d3e6aa..fe0c5cde49b 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-shibboleth-saml/configure-shibboleth-saml.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/configure-shibboleth-saml/configure-shibboleth-saml.md
@@ -101,4 +101,4 @@ SAML 协议不支持用户或用户组的搜索或查找。因此,如果你没
## 故障排除
-如果在测试与 OpenLDAP 服务器的连接时遇到问题,请首先仔细检查为 ServiceAccount 输入的凭证以及搜索库配置。你还可以检查 Rancher 日志来查明问题的原因。调试日志可能包含有关错误的更详细信息。详情请参见[如何启用调试日志](../../../../faq/technical-items.md#how-can-i-enable-debug-logging)。
+如果在测试与 OpenLDAP 服务器的连接时遇到问题,请首先仔细检查为 ServiceAccount 输入的凭证以及搜索库配置。你还可以检查 Rancher 日志来查明问题的原因。调试日志可能包含有关错误的更详细信息。详情请参见[如何启用调试日志](../../../../faq/technical-items.md#如何启用调试日志记录)。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md
index 7f7606413f1..7eb2cd0457e 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md
@@ -254,7 +254,7 @@ inheritedClusterRoles:
只有在以下情况下,你才能将全局角色分配给组:
-- 你已设置[外部认证](../authentication-config/authentication-config.md#external-vs-local-authentication)
+- 你已设置[外部认证](../authentication-config/authentication-config.md#外部认证与本地认证)
- 外部认证服务支持[用户组](../authentication-config/manage-users-and-groups.md)
- 你已使用外部认证服务设置了至少一个用户组。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/pod-security-standards.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/pod-security-standards.md
index 1f56b62762f..7c16ac10192 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/pod-security-standards.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/pod-security-standards.md
@@ -15,12 +15,12 @@ PSS 定义了工作负载的安全级别。PSA 描述了 Pod 安全上下文和
必须在删除 PodSecurityPolicy 对象_之前_添加新的策略执行机制。否则,你可能会为集群内的特权升级攻击创造机会。
:::
-### 从 Rancher 维护的应用程序和市场工作负载中删除 PodSecurityPolicies {#remove-psp-rancher-workloads}
+### 从 Rancher 维护的应用程序和市场工作负载中删除 PodSecurityPolicies
Rancher v2.7.2 提供了 Rancher 维护的 Helm Chart 的新主要版本。v102.x.y 允许你删除与以前的 Chart 版本一起安装的 PSP。这个新版本使用标准化的 `global.cattle.psp.enabled` 开关(默认关闭)替换了非标准的 PSP 开关。
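升级到 v102.x.y 时,可以通过 Helm values 设置该标准化开关(示例片段,仅作结构示意;各 Chart 的具体 values 结构请以其自身文档为准):

```yaml
# values.yaml 片段:通过标准化开关控制 PSP(默认为 false,即不再创建 PSP)
global:
  cattle:
    psp:
      enabled: false
```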
你必须在_仍使用 Kubernetes v1.24_ 时执行以下步骤:
-1. 根据需要配置 PSA 控制器。你可以使用 Rancher 的内置 [PSA 配置模板](#psa-config-templates),或创建自定义模板并将其应用于正在迁移的集群。
+1. 根据需要配置 PSA 控制器。你可以使用 Rancher 的内置 [PSA 配置模板](#pod-安全准入配置模板),或创建自定义模板并将其应用于正在迁移的集群。
1. 将活动的 PSP 映射到 Pod 安全标准:
1. 查看集群中哪些 PSP 仍处于活动状态:
@@ -108,14 +108,14 @@ Helm 尝试在集群中查询存储在先前版本的数据 blob 中的对象时
#### 将 Chart 升级到支持 Kubernetes v1.25 的版本
-清理了具有 PSP 的所有版本后,你就可以继续升级了。对于 Rancher 维护的工作负载,请按照本文档[从 Rancher 维护的应用程序和市场工作负载中删除 PodSecurityPolicies](#remove-psp-rancher-workloads) 部分中的步骤进行操作。
+清理了具有 PSP 的所有版本后,你就可以继续升级了。对于 Rancher 维护的工作负载,请按照本文档[从 Rancher 维护的应用程序和市场工作负载中删除 PodSecurityPolicies](#从-rancher-维护的应用程序和市场工作负载中删除-podsecuritypolicies) 部分中的步骤进行操作。
如果工作负载不是由 Rancher 维护的,请参阅对应的提供商的文档。
:::caution
不要跳过此步骤。与 Kubernetes v1.25 不兼容的应用程序不能保证在清理后正常工作。
:::
-## Pod 安全准入配置模板 {#psa-config-templates}
+## Pod 安全准入配置模板
Rancher 提供了 PSA 配置模板。它们是可以应用到集群的预定义安全配置。Rancher 管理员(或具有权限的人员)可以[创建、管理和编辑](./psa-config-templates.md) PSA 模板。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher.md
index cf023f56e0a..f5cf0dee5d1 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher.md
@@ -36,7 +36,7 @@ Rancher 必须是 2.5.0 或更高版本。
:::note
-使用 `backup-restore` operator 执行恢复后,Fleet 中会出现一个已知问题:用于 `clientSecretName` 和 `helmSecretName` 的密文不包含在 Fleet 的 Git 仓库中。请参见[此处](../deploy-apps-across-clusters/fleet.md#故障排除)获得解决方法。
+使用 `backup-restore` operator 执行恢复后,Fleet 中会出现一个已知问题:用于 `clientSecretName` 和 `helmSecretName` 的密文不包含在 Fleet 的 Git 仓库中。请参见[此处](../../../integrations-in-rancher/fleet/overview.md#故障排除)获得解决方法。
:::
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md
index 44c89d5068d..be271903dfa 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md
@@ -45,11 +45,11 @@ import ClusterCapabilitiesTable from '../../../shared-files/\_cluster-capabiliti
Rancher 可以在亚马逊 EC2、DigitalOcean、Azure 或 vSphere 等基础设施提供商中动态调配节点,然后在这些节点上安装 Kubernetes。
-使用 Rancher,你可以基于[节点模板](../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md#node-templates)创建节点池。该模板定义了用于在云提供商中启动节点的参数。
+使用 Rancher,你可以基于[节点模板](../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md#节点模板)创建节点池。该模板定义了用于在云提供商中启动节点的参数。
使用基础设施提供商托管的节点的一个好处是,如果某个节点失去了与集群的连接,Rancher 可以自动替换掉它,从而保持预期的集群配置。
-可用于创建节点模板的云提供商是由 Rancher UI 中激活的[节点驱动程序](../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md#node-drivers)决定的。
+可用于创建节点模板的云提供商是由 Rancher UI 中激活的[节点驱动程序](../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md#主机驱动)决定的。
有关详细信息,请参阅[基础设施提供商托管的节点](../launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md)部分。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md
index 3c31569bf95..2b432553db9 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/amazon.md
@@ -158,3 +158,571 @@ weight: 1
### 使用 Amazon Elastic Container Registry (ECR)
在将[创建 IAM 角色并附加到实例](#1-创建-iam-角色并附加到实例)中的 IAM 配置文件附加到实例时,kubelet 组件能够自动获取 ECR 凭证。使用低于 v1.15.0 的 Kubernetes 版本时,需要在集群中配置 Amazon 云提供商。从 Kubernetes 版本 v1.15.0 开始,kubelet 无需在集群中配置 Amazon 云提供商即可获取 ECR 凭证。
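kubelet 能够获取 ECR 凭证后,工作负载即可直接引用 ECR 镜像地址,无需配置 imagePullSecrets(示例;账号 ID `123456789012`、区域与仓库名均为假设的占位符):

```yaml
# Pod spec 片段:镜像地址遵循 <账号ID>.dkr.ecr.<区域>.amazonaws.com/<仓库>:<标签> 格式
spec:
  containers:
    - name: myapp
      image: 123456789012.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
```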
+
+### Using the Out-of-Tree AWS Cloud Provider
+
+
+
+
+1. [Node name conventions and other prerequisites](https://cloud-provider-aws.sigs.k8s.io/prerequisites/) must be followed for the cloud provider to find the instance correctly.
+
+2. Rancher-managed RKE2/K3s clusters don't support configuring `providerID`. However, the engine sets the node name correctly if the following configuration is set on the provisioning cluster object:
+
+```yaml
+spec:
+ rkeConfig:
+ machineGlobalConfig:
+ cloud-provider-name: aws
+```
+
+This option will be passed to the configuration of the various Kubernetes components that run on the node, and must be overridden per component to prevent the in-tree provider from running unintentionally:
+
+
+**Override on Etcd:**
+
+```yaml
+spec:
+ rkeConfig:
+ machineSelectorConfig:
+ - config:
+ kubelet-arg:
+ - cloud-provider=external
+ machineLabelSelector:
+ matchExpressions:
+ - key: rke.cattle.io/etcd-role
+ operator: In
+ values:
+ - 'true'
+```
+
+**Override on Control Plane:**
+
+```yaml
+spec:
+ rkeConfig:
+ machineSelectorConfig:
+ - config:
+ disable-cloud-controller: true
+ kube-apiserver-arg:
+ - cloud-provider=external
+ kube-controller-manager-arg:
+ - cloud-provider=external
+ kubelet-arg:
+ - cloud-provider=external
+ machineLabelSelector:
+ matchExpressions:
+ - key: rke.cattle.io/control-plane-role
+ operator: In
+ values:
+ - 'true'
+```
+
+**Override on Worker:**
+
+```yaml
+spec:
+ rkeConfig:
+ machineSelectorConfig:
+ - config:
+ kubelet-arg:
+ - cloud-provider=external
+ machineLabelSelector:
+ matchExpressions:
+ - key: rke.cattle.io/worker-role
+ operator: In
+ values:
+ - 'true'
+```
+
+3. Select `Amazon` if relying on the above mechanism to set the provider ID. Otherwise, select the **External (out-of-tree)** cloud provider, which sets `--cloud-provider=external` for Kubernetes components.
+
+4. Specify the `aws-cloud-controller-manager` Helm chart as an additional manifest to install:
+
+```yaml
+spec:
+ rkeConfig:
+ additionalManifest: |-
+ apiVersion: helm.cattle.io/v1
+ kind: HelmChart
+ metadata:
+ name: aws-cloud-controller-manager
+ namespace: kube-system
+ spec:
+ chart: aws-cloud-controller-manager
+ repo: https://kubernetes.github.io/cloud-provider-aws
+ targetNamespace: kube-system
+ bootstrap: true
+ valuesContent: |-
+ hostNetworking: true
+ nodeSelector:
+ node-role.kubernetes.io/control-plane: "true"
+ args:
+ - --configure-cloud-routes=false
+ - --v=5
+ - --cloud-provider=aws
+```
+
+
+
+
+
+1. [Node name conventions and other prerequisites](https://cloud-provider-aws.sigs.k8s.io/prerequisites/) must be followed so that the cloud provider can find the instance. Rancher-provisioned clusters don't support configuring `providerID`.
+
+:::note
+
+If you use IP-based naming, the nodes must be named after the instance followed by the regional domain name (`ip-xxx-xxx-xxx-xxx.ec2..internal`). If you have a custom domain name set in the DHCP options, you must set `--hostname-override` on `kube-proxy` and `kubelet` to match this naming convention.
+
+:::
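
As an illustration of the convention, the expected node name can be derived from the instance's private IP and region (a sketch assuming the default AWS-provided private DNS scheme; a custom domain in the DHCP options changes the suffix):

```shell
# Derive the expected EC2 private DNS node name from a private IP and region.
# us-east-1 uses the legacy "ec2.internal" suffix; other regions use
# "<region>.compute.internal".
private_ip="10.0.1.5"
region="us-east-1"
prefix="ip-$(echo "$private_ip" | tr '.' '-')"
if [ "$region" = "us-east-1" ]; then
  suffix="ec2.internal"
else
  suffix="$region.compute.internal"
fi
echo "$prefix.$suffix"   # → ip-10-0-1-5.ec2.internal
```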
+
+To meet node naming conventions, Rancher allows setting `useInstanceMetadataHostname` when the `External Amazon` cloud provider is selected. Enabling `useInstanceMetadataHostname` queries the EC2 metadata service and sets `http://169.254.169.254/latest/meta-data/hostname` as the `hostname-override` for `kubelet` and `kube-proxy`:
+
+```yaml
+rancher_kubernetes_engine_config:
+ cloud_provider:
+ name: external-aws
+ useInstanceMetadataHostname: true
+```
+
+You must not enable `useInstanceMetadataHostname` when setting custom values for `hostname-override` for custom clusters. When you create a [custom cluster](../../../../reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/use-existing-nodes.md), add [`--node-name`](../../../../reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/rancher-agent-options.md) to the `docker run` node registration command to set `hostname-override` — for example, `"$(hostname -f)"`. This can be done manually or by using **Show Advanced Options** in the Rancher UI to add **Node Name**.
+
+2. Select the cloud provider.
+
+Selecting **External Amazon (out-of-tree)** sets `--cloud-provider=external` and enables `useInstanceMetadataHostname`. As mentioned in step 1, enabling `useInstanceMetadataHostname` will query the EC2 metadata service and set `http://169.254.169.254/latest/meta-data/hostname` as `hostname-override` for `kubelet` and `kube-proxy`.
+
+:::note
+
+You must disable `useInstanceMetadataHostname` when setting a custom node name for custom clusters via `node-name`.
+
+:::
+
+```yaml
+rancher_kubernetes_engine_config:
+ cloud_provider:
+ name: external-aws
+ useInstanceMetadataHostname: true/false
+```
+
+Existing clusters that use an **External** cloud provider will set `--cloud-provider=external` for Kubernetes components but won't set the node name.
+
+3. Install the AWS cloud controller manager after the cluster finishes provisioning. Note that the cluster isn't successfully provisioned and nodes are still in an `uninitialized` state until you deploy the cloud controller manager. This can be done manually, or via [Helm charts in UI](#helm-chart-installation-from-ui).
+
+Refer to the official AWS upstream documentation for the [cloud controller manager](https://kubernetes.github.io/cloud-provider-aws).
+
+
+
+
+### Helm Chart Installation from CLI
+
+
+
+
+Official upstream docs for [Helm chart installation](https://github.com/kubernetes/cloud-provider-aws/tree/master/charts/aws-cloud-controller-manager) can be found on GitHub.
+
+1. Add the Helm repository:
+
+```shell
+helm repo add aws-cloud-controller-manager https://kubernetes.github.io/cloud-provider-aws
+helm repo update
+```
+
+2. Create a `values.yaml` file with the following contents to override the default `values.yaml`:
+
+```yaml
+# values.yaml
+hostNetworking: true
+tolerations:
+ - effect: NoSchedule
+ key: node.cloudprovider.kubernetes.io/uninitialized
+ value: 'true'
+ - effect: NoSchedule
+ value: 'true'
+ key: node-role.kubernetes.io/control-plane
+nodeSelector:
+ node-role.kubernetes.io/control-plane: 'true'
+args:
+ - --configure-cloud-routes=false
+ - --use-service-account-credentials=true
+ - --v=2
+ - --cloud-provider=aws
+clusterRoleRules:
+ - apiGroups:
+ - ""
+ resources:
+ - events
+ verbs:
+ - create
+ - patch
+ - update
+ - apiGroups:
+ - ""
+ resources:
+ - nodes
+ verbs:
+ - '*'
+ - apiGroups:
+ - ""
+ resources:
+ - nodes/status
+ verbs:
+ - patch
+ - apiGroups:
+ - ""
+ resources:
+ - services
+ verbs:
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - ""
+ resources:
+ - services/status
+ verbs:
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - ''
+ resources:
+ - serviceaccounts
+ verbs:
+ - create
+ - get
+ - apiGroups:
+ - ""
+ resources:
+ - persistentvolumes
+ verbs:
+ - get
+ - list
+ - update
+ - watch
+ - apiGroups:
+ - ""
+ resources:
+ - endpoints
+ verbs:
+ - create
+ - get
+ - list
+ - watch
+ - update
+ - apiGroups:
+ - coordination.k8s.io
+ resources:
+ - leases
+ verbs:
+ - create
+ - get
+ - list
+ - watch
+ - update
+ - apiGroups:
+ - ""
+ resources:
+ - serviceaccounts/token
+ verbs:
+ - create
+```
+
+3. Install the Helm chart:
+
+```shell
+helm upgrade --install aws-cloud-controller-manager -n kube-system aws-cloud-controller-manager/aws-cloud-controller-manager --values values.yaml
+```
+
+Verify that the Helm chart installed successfully:
+
+```shell
+helm status -n kube-system aws-cloud-controller-manager
+```
+
+4. (Optional) Verify that the cloud controller manager update succeeded:
+
+```shell
+kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager
+```
+
+
+
+
+
+Official upstream docs for [Helm chart installation](https://github.com/kubernetes/cloud-provider-aws/tree/master/charts/aws-cloud-controller-manager) can be found on GitHub.
+
+1. Add the Helm repository:
+
+```shell
+helm repo add aws-cloud-controller-manager https://kubernetes.github.io/cloud-provider-aws
+helm repo update
+```
+
+2. Create a `values.yaml` file with the following contents, to override the default `values.yaml`:
+
+```yaml
+# values.yaml
+hostNetworking: true
+tolerations:
+ - effect: NoSchedule
+ key: node.cloudprovider.kubernetes.io/uninitialized
+ value: 'true'
+ - effect: NoSchedule
+ value: 'true'
+ key: node-role.kubernetes.io/controlplane
+nodeSelector:
+ node-role.kubernetes.io/controlplane: 'true'
+args:
+ - --configure-cloud-routes=false
+ - --use-service-account-credentials=true
+ - --v=2
+ - --cloud-provider=aws
+clusterRoleRules:
+ - apiGroups:
+ - ""
+ resources:
+ - events
+ verbs:
+ - create
+ - patch
+ - update
+ - apiGroups:
+ - ""
+ resources:
+ - nodes
+ verbs:
+ - '*'
+ - apiGroups:
+ - ""
+ resources:
+ - nodes/status
+ verbs:
+ - patch
+ - apiGroups:
+ - ""
+ resources:
+ - services
+ verbs:
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - ""
+ resources:
+ - services/status
+ verbs:
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - ''
+ resources:
+ - serviceaccounts
+ verbs:
+ - create
+ - get
+ - apiGroups:
+ - ""
+ resources:
+ - persistentvolumes
+ verbs:
+ - get
+ - list
+ - update
+ - watch
+ - apiGroups:
+ - ""
+ resources:
+ - endpoints
+ verbs:
+ - create
+ - get
+ - list
+ - watch
+ - update
+ - apiGroups:
+ - coordination.k8s.io
+ resources:
+ - leases
+ verbs:
+ - create
+ - get
+ - list
+ - watch
+ - update
+ - apiGroups:
+ - ""
+ resources:
+ - serviceaccounts/token
+ verbs:
+ - create
+```
+
+3. Install the Helm chart:
+
+```shell
+helm upgrade --install aws-cloud-controller-manager -n kube-system aws-cloud-controller-manager/aws-cloud-controller-manager --values values.yaml
+```
+
+Verify that the Helm chart installed successfully:
+
+```shell
+helm status -n kube-system aws-cloud-controller-manager
+```
+
+4. If present, edit the Daemonset to remove the default node selector `node-role.kubernetes.io/control-plane: ""`:
+
+```shell
+kubectl edit daemonset aws-cloud-controller-manager -n kube-system
+```
+
+5. (Optional) Verify that the cloud controller manager update succeeded:
+
+```shell
+kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager
+```
+
+
+
+
+### Helm Chart Installation from UI
+
+
+
+
+1. Click **☰**, then select the name of the cluster from the left navigation.
+
+2. Select **Apps** > **Repositories**.
+
+3. Click the **Create** button.
+
+4. Enter `https://kubernetes.github.io/cloud-provider-aws` in the **Index URL** field.
+
+5. Select **Apps** > **Charts** from the left navigation and install **aws-cloud-controller-manager**.
+
+6. Select the namespace, `kube-system`, and enable **Customize Helm options before install**.
+
+7. Add the following container arguments:
+
+```yaml
+ - '--use-service-account-credentials=true'
+ - '--configure-cloud-routes=false'
+```
+
+8. Add `get` to `verbs` for `serviceaccounts` resources in `clusterRoleRules`. This allows the cloud controller manager to get service accounts upon startup.
+
+```yaml
+ - apiGroups:
+ - ''
+ resources:
+ - serviceaccounts
+ verbs:
+ - create
+ - get
+```
+
+9. Rancher-provisioned RKE2 nodes are tainted `node-role.kubernetes.io/control-plane`. Update tolerations and the nodeSelector:
+
+```yaml
+tolerations:
+ - effect: NoSchedule
+ key: node.cloudprovider.kubernetes.io/uninitialized
+ value: 'true'
+ - effect: NoSchedule
+ value: 'true'
+ key: node-role.kubernetes.io/control-plane
+
+```
+
+```yaml
+nodeSelector:
+ node-role.kubernetes.io/control-plane: 'true'
+```
+
+:::note
+
+There's currently a [known issue](https://github.com/rancher/dashboard/issues/9249) where nodeSelector can't be updated from the Rancher UI. Continue installing the chart and then edit the Daemonset manually to set the `nodeSelector`:
+
+```yaml
+nodeSelector:
+ node-role.kubernetes.io/control-plane: 'true'
+```
+
+:::
+
+10. Install the chart and confirm that the `aws-cloud-controller-manager` Daemonset is running, and that its pods are running in the target namespace (`kube-system`, unless you changed it in step 6).
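
As with a CLI install, the rollout can be verified once the chart is deployed (assumes your kubeconfig points at the target cluster):

```shell
kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager
```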
+
+
+
+
+
+1. Click **☰**, then select the name of the cluster from the left navigation.
+
+2. Select **Apps** > **Repositories**.
+
+3. Click the **Create** button.
+
+4. Enter `https://kubernetes.github.io/cloud-provider-aws` in the **Index URL** field.
+
+5. Select **Apps** > **Charts** from the left navigation and install **aws-cloud-controller-manager**.
+
+6. Select the namespace, `kube-system`, and enable **Customize Helm options before install**.
+
+7. Add the following container arguments:
+
+```yaml
+ - '--use-service-account-credentials=true'
+ - '--configure-cloud-routes=false'
+```
+
+8. Add `get` to `verbs` for `serviceaccounts` resources in `clusterRoleRules`. This allows the cloud controller manager to get service accounts upon startup:
+
+```yaml
+ - apiGroups:
+ - ''
+ resources:
+ - serviceaccounts
+ verbs:
+ - create
+ - get
+```
+
+9. Rancher-provisioned RKE nodes are tainted `node-role.kubernetes.io/controlplane`. Update tolerations and the nodeSelector:
+
+```yaml
+tolerations:
+ - effect: NoSchedule
+ key: node.cloudprovider.kubernetes.io/uninitialized
+ value: 'true'
+ - effect: NoSchedule
+ value: 'true'
+ key: node-role.kubernetes.io/controlplane
+
+```
+
+```yaml
+nodeSelector:
+ node-role.kubernetes.io/controlplane: 'true'
+```
+
+:::note
+
+There's currently a [known issue](https://github.com/rancher/dashboard/issues/9249) where `nodeSelector` can't be updated from the Rancher UI. Continue installing the chart, then edit the Daemonset manually to set the `nodeSelector`:
+
+```yaml
+nodeSelector:
+ node-role.kubernetes.io/controlplane: 'true'
+```
+
+:::
+
+10. Install the chart and confirm that the Daemonset `aws-cloud-controller-manager` deploys successfully:
+
+```shell
+kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager
+```
+
+
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/kubernetes-resources-setup.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/kubernetes-resources-setup.md
index 4dd86bb2a6c..8b80ea3272d 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/kubernetes-resources-setup.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/kubernetes-resources-setup/kubernetes-resources-setup.md
@@ -30,10 +30,10 @@ title: Kubernetes 资源
Rancher 支持两种类型的负载均衡器:
-- [Layer-4 负载均衡器](load-balancer-and-ingress-controller/layer-4-and-layer-7-load-balancing.md#layer-4-load-balancer#四层负载均衡器)
-- [Layer-7 负载均衡器](load-balancer-and-ingress-controller/layer-4-and-layer-7-load-balancing.md#七层负载均衡器)
+- [Layer-4 负载均衡器](./load-balancer-and-ingress-controller/layer-4-and-layer-7-load-balancing.md#四层负载均衡器)
+- [Layer-7 负载均衡器](./load-balancer-and-ingress-controller/layer-4-and-layer-7-load-balancing.md#七层负载均衡器)
-有关详细信息,请参阅[负载均衡器](load-balancer-and-ingress-controller/layer-4-and-layer-7-load-balancing.md)。
+有关详细信息,请参阅[负载均衡器](./load-balancer-and-ingress-controller/layer-4-and-layer-7-load-balancing.md)。
#### Ingress
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/nutanix/nutanix.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/nutanix/nutanix.md
index 885bbb38847..83d63e83a38 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/nutanix/nutanix.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/nutanix/nutanix.md
@@ -13,9 +13,9 @@ Rancher 可以在 AOS (AHV) 中配置节点并在其上安装 Kubernetes。在 A
Nutanix 集群可能由多组具有不同属性(例如内存或 vCPU 数量)的 VM 组成。这种分组允许对每个 Kubernetes 角色的节点大小进行细粒度控制。
-- [创建 Nutanix 集群](provision-kubernetes-clusters-in-aos.md#creating-a-nutanix-aos-cluster)
-- [配置存储](provision-kubernetes-clusters-in-aos.md)
+- [创建 Nutanix 集群](./provision-kubernetes-clusters-in-aos.md#创建-nutanix-aos-集群)
+- [配置存储](./provision-kubernetes-clusters-in-aos.md)
## 创建 Nutanix 集群
-在[本节](provision-kubernetes-clusters-in-aos.md)中,你将学习如何使用 Rancher 在 Nutanix AOS 中安装 [RKE](https://rancher.com/docs/rke/latest/en/) Kubernetes 集群。
+在[本节](./provision-kubernetes-clusters-in-aos.md)中,你将学习如何使用 Rancher 在 Nutanix AOS 中安装 [RKE](https://rancher.com/docs/rke/latest/en/) Kubernetes 集群。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/manage-clusters.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/manage-clusters.md
index f7d9bb6b990..676c25fb2a9 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/manage-clusters.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/manage-clusters.md
@@ -16,7 +16,7 @@ title: 集群管理
## 在 Rancher 中管理集群
-将集群[配置到 Rancher](../kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md) 之后,[集群所有者](../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#cluster-roles)需要管理这些集群。管理集群的选项如下:
+将集群[配置到 Rancher](../kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md) 之后,[集群所有者](../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#集群角色)需要管理这些集群。管理集群的选项如下:
import ClusterCapabilitiesTable from '../../../shared-files/_cluster-capabilities-table.md';
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/fleet/overview.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/fleet/overview.md
index 55b89b8ee08..7f2c1f26083 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/fleet/overview.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/fleet/overview.md
@@ -12,7 +12,7 @@ Fleet 是 Rancher 的一个独立项目,可以通过 Helm 安装在任何 Kube
## 架构
-有关 Fleet 如何运作的信息,请参阅[架构](./architecture)页面。
+有关 Fleet 如何运作的信息,请参阅[架构](./architecture.md)页面。
## 在 Rancher UI 中访问 Fleet
@@ -39,7 +39,7 @@ Fleet 预安装在 Rancher 中,并由 Rancher UI 中的**持续交付**选项
## Windows 支持
-有关对具有 Windows 节点的集群的支持的详细信息,请参阅 [Windows 支持](./windows-support)页面。
+有关对具有 Windows 节点的集群的支持的详细信息,请参阅 [Windows 支持](./windows-support.md)页面。
## GitHub 仓库
@@ -47,7 +47,7 @@ Fleet Helm charts 可在[此处](https://github.com/rancher/fleet/releases)获
## 在代理后使用 Fleet
-有关在代理后面使用 Fleet 的详细信息,请参阅[在代理后使用 Fleet](./use-fleet-behind-a-proxy)页面。
+有关在代理后面使用 Fleet 的详细信息,请参阅[在代理后使用 Fleet](./use-fleet-behind-a-proxy.md)页面。
## Helm Chart 依赖
@@ -57,7 +57,7 @@ git 仓库中的 Helm Chart 必须在 Chart 子目录中包含其依赖。 你
## 故障排除
-- **已知问题**:Fleet gitrepos 的 clientSecretName 和 helmSecretName 密文不包含在 [backup-restore-operator](../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher.md#1-install-the-rancher-backup-operator) 创建的备份或恢复中。一旦有永久的解决方案,我们将更新社区内容。
+- **已知问题**:Fleet gitrepos 的 clientSecretName 和 helmSecretName 密文不包含在 [backup-restore-operator](../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher.md#1-安装-rancher-backup-operator) 创建的备份或恢复中。一旦有永久的解决方案,我们将更新社区内容。
- **临时解决方法**:默认情况下,用户定义的密文不会在 Fleet 中备份。如果执行灾难恢复或将 Rancher 迁移到新集群,则有必要重新创建密文。要修改 ResourceSet 以包含要备份的额外资源,请参阅文档[此处](https://github.com/rancher/backup-restore-operator#user-flow)。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/harvester/overview.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/harvester/overview.md
index ac720bec060..796257c729f 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/harvester/overview.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/harvester/overview.md
@@ -28,7 +28,7 @@ Harvester 功能开关用于管理对 Rancher 中虚拟化管理(VM)页面
Harvester 允许通过 Harvester UI 上传和显示 `.ISO` 镜像,但 Rancher UI 是不支持的。这是因为 `.ISO` 镜像通常需要额外的设置,这会干扰干净的部署(即无需用户干预),并且它们通常不用于云环境。
-如需了解 Rancher 中主机驱动的更多详细信息,请单击[此处](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/about-provisioning-drivers#主机驱动)。
+如需了解 Rancher 中主机驱动的更多详细信息,请单击[此处](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/about-provisioning-drivers.md#主机驱动)。
### 端口要求
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/kubernetes-distributions/kubernetes-distributions.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/kubernetes-distributions/kubernetes-distributions.md
index b45cce6b4d7..61916c318be 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/kubernetes-distributions/kubernetes-distributions.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/kubernetes-distributions/kubernetes-distributions.md
@@ -13,7 +13,7 @@ K3s 是一款轻量级、完全兼容的 Kubernetes 发行版,专为一系列
### K3s 与 Rancher
- Rancher 允许在一系列平台上轻松配置 K3s,包括 Amazon EC2、DigitalOcean、Azure、vSphere 或现有服务器。
-- Kubernetes 集群的标准 Rancher 管理,包括所有概述[集群管理功能](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup#cluster-management-capabilities-by-cluster-type)。
+- Kubernetes 集群的标准 Rancher 管理,包括所有概述[集群管理功能](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md#按集群类型划分的集群管理功能)。
## RKE2
@@ -31,4 +31,4 @@ RKE2 的主要特性包括:
## RKE2 与 Rancher
- Rancher 允许在一系列平台上轻松配置 RKE2,包括 Amazon EC2、DigitalOcean、Azure、vSphere 或现有服务器。
-- Kubernetes 集群的标准 Rancher 管理,包括所有概述[集群管理功能](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup#cluster-management-capabilities-by-cluster-type)。
+- Kubernetes 集群的标准 Rancher 管理,包括所有概述[集群管理功能](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md#按集群类型划分的集群管理功能)。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md
index dfeefda6f97..d068c7c77fa 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/integrations-in-rancher/monitoring-and-alerting/rbac-for-monitoring.md
@@ -107,7 +107,7 @@ Monitoring 还会创建其他 `ClusterRole`,这些角色默认情况下不会
| 角色 | 用途 |
| ------------------------------| ---------------------------|
-| monitoring-ui-view | _自 Monitoring v2 14.5.100+ 起可用_ 此 ClusterRole 允许用户在 Rancher UI 中查看指定集群的指标图。这是通过授予对外部监控 UI 的只读访问权限来实现的。具有此角色的用户有权限列出 Prometheus、Alertmanager 和 Grafana 端点,并通过 Rancher 代理向 Prometheus、Grafana 和 Alertmanager UI 发出 GET 请求。 |
+| monitoring-ui-view | _自 Monitoring v2 14.5.100+ 起可用_ 此 ClusterRole 允许用户在 Rancher UI 中查看指定集群的指标图。这是通过授予对外部监控 UI 的只读访问权限来实现的。具有此角色的用户有权限列出 Prometheus、Alertmanager 和 Grafana 端点,并通过 Rancher 代理向 Prometheus、Grafana 和 Alertmanager UI 发出 GET 请求。 |
### 使用 kubectl 分配 Role 和 ClusterRole
@@ -203,7 +203,7 @@ Rancher 部署的默认角色(即 cluster-owner、cluster-member、project-own
| Rancher 角色 | Kubernetes ClusterRole | 可用 Rancher 版本 | 可用 Monitoring V2 版本 |
|--------------------------|-------------------------------|-------|------|
-| 查看 Monitoring\* | [monitoring-ui-view](#monitoring-ui-view) | 2.4.8+ | 9.4.204+ |
+| 查看 Monitoring\* | [monitoring-ui-view](#其他监控集群角色) | 2.4.8+ | 9.4.204+ |
\* 如果某个用户绑定了 Rancher 的 **View Monitoring** 角色,该用户只有在有 UI 链接时才有权访问外部 Monitoring UI。要访问 Monitoring Pane 以获取这些链接,用户必须是至少一个项目的项目成员。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/cluster-configuration.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/cluster-configuration.md
index 9bc1db8d00b..dcb0c7adaf6 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/cluster-configuration.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/cluster-configuration.md
@@ -14,12 +14,12 @@ title: 集群配置
集群配置选项取决于 Kubernetes 集群的类型:
-- [RKE 集群配置](rancher-server-configuration/rke1-cluster-configuration.md)
-- [RKE2 集群配置](rancher-server-configuration/rke2-cluster-configuration.md)
-- [K3s 集群配置](rancher-server-configuration/k3s-cluster-configuration.md)
-- [EKS 集群配置](rancher-server-configuration/eks-cluster-configuration.md)
-- [GKE 集群配置](gke-cluster-configuration.md)
-- [AKS 集群配置](rancher-server-configuration/aks-cluster-configuration.md)
+- [RKE 集群配置](./rancher-server-configuration/rke1-cluster-configuration.md)
+- [RKE2 集群配置](./rancher-server-configuration/rke2-cluster-configuration.md)
+- [K3s 集群配置](./rancher-server-configuration/k3s-cluster-configuration.md)
+- [EKS 集群配置](./rancher-server-configuration/eks-cluster-configuration.md)
+- [GKE 集群配置](./rancher-server-configuration/gke-cluster-configuration/gke-cluster-configuration.md)
+- [AKS 集群配置](./rancher-server-configuration/aks-cluster-configuration.md)
### 不同类型集群的管理功能
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/downstream-cluster-configuration/downstream-cluster-configuration.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/downstream-cluster-configuration/downstream-cluster-configuration.md
index 591285c378e..70e32b32c13 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/downstream-cluster-configuration/downstream-cluster-configuration.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/downstream-cluster-configuration/downstream-cluster-configuration.md
@@ -6,4 +6,4 @@ title: 下游集群配置
-以下文档将讨论[节点模板配置](./node-template-configuration.md)和[主机配置](./machine-configuration.md)。
+以下文档将讨论[节点模板配置](./node-template-configuration/node-template-configuration.md)和[主机配置](./machine-configuration/machine-configuration.md)。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md
index 96e2cc56603..38f7277a323 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md
@@ -110,7 +110,7 @@ Rancher 与以下开箱即用的网络提供商兼容:
所有 CNI 网络插件都支持[双栈](https://docs.rke2.io/install/network_options#dual-stack-configuration)网络。要在双栈模式下配置 RKE2,请为你的[集群 CIDR](#集群-cidr) 和/或 [Service CIDR](#service-cidr) 设置有效的 IPv4/IPv6 CIDR。
-###### 额外配置 {#dual-stack-additional-config}
+###### 额外配置
使用 `cilium` 或 `multus,cilium` 作为容器网络接口提供商时,请确保**启用 IPv6 支持**选项。
@@ -182,7 +182,7 @@ Rancher 与以下开箱即用的网络提供商兼容:
要配置[双栈](https://docs.rke2.io/install/network_options#dual-stack-configuration)模式,请输入有效的 IPv4/IPv6 CIDR。例如 `10.42.0.0/16,2001:cafe:42:0::/56`。
-使用 `cilium` 或 `multus,cilium` 作为[容器网络](#容器网络提供商)接口提供商时,你需要进行[附加配置](#dual-stack-additional-config)。
+使用 `cilium` 或 `multus,cilium` 作为[容器网络](#容器网络提供商)接口提供商时,你需要进行[附加配置](#额外配置)。
#### Service CIDR
@@ -192,7 +192,7 @@ Rancher 与以下开箱即用的网络提供商兼容:
要配置[双栈](https://docs.rke2.io/install/network_options#dual-stack-configuration)模式,请输入有效的 IPv4/IPv6 CIDR。例如 `10.42.0.0/16,2001:cafe:42:0::/56`。
-使用 `cilium` 或 `multus,cilium` 作为[容器网络](#容器网络提供商)接口提供商时,你需要进行[附加配置](#dual-stack-additional-config)。
+使用 `cilium` 或 `multus,cilium` 作为[容器网络](#容器网络提供商)接口提供商时,你需要进行[附加配置](#额外配置)。
#### 集群 DNS
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/use-existing-nodes.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/use-existing-nodes.md
index c4e1efd5335..3a33851ca92 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/use-existing-nodes.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/use-existing-nodes.md
@@ -17,7 +17,7 @@ description: 要创建具有自定义节点的集群,你需要访问集群中
:::note 使用 Windows 主机作为 Kubernetes Worker 节点?
-在开始之前,请参阅[配置 Windows 自定义集群](use-windows-clusters.md)。
+在开始之前,请参阅[配置 Windows 自定义集群](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/use-windows-clusters.md)。
:::
@@ -137,5 +137,5 @@ Key=kubernetes.io/cluster/CLUSTERID, Value=shared
创建集群后,你可以通过 Rancher UI 访问集群。最佳实践建议你设置以下访问集群的备用方式:
-- **通过 kubectl CLI 访问你的集群**:按照[这些步骤](../../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#accessing-clusters-with-kubectl-from-your-workstation)在你的工作站上使用 kubectl 访问集群。在这种情况下,你将通过 Rancher Server 的认证代理进行认证,然后 Rancher 会让你连接到下游集群。此方法允许你在没有 Rancher UI 的情况下管理集群。
-- **通过 kubectl CLI 使用授权的集群端点访问你的集群**:按照[这些步骤](../../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster)直接使用 kubectl 访问集群,而无需通过 Rancher 进行认证。我们建议设置此替代方法来访问集群,以便在无法连接到 Rancher 时访问集群。
+- **通过 kubectl CLI 访问你的集群**:按照[这些步骤](../../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#在工作站使用-kubectl-访问集群)在你的工作站上使用 kubectl 访问集群。在这种情况下,你将通过 Rancher Server 的认证代理进行认证,然后 Rancher 会让你连接到下游集群。此方法允许你在没有 Rancher UI 的情况下管理集群。
+- **通过 kubectl CLI 使用授权的集群端点访问你的集群**:按照[这些步骤](../../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#直接使用下游集群进行身份验证)直接使用 kubectl 访问集群,而无需通过 Rancher 进行认证。我们建议设置此替代方法来访问集群,以便在无法连接到 Rancher 时访问集群。
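The hunks above are all repairs to relative `.md` links. Regressions like the doubled slash or a path missing its `.md` file are easy to catch mechanically; a minimal sketch like the following (a hypothetical helper, not part of the Rancher repo's actual tooling) could flag dangling relative links before review. It only checks that the target file exists, not that the `#anchor` fragment resolves:

```python
import re
from pathlib import Path

# Matches markdown links whose target is an explicitly relative .md path,
# e.g. [text](./foo/bar.md) or [text](../../baz.md#anchor).
LINK_RE = re.compile(r'\]\((\.{1,2}/[^)#]+\.md)(#[^)]*)?\)')

def find_broken_links(root):
    """Return (source file, link target) pairs for relative .md links
    whose target file does not exist on disk."""
    broken = []
    for md in Path(root).rglob('*.md'):
        text = md.read_text(encoding='utf-8')
        for match in LINK_RE.finditer(text):
            # Resolve the link relative to the file that contains it.
            target = (md.parent / match.group(1)).resolve()
            if not target.is_file():
                broken.append((str(md), match.group(1)))
    return broken
```

Run against the `i18n/zh/...` tree, a non-empty result would list every file/link pair needing the kind of fix this patch makes. Anchor fragments (e.g. the Chinese headings used above) would need a second pass that parses headings and applies the site's slugification rules.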