diff --git a/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md b/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md index c40d3000816..a0f3271be5b 100644 --- a/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md +++ b/docs/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md @@ -70,7 +70,7 @@ StatefulSets manage the deployment and scaling of Pods while maintaining a stick 1. Click **StatefulSet**. 1. In the **Volume Claim Templates** tab, click **Add Claim Template**. 1. Enter a name for the persistent volume. -1. In the **StorageClass* field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. +1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Launch**. @@ -84,7 +84,7 @@ To attach the PVC to an existing workload, -1. Go to the workload that will use storage provisioned with the StorageClass that you cared at click **⋮ > Edit Config**. +1. Go to the workload that will use storage provisioned with the StorageClass that you created, then click **⋮ > Edit Config**. 1. In the **Volume Claim Templates** section, click **Add Claim Template**. 1. Enter a persistent volume name. -1. In the **StorageClass* field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. +1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Save**.
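The volume claim template steps edited above (for both the new-StatefulSet and existing-workload flows) correspond to a `volumeClaimTemplates` entry in a StatefulSet spec. A minimal sketch of the kind of manifest those UI fields produce; the names (`test-sts`, `data`, `my-storageclass`) and the `nginx` image are illustrative assumptions, not taken from the docs:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-sts                  # illustrative name
spec:
  serviceName: test-sts
  replicas: 2
  selector:
    matchLabels:
      app: test-sts
  template:
    metadata:
      labels:
        app: test-sts
    spec:
      containers:
        - name: app
          image: nginx            # illustrative workload image
          volumeMounts:
            - name: data               # must match the claim template name
              mountPath: /var/data     # the "Mount Point" field
  volumeClaimTemplates:
    - metadata:
        name: data                     # the persistent volume name entered in the UI
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: my-storageclass   # the "StorageClass" field
        resources:
          requests:
            storage: 1Gi
```

Each replica then gets its own PVC (`data-test-sts-0`, `data-test-sts-1`, …), dynamically provisioned by the selected StorageClass.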
diff --git a/docs/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md b/docs/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md index 7daaab8504b..6a95a95b3ae 100644 --- a/docs/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md +++ b/docs/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md @@ -304,7 +304,7 @@ cloud-provider|-|Cloud provider type| |max-node-provision-time|"15m"|Maximum time CA waits for node to be provisioned| |nodes|-|sets min,max size and other configuration data for a node group in a format accepted by cloud provider. Can be used multiple times. Format: `::`| |node-group-auto-discovery|-|One or more definition(s) of node group auto-discovery. A definition is expressed `:[[=]]`| -|estimator|-|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]| +|estimator|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]| |expander|"random"|Type of node group expander to be used in scale up. 
Available values: `["random","most-pods","least-waste","price","priority"]`| |ignore-daemonsets-utilization|false|Should CA ignore DaemonSet pods when calculating resource utilization for scaling down| |ignore-mirror-pods-utilization|false|Should CA ignore Mirror pods when calculating resource utilization for scaling down| diff --git a/docs/troubleshooting/other-troubleshooting-tips/networking.md b/docs/troubleshooting/other-troubleshooting-tips/networking.md index d67a1cdb793..d0af8a967c3 100644 --- a/docs/troubleshooting/other-troubleshooting-tips/networking.md +++ b/docs/troubleshooting/other-troubleshooting-tips/networking.md @@ -107,20 +107,3 @@ When the MTU is incorrectly configured (either on hosts running Rancher, nodes i * `read tcp: i/o timeout` -See [Google Cloud VPN: MTU Considerations](https://cloud.google.com/vpn/docs/concepts/mtu-considerations#gateway_mtu_vs_system_mtu) for an example how to configure MTU correctly when using Google Cloud VPN between Rancher and cluster nodes. +See [Google Cloud VPN: MTU Considerations](https://cloud.google.com/vpn/docs/concepts/mtu-considerations#gateway_mtu_vs_system_mtu) for an example of how to configure MTU correctly when using Google Cloud VPN between Rancher and cluster nodes. - -### Resolved issues - -#### Overlay network broken when using Canal/Flannel due to missing node annotations - -| | | -|------------|------------| -| GitHub issue | [#13644](https://github.com/rancher/rancher/issues/13644) | -| Resolved in | v2.1.2 | - -To check if your cluster is affected, the following command will list nodes that are broken (this command requires `jq` to be installed): - -``` -kubectl get nodes -o json | jq '.items[].metadata | select(.annotations["flannel.alpha.coreos.com/public-ip"] == null or .annotations["flannel.alpha.coreos.com/kube-subnet-manager"] == null or .annotations["flannel.alpha.coreos.com/backend-type"] == null or .annotations["flannel.alpha.coreos.com/backend-data"] == null) | .name' -``` - -If there is no output, the cluster is not affected.
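The flags in the cluster autoscaler table above, including the corrected `estimator` row, are passed as command-line arguments to the cluster-autoscaler container. A hedged sketch of the relevant container fragment of the Deployment, using the documented default values; the image tag and the ASG name are illustrative assumptions:

```yaml
# Fragment of a cluster-autoscaler Deployment pod spec (not a complete manifest).
containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.27.3   # illustrative tag
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      - --nodes=1:5:my-asg-name           # min:max:ASG name; "my-asg-name" is illustrative
      - --estimator=binpacking            # default; "binpacking" is the only available value
      - --expander=random                 # default; random|most-pods|least-waste|price|priority
      - --max-node-provision-time=15m     # default
      - --ignore-daemonsets-utilization=false
```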
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md index 6582e5e0f50..8ea6ecae323 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md @@ -66,7 +66,7 @@ StatefulSet 管理 Pod 的部署和扩展,同时为每个 Pod 维护一个粘 1. 单击 **StatefulSet**。 1. 在**卷声明模板**选项卡上,单击**添加声明模板**。 1. 输入持久卷的名称。 -1. 在*存储类*\*字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 +1. 在**存储类**字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 1. 在**挂载点**字段中,输入工作负载将用于访问卷的路径。 1. 点击**启动**。 @@ -80,7 +80,7 @@ StatefulSet 管理 Pod 的部署和扩展,同时为每个 Pod 维护一个粘 1. 单击 **⋮ > 编辑配置**,转到使用由 StorageClass 配置的存储的工作负载。 1. 在**卷声明模板**中,单击**添加声明模板**。 1. 输入持久卷名称。 -1. 在*存储类*\*字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 +1. 在**存储类**字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 1. 在**挂载点**字段中,输入工作负载将用于访问卷的路径。 1. 
单击**保存**。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md index a262560e90f..7b2627dc6a1 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md @@ -300,7 +300,7 @@ title: 通过 AWS EC2 Auto Scaling 组使用 Cluster Autoscaler | max-node-provision-time | "15m" | CA 等待节点配置的最长时间 | | nodes | - | 以云提供商接受的格式设置节点组的最小、最大大小和其他配置数据。可以多次使用。格式是 `::`。 | | node-group-auto-discovery | - | 节点组自动发现的一个或多个定义。定义表示为 `:[[=]]` | -| estimator | - | "binpacking" | 用于扩容的资源评估器类型。可用值:["binpacking"] | +| estimator |"binpacking" | 用于扩容的资源评估器类型。可用值:["binpacking"] | | expander | "random" | 要在扩容中使用的节点组扩展器的类型。可用值:`["random","most-pods","least-waste","price","priority"]` | | ignore-daemonsets-utilization | false | CA 为了缩容而计算资源利用率时,是否应忽略 DaemonSet pod | | ignore-mirror-pods-utilization | false | CA 为了缩容而计算资源利用率时,是否应忽略 Mirror pod | diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/troubleshooting/other-troubleshooting-tips/networking.md b/i18n/zh/docusaurus-plugin-content-docs/current/troubleshooting/other-troubleshooting-tips/networking.md index 47c806b0278..32bdd10c27c 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/troubleshooting/other-troubleshooting-tips/networking.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/troubleshooting/other-troubleshooting-tips/networking.md @@ -102,20 +102,3 @@ title: 网络 * `read tcp: i/o timeout` 有关在 Rancher 和集群节点之间使用 Google Cloud VPN 时如何正确配置 MTU 的示例,请参阅 [Google Cloud VPN:MTU 
注意事项](https://cloud.google.com/vpn/docs/concepts/mtu-considerations#gateway_mtu_vs_system_mtu)。 - -### 已解决的问题 - -#### 由于缺少节点注释,使用 Canal/Flannel 时覆盖网络中断 - -| | | -|------------|------------| -| GitHub issue | [#13644](https://github.com/rancher/rancher/issues/13644) | -| 解决于 | v2.1.2 | - -要检查你的集群是否受到影响,运行以下命令来列出损坏的节点(此命令要求安装 `jq`): - -``` -kubectl get nodes -o json | jq '.items[].metadata | select(.annotations["flannel.alpha.coreos.com/public-ip"] == null or .annotations["flannel.alpha.coreos.com/kube-subnet-manager"] == null or .annotations["flannel.alpha.coreos.com/backend-type"] == null or .annotations["flannel.alpha.coreos.com/backend-data"] == null) | .name' -``` - -如果没有输出,则集群没有影响。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md index 6582e5e0f50..8ea6ecae323 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md @@ -66,7 +66,7 @@ StatefulSet 管理 Pod 的部署和扩展,同时为每个 Pod 维护一个粘 1. 单击 **StatefulSet**。 1. 在**卷声明模板**选项卡上,单击**添加声明模板**。 1. 输入持久卷的名称。 -1. 在*存储类*\*字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 +1. 在**存储类**字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 1. 在**挂载点**字段中,输入工作负载将用于访问卷的路径。 1. 点击**启动**。 @@ -80,7 +80,7 @@ StatefulSet 管理 Pod 的部署和扩展,同时为每个 Pod 维护一个粘 1. 单击 **⋮ > 编辑配置**,转到使用由 StorageClass 配置的存储的工作负载。 1. 在**卷声明模板**中,单击**添加声明模板**。 1. 
输入持久卷名称。 -1. 在*存储类*\*字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 +1. 在**存储类**字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 1. 在**挂载点**字段中,输入工作负载将用于访问卷的路径。 1. 单击**保存**。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md index a262560e90f..7b2627dc6a1 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md @@ -300,7 +300,7 @@ title: 通过 AWS EC2 Auto Scaling 组使用 Cluster Autoscaler | max-node-provision-time | "15m" | CA 等待节点配置的最长时间 | | nodes | - | 以云提供商接受的格式设置节点组的最小、最大大小和其他配置数据。可以多次使用。格式是 `::`。 | | node-group-auto-discovery | - | 节点组自动发现的一个或多个定义。定义表示为 `:[[=]]` | -| estimator | - | "binpacking" | 用于扩容的资源评估器类型。可用值:["binpacking"] | +| estimator |"binpacking" | 用于扩容的资源评估器类型。可用值:["binpacking"] | | expander | "random" | 要在扩容中使用的节点组扩展器的类型。可用值:`["random","most-pods","least-waste","price","priority"]` | | ignore-daemonsets-utilization | false | CA 为了缩容而计算资源利用率时,是否应忽略 DaemonSet pod | | ignore-mirror-pods-utilization | false | CA 为了缩容而计算资源利用率时,是否应忽略 Mirror pod | diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/troubleshooting/other-troubleshooting-tips/networking.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.6/troubleshooting/other-troubleshooting-tips/networking.md index 47c806b0278..32bdd10c27c 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.6/troubleshooting/other-troubleshooting-tips/networking.md +++ 
b/i18n/zh/docusaurus-plugin-content-docs/version-2.6/troubleshooting/other-troubleshooting-tips/networking.md @@ -102,20 +102,3 @@ title: 网络 * `read tcp: i/o timeout` 有关在 Rancher 和集群节点之间使用 Google Cloud VPN 时如何正确配置 MTU 的示例,请参阅 [Google Cloud VPN:MTU 注意事项](https://cloud.google.com/vpn/docs/concepts/mtu-considerations#gateway_mtu_vs_system_mtu)。 - -### 已解决的问题 - -#### 由于缺少节点注释,使用 Canal/Flannel 时覆盖网络中断 - -| | | -|------------|------------| -| GitHub issue | [#13644](https://github.com/rancher/rancher/issues/13644) | -| 解决于 | v2.1.2 | - -要检查你的集群是否受到影响,运行以下命令来列出损坏的节点(此命令要求安装 `jq`): - -``` -kubectl get nodes -o json | jq '.items[].metadata | select(.annotations["flannel.alpha.coreos.com/public-ip"] == null or .annotations["flannel.alpha.coreos.com/kube-subnet-manager"] == null or .annotations["flannel.alpha.coreos.com/backend-type"] == null or .annotations["flannel.alpha.coreos.com/backend-data"] == null) | .name' -``` - -如果没有输出,则集群没有影响。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md index 6582e5e0f50..8ea6ecae323 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md @@ -66,7 +66,7 @@ StatefulSet 管理 Pod 的部署和扩展,同时为每个 Pod 维护一个粘 1. 单击 **StatefulSet**。 1. 在**卷声明模板**选项卡上,单击**添加声明模板**。 1. 输入持久卷的名称。 -1. 在*存储类*\*字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 +1. 
在**存储类**字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 1. 在**挂载点**字段中,输入工作负载将用于访问卷的路径。 1. 点击**启动**。 @@ -80,7 +80,7 @@ StatefulSet 管理 Pod 的部署和扩展,同时为每个 Pod 维护一个粘 1. 单击 **⋮ > 编辑配置**,转到使用由 StorageClass 配置的存储的工作负载。 1. 在**卷声明模板**中,单击**添加声明模板**。 1. 输入持久卷名称。 -1. 在*存储类*\*字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 +1. 在**存储类**字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 1. 在**挂载点**字段中,输入工作负载将用于访问卷的路径。 1. 单击**保存**。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md index a262560e90f..7b2627dc6a1 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md @@ -300,7 +300,7 @@ title: 通过 AWS EC2 Auto Scaling 组使用 Cluster Autoscaler | max-node-provision-time | "15m" | CA 等待节点配置的最长时间 | | nodes | - | 以云提供商接受的格式设置节点组的最小、最大大小和其他配置数据。可以多次使用。格式是 `::`。 | | node-group-auto-discovery | - | 节点组自动发现的一个或多个定义。定义表示为 `:[[=]]` | -| estimator | - | "binpacking" | 用于扩容的资源评估器类型。可用值:["binpacking"] | +| estimator |"binpacking" | 用于扩容的资源评估器类型。可用值:["binpacking"] | | expander | "random" | 要在扩容中使用的节点组扩展器的类型。可用值:`["random","most-pods","least-waste","price","priority"]` | | ignore-daemonsets-utilization | false | CA 为了缩容而计算资源利用率时,是否应忽略 DaemonSet pod | | ignore-mirror-pods-utilization | false | CA 为了缩容而计算资源利用率时,是否应忽略 Mirror pod | diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.7/troubleshooting/other-troubleshooting-tips/networking.md 
b/i18n/zh/docusaurus-plugin-content-docs/version-2.7/troubleshooting/other-troubleshooting-tips/networking.md index 47c806b0278..32bdd10c27c 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.7/troubleshooting/other-troubleshooting-tips/networking.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.7/troubleshooting/other-troubleshooting-tips/networking.md @@ -102,20 +102,3 @@ title: 网络 * `read tcp: i/o timeout` 有关在 Rancher 和集群节点之间使用 Google Cloud VPN 时如何正确配置 MTU 的示例,请参阅 [Google Cloud VPN:MTU 注意事项](https://cloud.google.com/vpn/docs/concepts/mtu-considerations#gateway_mtu_vs_system_mtu)。 - -### 已解决的问题 - -#### 由于缺少节点注释,使用 Canal/Flannel 时覆盖网络中断 - -| | | -|------------|------------| -| GitHub issue | [#13644](https://github.com/rancher/rancher/issues/13644) | -| 解决于 | v2.1.2 | - -要检查你的集群是否受到影响,运行以下命令来列出损坏的节点(此命令要求安装 `jq`): - -``` -kubectl get nodes -o json | jq '.items[].metadata | select(.annotations["flannel.alpha.coreos.com/public-ip"] == null or .annotations["flannel.alpha.coreos.com/kube-subnet-manager"] == null or .annotations["flannel.alpha.coreos.com/backend-type"] == null or .annotations["flannel.alpha.coreos.com/backend-data"] == null) | .name' -``` - -如果没有输出,则集群没有影响。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md index 6582e5e0f50..8ea6ecae323 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md +++ 
b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md @@ -66,7 +66,7 @@ StatefulSet 管理 Pod 的部署和扩展,同时为每个 Pod 维护一个粘 1. 单击 **StatefulSet**。 1. 在**卷声明模板**选项卡上,单击**添加声明模板**。 1. 输入持久卷的名称。 -1. 在*存储类*\*字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 +1. 在**存储类**字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 1. 在**挂载点**字段中,输入工作负载将用于访问卷的路径。 1. 点击**启动**。 @@ -80,7 +80,7 @@ StatefulSet 管理 Pod 的部署和扩展,同时为每个 Pod 维护一个粘 1. 单击 **⋮ > 编辑配置**,转到使用由 StorageClass 配置的存储的工作负载。 1. 在**卷声明模板**中,单击**添加声明模板**。 1. 输入持久卷名称。 -1. 在*存储类*\*字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 +1. 在**存储类**字段中,选择将为此 StatefulSet 管理的 pod 动态配置存储的 StorageClass。 1. 在**挂载点**字段中,输入工作负载将用于访问卷的路径。 1. 单击**保存**。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md index a262560e90f..7b2627dc6a1 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md @@ -300,7 +300,7 @@ title: 通过 AWS EC2 Auto Scaling 组使用 Cluster Autoscaler | max-node-provision-time | "15m" | CA 等待节点配置的最长时间 | | nodes | - | 以云提供商接受的格式设置节点组的最小、最大大小和其他配置数据。可以多次使用。格式是 `::`。 | | node-group-auto-discovery | - | 节点组自动发现的一个或多个定义。定义表示为 `:[[=]]` | -| estimator | - | "binpacking" | 用于扩容的资源评估器类型。可用值:["binpacking"] | +| estimator |"binpacking" | 用于扩容的资源评估器类型。可用值:["binpacking"] | | expander | "random" | 
要在扩容中使用的节点组扩展器的类型。可用值:`["random","most-pods","least-waste","price","priority"]` | | ignore-daemonsets-utilization | false | CA 为了缩容而计算资源利用率时,是否应忽略 DaemonSet pod | | ignore-mirror-pods-utilization | false | CA 为了缩容而计算资源利用率时,是否应忽略 Mirror pod | diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/troubleshooting/other-troubleshooting-tips/networking.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/troubleshooting/other-troubleshooting-tips/networking.md index 47c806b0278..32bdd10c27c 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.8/troubleshooting/other-troubleshooting-tips/networking.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.8/troubleshooting/other-troubleshooting-tips/networking.md @@ -102,20 +102,3 @@ title: 网络 * `read tcp: i/o timeout` 有关在 Rancher 和集群节点之间使用 Google Cloud VPN 时如何正确配置 MTU 的示例,请参阅 [Google Cloud VPN:MTU 注意事项](https://cloud.google.com/vpn/docs/concepts/mtu-considerations#gateway_mtu_vs_system_mtu)。 - -### 已解决的问题 - -#### 由于缺少节点注释,使用 Canal/Flannel 时覆盖网络中断 - -| | | -|------------|------------| -| GitHub issue | [#13644](https://github.com/rancher/rancher/issues/13644) | -| 解决于 | v2.1.2 | - -要检查你的集群是否受到影响,运行以下命令来列出损坏的节点(此命令要求安装 `jq`): - -``` -kubectl get nodes -o json | jq '.items[].metadata | select(.annotations["flannel.alpha.coreos.com/public-ip"] == null or .annotations["flannel.alpha.coreos.com/kube-subnet-manager"] == null or .annotations["flannel.alpha.coreos.com/backend-type"] == null or .annotations["flannel.alpha.coreos.com/backend-data"] == null) | .name' -``` - -如果没有输出,则集群没有影响。 diff --git a/versioned_docs/version-2.0-2.4/how-to-guides/advanced-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md b/versioned_docs/version-2.0-2.4/how-to-guides/advanced-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md index 9e0a60a5543..85ec558cfa6 100644 --- 
a/versioned_docs/version-2.0-2.4/how-to-guides/advanced-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md +++ b/versioned_docs/version-2.0-2.4/how-to-guides/advanced-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md @@ -302,7 +302,7 @@ cloud-provider|-|Cloud provider type| |max-node-provision-time|"15m"|Maximum time CA waits for node to be provisioned| |nodes|-|sets min,max size and other configuration data for a node group in a format accepted by cloud provider. Can be used multiple times. Format: `::`| |node-group-auto-discovery|-|One or more definition(s) of node group auto-discovery. A definition is expressed `:[[=]]`| -|estimator|-|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]| +|estimator|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]| |expander|"random"|Type of node group expander to be used in scale up. 
Available values: `["random","most-pods","least-waste","price","priority"]`| |ignore-daemonsets-utilization|false|Should CA ignore DaemonSet pods when calculating resource utilization for scaling down| |ignore-mirror-pods-utilization|false|Should CA ignore Mirror pods when calculating resource utilization for scaling down| diff --git a/versioned_docs/version-2.5/how-to-guides/advanced-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md b/versioned_docs/version-2.5/how-to-guides/advanced-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md index 577dd9c5f55..b2ef956d1c5 100644 --- a/versioned_docs/version-2.5/how-to-guides/advanced-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md +++ b/versioned_docs/version-2.5/how-to-guides/advanced-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md @@ -304,7 +304,7 @@ cloud-provider|-|Cloud provider type| |max-node-provision-time|"15m"|Maximum time CA waits for node to be provisioned| |nodes|-|sets min,max size and other configuration data for a node group in a format accepted by cloud provider. Can be used multiple times. Format: `::`| |node-group-auto-discovery|-|One or more definition(s) of node group auto-discovery. A definition is expressed `:[[=]]`| -|estimator|-|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]| +|estimator|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]| |expander|"random"|Type of node group expander to be used in scale up. 
Available values: `["random","most-pods","least-waste","price","priority"]`| |ignore-daemonsets-utilization|false|Should CA ignore DaemonSet pods when calculating resource utilization for scaling down| |ignore-mirror-pods-utilization|false|Should CA ignore Mirror pods when calculating resource utilization for scaling down| diff --git a/versioned_docs/version-2.5/troubleshooting/other-troubleshooting-tips/networking.md b/versioned_docs/version-2.5/troubleshooting/other-troubleshooting-tips/networking.md index e1d6c78cfe5..d4d3017db39 100644 --- a/versioned_docs/version-2.5/troubleshooting/other-troubleshooting-tips/networking.md +++ b/versioned_docs/version-2.5/troubleshooting/other-troubleshooting-tips/networking.md @@ -103,19 +103,3 @@ When the MTU is incorrectly configured (either on hosts running Rancher, nodes i See [Google Cloud VPN: MTU Considerations](https://cloud.google.com/vpn/docs/concepts/mtu-considerations#gateway_mtu_vs_system_mtu) for an example how to configure MTU correctly when using Google Cloud VPN between Rancher and cluster nodes. -### Resolved issues - -#### Overlay network broken when using Canal/Flannel due to missing node annotations - -| | | -|------------|------------| -| GitHub issue | [#13644](https://github.com/rancher/rancher/issues/13644) | -| Resolved in | v2.1.2 | - -To check if your cluster is affected, the following command will list nodes that are broken (this command requires `jq` to be installed): - -``` -kubectl get nodes -o json | jq '.items[].metadata | select(.annotations["flannel.alpha.coreos.com/public-ip"] == null or .annotations["flannel.alpha.coreos.com/kube-subnet-manager"] == null or .annotations["flannel.alpha.coreos.com/backend-type"] == null or .annotations["flannel.alpha.coreos.com/backend-data"] == null) | .name' -``` - -If there is no output, the cluster is not affected. 
diff --git a/versioned_docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md b/versioned_docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md index 18ed7d49f29..33a07e216c3 100644 --- a/versioned_docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md +++ b/versioned_docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md @@ -70,7 +70,7 @@ StatefulSets manage the deployment and scaling of Pods while maintaining a stick 1. Click **StatefulSet**. 1. In the **Volume Claim Templates** tab, click **Add Claim Template**. 1. Enter a name for the persistent volume. -1. In the **StorageClass* field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. +1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Launch**. @@ -84,7 +84,7 @@ To attach the PVC to an existing workload, -1. Go to the workload that will use storage provisioned with the StorageClass that you cared at click **⋮ > Edit Config**. +1. Go to the workload that will use storage provisioned with the StorageClass that you created, then click **⋮ > Edit Config**. 1. In the **Volume Claim Templates** section, click **Add Claim Template**. 1. Enter a persistent volume name. -1. In the **StorageClass* field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. +1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. 1.
In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Save**. diff --git a/versioned_docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md b/versioned_docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md index 7daaab8504b..6a95a95b3ae 100644 --- a/versioned_docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md +++ b/versioned_docs/version-2.6/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md @@ -304,7 +304,7 @@ cloud-provider|-|Cloud provider type| |max-node-provision-time|"15m"|Maximum time CA waits for node to be provisioned| |nodes|-|sets min,max size and other configuration data for a node group in a format accepted by cloud provider. Can be used multiple times. Format: `::`| |node-group-auto-discovery|-|One or more definition(s) of node group auto-discovery. A definition is expressed `:[[=]]`| -|estimator|-|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]| +|estimator|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]| |expander|"random"|Type of node group expander to be used in scale up. 
Available values: `["random","most-pods","least-waste","price","priority"]`| |ignore-daemonsets-utilization|false|Should CA ignore DaemonSet pods when calculating resource utilization for scaling down| |ignore-mirror-pods-utilization|false|Should CA ignore Mirror pods when calculating resource utilization for scaling down| diff --git a/versioned_docs/version-2.6/troubleshooting/other-troubleshooting-tips/networking.md b/versioned_docs/version-2.6/troubleshooting/other-troubleshooting-tips/networking.md index 9c95ff1d0e5..4d938886206 100644 --- a/versioned_docs/version-2.6/troubleshooting/other-troubleshooting-tips/networking.md +++ b/versioned_docs/version-2.6/troubleshooting/other-troubleshooting-tips/networking.md @@ -106,20 +106,3 @@ When the MTU is incorrectly configured (either on hosts running Rancher, nodes i * `read tcp: i/o timeout` See [Google Cloud VPN: MTU Considerations](https://cloud.google.com/vpn/docs/concepts/mtu-considerations#gateway_mtu_vs_system_mtu) for an example how to configure MTU correctly when using Google Cloud VPN between Rancher and cluster nodes. - -### Resolved issues - -#### Overlay network broken when using Canal/Flannel due to missing node annotations - -| | | -|------------|------------| -| GitHub issue | [#13644](https://github.com/rancher/rancher/issues/13644) | -| Resolved in | v2.1.2 | - -To check if your cluster is affected, the following command will list nodes that are broken (this command requires `jq` to be installed): - -``` -kubectl get nodes -o json | jq '.items[].metadata | select(.annotations["flannel.alpha.coreos.com/public-ip"] == null or .annotations["flannel.alpha.coreos.com/kube-subnet-manager"] == null or .annotations["flannel.alpha.coreos.com/backend-type"] == null or .annotations["flannel.alpha.coreos.com/backend-data"] == null) | .name' -``` - -If there is no output, the cluster is not affected. 
diff --git a/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md b/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md index 18ed7d49f29..33a07e216c3 100644 --- a/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md +++ b/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md @@ -70,7 +70,7 @@ StatefulSets manage the deployment and scaling of Pods while maintaining a stick 1. Click **StatefulSet**. 1. In the **Volume Claim Templates** tab, click **Add Claim Template**. 1. Enter a name for the persistent volume. -1. In the **StorageClass* field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. +1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Launch**. @@ -84,7 +84,7 @@ To attach the PVC to an existing workload, -1. Go to the workload that will use storage provisioned with the StorageClass that you cared at click **⋮ > Edit Config**. +1. Go to the workload that will use storage provisioned with the StorageClass that you created, then click **⋮ > Edit Config**. 1. In the **Volume Claim Templates** section, click **Add Claim Template**. 1. Enter a persistent volume name. -1. In the **StorageClass* field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. +1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. 1.
In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Save**. diff --git a/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md b/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md index 7daaab8504b..6a95a95b3ae 100644 --- a/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md +++ b/versioned_docs/version-2.7/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md @@ -304,7 +304,7 @@ cloud-provider|-|Cloud provider type| |max-node-provision-time|"15m"|Maximum time CA waits for node to be provisioned| |nodes|-|sets min,max size and other configuration data for a node group in a format accepted by cloud provider. Can be used multiple times. Format: `::`| |node-group-auto-discovery|-|One or more definition(s) of node group auto-discovery. A definition is expressed `:[[=]]`| -|estimator|-|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]| +|estimator|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]| |expander|"random"|Type of node group expander to be used in scale up. 
Available values: `["random","most-pods","least-waste","price","priority"]`| |ignore-daemonsets-utilization|false|Should CA ignore DaemonSet pods when calculating resource utilization for scaling down| |ignore-mirror-pods-utilization|false|Should CA ignore Mirror pods when calculating resource utilization for scaling down| diff --git a/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/networking.md b/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/networking.md index 9c95ff1d0e5..4d938886206 100644 --- a/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/networking.md +++ b/versioned_docs/version-2.7/troubleshooting/other-troubleshooting-tips/networking.md @@ -106,20 +106,3 @@ When the MTU is incorrectly configured (either on hosts running Rancher, nodes i * `read tcp: i/o timeout` See [Google Cloud VPN: MTU Considerations](https://cloud.google.com/vpn/docs/concepts/mtu-considerations#gateway_mtu_vs_system_mtu) for an example how to configure MTU correctly when using Google Cloud VPN between Rancher and cluster nodes. - -### Resolved issues - -#### Overlay network broken when using Canal/Flannel due to missing node annotations - -| | | -|------------|------------| -| GitHub issue | [#13644](https://github.com/rancher/rancher/issues/13644) | -| Resolved in | v2.1.2 | - -To check if your cluster is affected, the following command will list nodes that are broken (this command requires `jq` to be installed): - -``` -kubectl get nodes -o json | jq '.items[].metadata | select(.annotations["flannel.alpha.coreos.com/public-ip"] == null or .annotations["flannel.alpha.coreos.com/kube-subnet-manager"] == null or .annotations["flannel.alpha.coreos.com/backend-type"] == null or .annotations["flannel.alpha.coreos.com/backend-data"] == null) | .name' -``` - -If there is no output, the cluster is not affected. 
diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md index c40d3000816..a0f3271be5b 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/manage-persistent-storage/dynamically-provision-new-storage.md @@ -70,7 +70,7 @@ StatefulSets manage the deployment and scaling of Pods while maintaining a stick 1. Click **StatefulSet**. 1. In the **Volume Claim Templates** tab, click **Add Claim Template**. 1. Enter a name for the persistent volume. -1. In the **StorageClass* field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. +1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Launch**. @@ -84,7 +84,7 @@ To attach the PVC to an existing workload, 1. Go to the workload that will use storage provisioned with the StorageClass that you cared at click **⋮ > Edit Config**. 1. In the **Volume Claim Templates** section, click **Add Claim Template**. 1. Enter a persistent volume name. -1. In the **StorageClass* field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. +1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet. 1. 
In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Save**. diff --git a/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md b/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md index 7daaab8504b..6a95a95b3ae 100644 --- a/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md +++ b/versioned_docs/version-2.8/how-to-guides/new-user-guides/manage-clusters/install-cluster-autoscaler/use-aws-ec2-auto-scaling-groups.md @@ -304,7 +304,7 @@ cloud-provider|-|Cloud provider type| |max-node-provision-time|"15m"|Maximum time CA waits for node to be provisioned| |nodes|-|sets min,max size and other configuration data for a node group in a format accepted by cloud provider. Can be used multiple times. Format: `::`| |node-group-auto-discovery|-|One or more definition(s) of node group auto-discovery. A definition is expressed `:[[=]]`| -|estimator|-|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]| +|estimator|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]| |expander|"random"|Type of node group expander to be used in scale up. 
Available values: `["random","most-pods","least-waste","price","priority"]`| |ignore-daemonsets-utilization|false|Should CA ignore DaemonSet pods when calculating resource utilization for scaling down| |ignore-mirror-pods-utilization|false|Should CA ignore Mirror pods when calculating resource utilization for scaling down| diff --git a/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/networking.md b/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/networking.md index 9c95ff1d0e5..4d938886206 100644 --- a/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/networking.md +++ b/versioned_docs/version-2.8/troubleshooting/other-troubleshooting-tips/networking.md @@ -106,20 +106,3 @@ When the MTU is incorrectly configured (either on hosts running Rancher, nodes i * `read tcp: i/o timeout` See [Google Cloud VPN: MTU Considerations](https://cloud.google.com/vpn/docs/concepts/mtu-considerations#gateway_mtu_vs_system_mtu) for an example how to configure MTU correctly when using Google Cloud VPN between Rancher and cluster nodes. - -### Resolved issues - -#### Overlay network broken when using Canal/Flannel due to missing node annotations - -| | | -|------------|------------| -| GitHub issue | [#13644](https://github.com/rancher/rancher/issues/13644) | -| Resolved in | v2.1.2 | - -To check if your cluster is affected, the following command will list nodes that are broken (this command requires `jq` to be installed): - -``` -kubectl get nodes -o json | jq '.items[].metadata | select(.annotations["flannel.alpha.coreos.com/public-ip"] == null or .annotations["flannel.alpha.coreos.com/kube-subnet-manager"] == null or .annotations["flannel.alpha.coreos.com/backend-type"] == null or .annotations["flannel.alpha.coreos.com/backend-data"] == null) | .name' -``` - -If there is no output, the cluster is not affected.