Merge pull request #1309 from btat/misc-fixes

Misc fixes
Author: Billy Tat
Date: 2024-05-30 09:10:54 -07:00
Committed by: GitHub
27 changed files with 26 additions and 178 deletions
@@ -70,7 +70,7 @@ StatefulSets manage the deployment and scaling of Pods while maintaining a stick
1. Click **StatefulSet**.
1. In the **Volume Claim Templates** tab, click **Add Claim Template**.
1. Enter a name for the persistent volume.
1. In the **StorageClass* field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet.
1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet.
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Launch**.
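The UI steps above correspond to a `volumeClaimTemplates` stanza on the StatefulSet manifest. As a rough, illustrative sketch (the names `example`, `data`, and `my-storage-class` are placeholders, not values from the docs):

```yaml
# Hypothetical StatefulSet showing where the UI fields land in the manifest.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
spec:
  serviceName: example
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: data            # matches the claim template name below
              mountPath: /var/data  # the "Mount Point" field
  volumeClaimTemplates:
    - metadata:
        name: data                  # the persistent volume name
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: my-storage-class  # the "StorageClass" field
        resources:
          requests:
            storage: 1Gi
```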
@@ -84,7 +84,7 @@ To attach the PVC to an existing workload,
1. Go to the workload that will use storage provisioned with the StorageClass that you created, then click **⋮ > Edit Config**.
1. In the **Volume Claim Templates** section, click **Add Claim Template**.
1. Enter a persistent volume name.
1. In the **StorageClass* field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet.
1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet.
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Save**.
@@ -304,7 +304,7 @@ cloud-provider|-|Cloud provider type|
|max-node-provision-time|"15m"|Maximum time CA waits for node to be provisioned|
|nodes|-|sets min,max size and other configuration data for a node group in a format accepted by cloud provider. Can be used multiple times. Format: `<min>:<max>:<other...>`|
|node-group-auto-discovery|-|One or more definition(s) of node group auto-discovery. A definition is expressed `<name of discoverer>:[<key>[=<value>]]`|
|estimator|-|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]|
|estimator|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]|
|expander|"random"|Type of node group expander to be used in scale up. Available values: `["random","most-pods","least-waste","price","priority"]`|
|ignore-daemonsets-utilization|false|Should CA ignore DaemonSet pods when calculating resource utilization for scaling down|
|ignore-mirror-pods-utilization|false|Should CA ignore Mirror pods when calculating resource utilization for scaling down|
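The `nodes` flag's `<min>:<max>:<other...>` format can be unpacked as in the following illustrative sketch (the helper name and the sample group name `my-asg` are hypothetical, not from the docs):

```python
# Illustrative parser for the Cluster Autoscaler "nodes" flag format
# <min>:<max>:<other...>, e.g. "1:10:my-asg". Helper name is hypothetical.

def parse_nodes_flag(value: str) -> dict:
    """Split a nodes flag value into min size, max size, and the remainder."""
    min_size, max_size, other = value.split(":", 2)
    return {"min": int(min_size), "max": int(max_size), "other": other}

print(parse_nodes_flag("1:10:my-asg"))
# → {'min': 1, 'max': 10, 'other': 'my-asg'}
```

Splitting with a limit of 2 keeps any extra colons inside the `<other...>` part, since only the first two fields are numeric.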
@@ -107,20 +107,3 @@ When the MTU is incorrectly configured (either on hosts running Rancher, nodes i
* `read tcp: i/o timeout`
See [Google Cloud VPN: MTU Considerations](https://cloud.google.com/vpn/docs/concepts/mtu-considerations#gateway_mtu_vs_system_mtu) for an example of how to configure MTU correctly when using Google Cloud VPN between Rancher and cluster nodes.
### Resolved issues
#### Overlay network broken when using Canal/Flannel due to missing node annotations
| | |
|------------|------------|
| GitHub issue | [#13644](https://github.com/rancher/rancher/issues/13644) |
| Resolved in | v2.1.2 |
To check if your cluster is affected, the following command will list nodes that are broken (this command requires `jq` to be installed):
```
kubectl get nodes -o json | jq '.items[].metadata | select(.annotations["flannel.alpha.coreos.com/public-ip"] == null or .annotations["flannel.alpha.coreos.com/kube-subnet-manager"] == null or .annotations["flannel.alpha.coreos.com/backend-type"] == null or .annotations["flannel.alpha.coreos.com/backend-data"] == null) | .name'
```
If there is no output, the cluster is not affected.
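The `jq` filter above flags a node when any of four flannel annotations is missing. The same check can be sketched in Python against the JSON that `kubectl get nodes -o json` returns (the sample node data below is illustrative only):

```python
# Sketch of the same check in Python: list nodes missing any of the
# flannel annotations that the jq filter above inspects.
import json

REQUIRED = [
    "flannel.alpha.coreos.com/public-ip",
    "flannel.alpha.coreos.com/kube-subnet-manager",
    "flannel.alpha.coreos.com/backend-type",
    "flannel.alpha.coreos.com/backend-data",
]

def broken_nodes(nodes_json: str) -> list:
    """Return names of nodes missing any required flannel annotation."""
    items = json.loads(nodes_json)["items"]
    return [
        node["metadata"]["name"]
        for node in items
        if any(node["metadata"].get("annotations", {}).get(key) is None
               for key in REQUIRED)
    ]

# Hypothetical sample standing in for `kubectl get nodes -o json` output.
sample = json.dumps({"items": [
    {"metadata": {"name": "node-ok", "annotations": {k: "x" for k in REQUIRED}}},
    {"metadata": {"name": "node-broken", "annotations": {}}},
]})
print(broken_nodes(sample))
# → ['node-broken']
```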
@@ -66,7 +66,7 @@ StatefulSets manage the deployment and scaling of Pods while maintaining a sticky
1. Click **StatefulSet**.
1. In the **Volume Claim Templates** tab, click **Add Claim Template**.
1. Enter a name for the persistent volume.
1. In the *StorageClass*\* field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet.
1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet.
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Launch**.
@@ -80,7 +80,7 @@ StatefulSets manage the deployment and scaling of Pods while maintaining a sticky
1. Click **⋮ > Edit Config** to go to the workload that uses storage provisioned by the StorageClass.
1. In the **Volume Claim Templates** section, click **Add Claim Template**.
1. Enter a persistent volume name.
1. In the *StorageClass*\* field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet.
1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet.
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Save**.
@@ -300,7 +300,7 @@ title: Using the Cluster Autoscaler with AWS EC2 Auto Scaling Groups
| max-node-provision-time | "15m" | Maximum time CA waits for node to be provisioned |
| nodes | - | Sets min, max size and other configuration data for a node group in a format accepted by the cloud provider. Can be used multiple times. Format: `<min>:<max>:<other...>`. |
| node-group-auto-discovery | - | One or more definition(s) of node group auto-discovery. A definition is expressed `<name of discoverer>:[<key>[=<value>]]` |
| estimator | - | "binpacking" | Type of resource estimator to be used in scale up. Available values: ["binpacking"] |
| estimator | "binpacking" | Type of resource estimator to be used in scale up. Available values: ["binpacking"] |
| expander | "random" | Type of node group expander to be used in scale up. Available values: `["random","most-pods","least-waste","price","priority"]` |
| ignore-daemonsets-utilization | false | Should CA ignore DaemonSet pods when calculating resource utilization for scaling down |
| ignore-mirror-pods-utilization | false | Should CA ignore Mirror pods when calculating resource utilization for scaling down |
@@ -102,20 +102,3 @@ title: Networking
* `read tcp: i/o timeout`
See [Google Cloud VPN: MTU Considerations](https://cloud.google.com/vpn/docs/concepts/mtu-considerations#gateway_mtu_vs_system_mtu) for an example of how to configure MTU correctly when using Google Cloud VPN between Rancher and cluster nodes.
### Resolved issues
#### Overlay network broken when using Canal/Flannel due to missing node annotations
| | |
|------------|------------|
| GitHub issue | [#13644](https://github.com/rancher/rancher/issues/13644) |
| Resolved in | v2.1.2 |
To check whether your cluster is affected, run the following command to list broken nodes (this command requires `jq` to be installed):
```
kubectl get nodes -o json | jq '.items[].metadata | select(.annotations["flannel.alpha.coreos.com/public-ip"] == null or .annotations["flannel.alpha.coreos.com/kube-subnet-manager"] == null or .annotations["flannel.alpha.coreos.com/backend-type"] == null or .annotations["flannel.alpha.coreos.com/backend-data"] == null) | .name'
```
If there is no output, the cluster is not affected.