Port to other versions

This commit is contained in:
Billy Tat
2026-05-01 15:36:45 -07:00
parent d2815b8707
commit b15041642a
5 changed files with 40 additions and 0 deletions
@@ -15,6 +15,14 @@ Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG
Double check that all the [required ports](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md#networking-requirements) are opened in your (host) firewall. Note that the overlay network uses UDP, unlike the other required ports, which use TCP.

## Check if your downstream node can communicate with Rancher Manager

Rancher components with HTTP endpoints generally expose a `ping` liveness probe, which you can use to test connectivity. Replace `$RANCHER_URL` as appropriate and run the following from a node to check that it can reach Rancher Manager's servers in the `local` cluster. If successful, the command returns `pong`.
```
curl -k https://$RANCHER_URL/ping
```
## Check if Overlay Network is Functioning Correctly

The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod.
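One way to sketch a cross-node overlay check is to run one pod per node and ping each pod's overlay IP from a pod on a different node. The DaemonSet name, labels, and `busybox` image below are illustrative choices, not prescribed by this guide:

```yaml
# Illustrative overlay-network test: schedules one long-sleeping pod on every node.
# Name, labels, and image are assumptions; adjust to your environment.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: overlaytest
spec:
  selector:
    matchLabels:
      name: overlaytest
  template:
    metadata:
      labels:
        name: overlaytest
    spec:
      tolerations:
        # Run on all nodes, including tainted control plane nodes.
        - operator: Exists
      containers:
        - name: overlaytest
          image: busybox:1.36
          command: ["sh", "-c", "sleep 3600"]
```

After applying the manifest, list the pod IPs with `kubectl get pods -l name=overlaytest -o wide`, then run `kubectl exec <pod-on-NODE_1> -- ping -c 2 <pod-IP-on-NODE_2>`. If pings between pods on the same node succeed but pings across nodes fail, the overlay network is the likely culprit.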